Feb 28 08:48:13 localhost kernel: Linux version 5.14.0-686.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026
Feb 28 08:48:13 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 28 08:48:13 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 08:48:13 localhost kernel: BIOS-provided physical RAM map:
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 28 08:48:13 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 28 08:48:13 localhost kernel: NX (Execute Disable) protection: active
Feb 28 08:48:13 localhost kernel: APIC: Static calls initialized
Feb 28 08:48:13 localhost kernel: SMBIOS 2.8 present.
Feb 28 08:48:13 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 28 08:48:13 localhost kernel: Hypervisor detected: KVM
Feb 28 08:48:13 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 28 08:48:13 localhost kernel: kvm-clock: using sched offset of 9303144279 cycles
Feb 28 08:48:13 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 28 08:48:13 localhost kernel: tsc: Detected 2800.000 MHz processor
Feb 28 08:48:13 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 28 08:48:13 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 28 08:48:13 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 28 08:48:13 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 28 08:48:13 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 28 08:48:13 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 28 08:48:13 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 28 08:48:13 localhost kernel: Using GB pages for direct mapping
Feb 28 08:48:13 localhost kernel: RAMDISK: [mem 0x1b6ca000-0x29b5cfff]
Feb 28 08:48:13 localhost kernel: ACPI: Early table checksum verification disabled
Feb 28 08:48:13 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 28 08:48:13 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 08:48:13 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 08:48:13 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 08:48:13 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 28 08:48:13 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 08:48:13 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 28 08:48:13 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 28 08:48:13 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 28 08:48:13 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 28 08:48:13 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 28 08:48:13 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 28 08:48:13 localhost kernel: No NUMA configuration found
Feb 28 08:48:13 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 28 08:48:13 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 28 08:48:13 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 28 08:48:13 localhost kernel: Zone ranges:
Feb 28 08:48:13 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 28 08:48:13 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 28 08:48:13 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 28 08:48:13 localhost kernel:   Device   empty
Feb 28 08:48:13 localhost kernel: Movable zone start for each node
Feb 28 08:48:13 localhost kernel: Early memory node ranges
Feb 28 08:48:13 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 28 08:48:13 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 28 08:48:13 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 28 08:48:13 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 28 08:48:13 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 28 08:48:13 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 28 08:48:13 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 28 08:48:13 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 28 08:48:13 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 28 08:48:13 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 28 08:48:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 28 08:48:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 28 08:48:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 28 08:48:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 28 08:48:13 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 28 08:48:13 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 28 08:48:13 localhost kernel: TSC deadline timer available
Feb 28 08:48:13 localhost kernel: CPU topo: Max. logical packages:   8
Feb 28 08:48:13 localhost kernel: CPU topo: Max. logical dies:       8
Feb 28 08:48:13 localhost kernel: CPU topo: Max. dies per package:   1
Feb 28 08:48:13 localhost kernel: CPU topo: Max. threads per core:   1
Feb 28 08:48:13 localhost kernel: CPU topo: Num. cores per package:     1
Feb 28 08:48:13 localhost kernel: CPU topo: Num. threads per package:   1
Feb 28 08:48:13 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 28 08:48:13 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 28 08:48:13 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 28 08:48:13 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 28 08:48:13 localhost kernel: Booting paravirtualized kernel on KVM
Feb 28 08:48:13 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 28 08:48:13 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 28 08:48:13 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 28 08:48:13 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Feb 28 08:48:13 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 28 08:48:13 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 28 08:48:13 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 08:48:13 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64", will be passed to user space.
Feb 28 08:48:13 localhost kernel: random: crng init done
Feb 28 08:48:13 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 28 08:48:13 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 28 08:48:13 localhost kernel: Fallback order for Node 0: 0 
Feb 28 08:48:13 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 28 08:48:13 localhost kernel: Policy zone: Normal
Feb 28 08:48:13 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 28 08:48:13 localhost kernel: software IO TLB: area num 8.
Feb 28 08:48:13 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 28 08:48:13 localhost kernel: ftrace: allocating 49605 entries in 194 pages
Feb 28 08:48:13 localhost kernel: ftrace: allocated 194 pages with 3 groups
Feb 28 08:48:13 localhost kernel: Dynamic Preempt: voluntary
Feb 28 08:48:13 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 28 08:48:13 localhost kernel: rcu:         RCU event tracing is enabled.
Feb 28 08:48:13 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 28 08:48:13 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 28 08:48:13 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 28 08:48:13 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 28 08:48:13 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 28 08:48:13 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 28 08:48:13 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 08:48:13 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 08:48:13 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 28 08:48:13 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 28 08:48:13 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 28 08:48:13 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 28 08:48:13 localhost kernel: Console: colour VGA+ 80x25
Feb 28 08:48:13 localhost kernel: printk: console [ttyS0] enabled
Feb 28 08:48:13 localhost kernel: ACPI: Core revision 20230331
Feb 28 08:48:13 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 28 08:48:13 localhost kernel: x2apic enabled
Feb 28 08:48:13 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Feb 28 08:48:13 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 28 08:48:13 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 28 08:48:13 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 28 08:48:13 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 28 08:48:13 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 28 08:48:13 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 28 08:48:13 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 28 08:48:13 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 28 08:48:13 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 28 08:48:13 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 28 08:48:13 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 28 08:48:13 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 28 08:48:13 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 28 08:48:13 localhost kernel: active return thunk: retbleed_return_thunk
Feb 28 08:48:13 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 28 08:48:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 28 08:48:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 28 08:48:13 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 28 08:48:13 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 28 08:48:13 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 28 08:48:13 localhost kernel: Freeing SMP alternatives memory: 40K
Feb 28 08:48:13 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 28 08:48:13 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 28 08:48:13 localhost kernel: landlock: Up and running.
Feb 28 08:48:13 localhost kernel: Yama: becoming mindful.
Feb 28 08:48:13 localhost kernel: SELinux:  Initializing.
Feb 28 08:48:13 localhost kernel: LSM support for eBPF active
Feb 28 08:48:13 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 28 08:48:13 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 28 08:48:13 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 28 08:48:13 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 28 08:48:13 localhost kernel: ... version:                0
Feb 28 08:48:13 localhost kernel: ... bit width:              48
Feb 28 08:48:13 localhost kernel: ... generic registers:      6
Feb 28 08:48:13 localhost kernel: ... value mask:             0000ffffffffffff
Feb 28 08:48:13 localhost kernel: ... max period:             00007fffffffffff
Feb 28 08:48:13 localhost kernel: ... fixed-purpose events:   0
Feb 28 08:48:13 localhost kernel: ... event mask:             000000000000003f
Feb 28 08:48:13 localhost kernel: signal: max sigframe size: 1776
Feb 28 08:48:13 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 28 08:48:13 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 28 08:48:13 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 28 08:48:13 localhost kernel: smpboot: x86: Booting SMP configuration:
Feb 28 08:48:13 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 28 08:48:13 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 28 08:48:13 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 28 08:48:13 localhost kernel: node 0 deferred pages initialised in 9ms
Feb 28 08:48:13 localhost kernel: Memory: 7617716K/8388068K available (16384K kernel code, 5797K rwdata, 13956K rodata, 4204K init, 7172K bss, 764460K reserved, 0K cma-reserved)
Feb 28 08:48:13 localhost kernel: devtmpfs: initialized
Feb 28 08:48:13 localhost kernel: x86/mm: Memory block size: 128MB
Feb 28 08:48:13 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 28 08:48:13 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 28 08:48:13 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 28 08:48:13 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 28 08:48:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 28 08:48:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 28 08:48:13 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 28 08:48:13 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 28 08:48:13 localhost kernel: audit: type=2000 audit(1772268492.122:1): state=initialized audit_enabled=0 res=1
Feb 28 08:48:13 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 28 08:48:13 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 28 08:48:13 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 28 08:48:13 localhost kernel: cpuidle: using governor menu
Feb 28 08:48:13 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 28 08:48:13 localhost kernel: PCI: Using configuration type 1 for base access
Feb 28 08:48:13 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 28 08:48:13 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 28 08:48:13 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 28 08:48:13 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 28 08:48:13 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 28 08:48:13 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 28 08:48:13 localhost kernel: Demotion targets for Node 0: null
Feb 28 08:48:13 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 28 08:48:13 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 28 08:48:13 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 28 08:48:13 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 28 08:48:13 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 28 08:48:13 localhost kernel: ACPI: Interpreter enabled
Feb 28 08:48:13 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 28 08:48:13 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 28 08:48:13 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 28 08:48:13 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 28 08:48:13 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 28 08:48:13 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 28 08:48:13 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [3] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [4] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [5] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [6] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [7] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [8] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [9] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [10] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [11] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [12] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [13] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [14] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [15] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [16] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [17] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [18] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [19] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [20] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [21] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [22] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [23] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [24] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [25] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [26] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [27] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [28] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [29] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [30] registered
Feb 28 08:48:13 localhost kernel: acpiphp: Slot [31] registered
Feb 28 08:48:13 localhost kernel: PCI host bridge to bus 0000:00
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 28 08:48:13 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 28 08:48:13 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 28 08:48:13 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 28 08:48:13 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 28 08:48:13 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 28 08:48:13 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 28 08:48:13 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 28 08:48:13 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 28 08:48:13 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 28 08:48:13 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 28 08:48:13 localhost kernel: iommu: Default domain type: Translated
Feb 28 08:48:13 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 28 08:48:13 localhost kernel: SCSI subsystem initialized
Feb 28 08:48:13 localhost kernel: ACPI: bus type USB registered
Feb 28 08:48:13 localhost kernel: usbcore: registered new interface driver usbfs
Feb 28 08:48:13 localhost kernel: usbcore: registered new interface driver hub
Feb 28 08:48:13 localhost kernel: usbcore: registered new device driver usb
Feb 28 08:48:13 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 28 08:48:13 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 28 08:48:13 localhost kernel: PTP clock support registered
Feb 28 08:48:13 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 28 08:48:13 localhost kernel: NetLabel: Initializing
Feb 28 08:48:13 localhost kernel: NetLabel:  domain hash size = 128
Feb 28 08:48:13 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 28 08:48:13 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 28 08:48:13 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 28 08:48:13 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 28 08:48:13 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 28 08:48:13 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 28 08:48:13 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 28 08:48:13 localhost kernel: vgaarb: loaded
Feb 28 08:48:13 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 28 08:48:13 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 28 08:48:13 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 28 08:48:13 localhost kernel: pnp: PnP ACPI init
Feb 28 08:48:13 localhost kernel: pnp 00:03: [dma 2]
Feb 28 08:48:13 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 28 08:48:13 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 28 08:48:13 localhost kernel: NET: Registered PF_INET protocol family
Feb 28 08:48:13 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 28 08:48:13 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 28 08:48:13 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 28 08:48:13 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 28 08:48:13 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 28 08:48:13 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 28 08:48:13 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 28 08:48:13 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 28 08:48:13 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 28 08:48:13 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 28 08:48:13 localhost kernel: NET: Registered PF_XDP protocol family
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 28 08:48:13 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 28 08:48:13 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 28 08:48:13 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 28 08:48:13 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 35311 usecs
Feb 28 08:48:13 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 28 08:48:13 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 28 08:48:13 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 28 08:48:13 localhost kernel: ACPI: bus type thunderbolt registered
Feb 28 08:48:13 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 28 08:48:13 localhost kernel: Initialise system trusted keyrings
Feb 28 08:48:13 localhost kernel: Key type blacklist registered
Feb 28 08:48:13 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 28 08:48:13 localhost kernel: zbud: loaded
Feb 28 08:48:13 localhost kernel: integrity: Platform Keyring initialized
Feb 28 08:48:13 localhost kernel: integrity: Machine keyring initialized
Feb 28 08:48:13 localhost kernel: Freeing initrd memory: 234060K
Feb 28 08:48:13 localhost kernel: NET: Registered PF_ALG protocol family
Feb 28 08:48:13 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 28 08:48:13 localhost kernel: Key type asymmetric registered
Feb 28 08:48:13 localhost kernel: Asymmetric key parser 'x509' registered
Feb 28 08:48:13 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 28 08:48:13 localhost kernel: io scheduler mq-deadline registered
Feb 28 08:48:13 localhost kernel: io scheduler kyber registered
Feb 28 08:48:13 localhost kernel: io scheduler bfq registered
Feb 28 08:48:13 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 28 08:48:13 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 28 08:48:13 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 28 08:48:13 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 28 08:48:13 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 28 08:48:13 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 28 08:48:13 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 28 08:48:13 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 28 08:48:13 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 28 08:48:13 localhost kernel: Non-volatile memory driver v1.3
Feb 28 08:48:13 localhost kernel: rdac: device handler registered
Feb 28 08:48:13 localhost kernel: hp_sw: device handler registered
Feb 28 08:48:13 localhost kernel: emc: device handler registered
Feb 28 08:48:13 localhost kernel: alua: device handler registered
Feb 28 08:48:13 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 28 08:48:13 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 28 08:48:13 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 28 08:48:13 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 28 08:48:13 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 28 08:48:13 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 28 08:48:13 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 28 08:48:13 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-686.el9.x86_64 uhci_hcd
Feb 28 08:48:13 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 28 08:48:13 localhost kernel: hub 1-0:1.0: USB hub found
Feb 28 08:48:13 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 28 08:48:13 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 28 08:48:13 localhost kernel: usbserial: USB Serial support registered for generic
Feb 28 08:48:13 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 28 08:48:13 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 28 08:48:13 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 28 08:48:13 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 28 08:48:13 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 28 08:48:13 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 28 08:48:13 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-28T08:48:12 UTC (1772268492)
Feb 28 08:48:13 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 28 08:48:13 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 28 08:48:13 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 28 08:48:13 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 28 08:48:13 localhost kernel: usbcore: registered new interface driver usbhid
Feb 28 08:48:13 localhost kernel: usbhid: USB HID core driver
Feb 28 08:48:13 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 28 08:48:13 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 28 08:48:13 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 28 08:48:13 localhost kernel: Initializing XFRM netlink socket
Feb 28 08:48:13 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 28 08:48:13 localhost kernel: Segment Routing with IPv6
Feb 28 08:48:13 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 28 08:48:13 localhost kernel: mpls_gso: MPLS GSO support
Feb 28 08:48:13 localhost kernel: IPI shorthand broadcast: enabled
Feb 28 08:48:13 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 28 08:48:13 localhost kernel: AES CTR mode by8 optimization enabled
Feb 28 08:48:13 localhost kernel: sched_clock: Marking stable (1064002120, 143548630)->(1313523170, -105972420)
Feb 28 08:48:13 localhost kernel: registered taskstats version 1
Feb 28 08:48:13 localhost kernel: Loading compiled-in X.509 certificates
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 28 08:48:13 localhost kernel: Demotion targets for Node 0: null
Feb 28 08:48:13 localhost kernel: page_owner is disabled
Feb 28 08:48:13 localhost kernel: Key type .fscrypt registered
Feb 28 08:48:13 localhost kernel: Key type fscrypt-provisioning registered
Feb 28 08:48:13 localhost kernel: Key type big_key registered
Feb 28 08:48:13 localhost kernel: Key type encrypted registered
Feb 28 08:48:13 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 28 08:48:13 localhost kernel: Loading compiled-in module X.509 certificates
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 28 08:48:13 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 28 08:48:13 localhost kernel: ima: No architecture policies found
Feb 28 08:48:13 localhost kernel: evm: Initialising EVM extended attributes:
Feb 28 08:48:13 localhost kernel: evm: security.selinux
Feb 28 08:48:13 localhost kernel: evm: security.SMACK64 (disabled)
Feb 28 08:48:13 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 28 08:48:13 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 28 08:48:13 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 28 08:48:13 localhost kernel: evm: security.apparmor (disabled)
Feb 28 08:48:13 localhost kernel: evm: security.ima
Feb 28 08:48:13 localhost kernel: evm: security.capability
Feb 28 08:48:13 localhost kernel: evm: HMAC attrs: 0x1
Feb 28 08:48:13 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 28 08:48:13 localhost kernel: Running certificate verification RSA selftest
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 28 08:48:13 localhost kernel: Running certificate verification ECDSA selftest
Feb 28 08:48:13 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 28 08:48:13 localhost kernel: clk: Disabling unused clocks
Feb 28 08:48:13 localhost kernel: Freeing unused decrypted memory: 2028K
Feb 28 08:48:13 localhost kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 28 08:48:13 localhost kernel: Write protecting the kernel read-only data: 30720k
Feb 28 08:48:13 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 380K
Feb 28 08:48:13 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 28 08:48:13 localhost kernel: Run /init as init process
Feb 28 08:48:13 localhost kernel:   with arguments:
Feb 28 08:48:13 localhost kernel:     /init
Feb 28 08:48:13 localhost kernel:   with environment:
Feb 28 08:48:13 localhost kernel:     HOME=/
Feb 28 08:48:13 localhost kernel:     TERM=linux
Feb 28 08:48:13 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64
Feb 28 08:48:13 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 28 08:48:13 localhost systemd[1]: Detected virtualization kvm.
Feb 28 08:48:13 localhost systemd[1]: Detected architecture x86-64.
Feb 28 08:48:13 localhost systemd[1]: Running in initrd.
Feb 28 08:48:13 localhost systemd[1]: No hostname configured, using default hostname.
Feb 28 08:48:13 localhost systemd[1]: Hostname set to <localhost>.
Feb 28 08:48:13 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 28 08:48:13 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 28 08:48:13 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 28 08:48:13 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 28 08:48:13 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 28 08:48:13 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 28 08:48:13 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 28 08:48:13 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 28 08:48:13 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 28 08:48:13 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 28 08:48:13 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 28 08:48:13 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 28 08:48:13 localhost systemd[1]: Reached target Local File Systems.
Feb 28 08:48:13 localhost systemd[1]: Reached target Path Units.
Feb 28 08:48:13 localhost systemd[1]: Reached target Slice Units.
Feb 28 08:48:13 localhost systemd[1]: Reached target Swaps.
Feb 28 08:48:13 localhost systemd[1]: Reached target Timer Units.
Feb 28 08:48:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 28 08:48:13 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 28 08:48:13 localhost systemd[1]: Listening on Journal Socket.
Feb 28 08:48:13 localhost systemd[1]: Listening on udev Control Socket.
Feb 28 08:48:13 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 28 08:48:13 localhost systemd[1]: Reached target Socket Units.
Feb 28 08:48:13 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 28 08:48:13 localhost systemd[1]: Starting Journal Service...
Feb 28 08:48:13 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 28 08:48:13 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 28 08:48:13 localhost systemd[1]: Starting Create System Users...
Feb 28 08:48:13 localhost systemd[1]: Starting Setup Virtual Console...
Feb 28 08:48:13 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 28 08:48:13 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 28 08:48:13 localhost systemd[1]: Finished Create System Users.
Feb 28 08:48:13 localhost systemd-journald[307]: Journal started
Feb 28 08:48:13 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/df54d9c918244fc9835d7a3299a4aad4) is 8.0M, max 153.6M, 145.6M free.
Feb 28 08:48:13 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Feb 28 08:48:13 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Feb 28 08:48:13 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 28 08:48:13 localhost systemd[1]: Started Journal Service.
Feb 28 08:48:13 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 28 08:48:13 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 28 08:48:13 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 28 08:48:13 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 28 08:48:13 localhost systemd[1]: Finished Setup Virtual Console.
Feb 28 08:48:13 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 28 08:48:13 localhost systemd[1]: Starting dracut cmdline hook...
Feb 28 08:48:13 localhost dracut-cmdline[326]: dracut-9 dracut-057-110.git20260130.el9
Feb 28 08:48:13 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 28 08:48:13 localhost systemd[1]: Finished dracut cmdline hook.
Feb 28 08:48:13 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 28 08:48:13 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 28 08:48:13 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 28 08:48:13 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 28 08:48:13 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 28 08:48:13 localhost kernel: RPC: Registered udp transport module.
Feb 28 08:48:13 localhost kernel: RPC: Registered tcp transport module.
Feb 28 08:48:13 localhost kernel: RPC: Registered tcp-with-tls transport module.
Feb 28 08:48:13 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 28 08:48:13 localhost rpc.statd[441]: Version 2.5.4 starting
Feb 28 08:48:13 localhost rpc.statd[441]: Initializing NSM state
Feb 28 08:48:13 localhost rpc.idmapd[446]: Setting log level to 0
Feb 28 08:48:13 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 28 08:48:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 28 08:48:13 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Feb 28 08:48:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 28 08:48:13 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 28 08:48:13 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 28 08:48:13 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 28 08:48:13 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 28 08:48:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 28 08:48:13 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 28 08:48:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 28 08:48:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 28 08:48:13 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 28 08:48:13 localhost systemd[1]: Reached target Network.
Feb 28 08:48:13 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 28 08:48:13 localhost systemd[1]: Starting dracut initqueue hook...
Feb 28 08:48:13 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 28 08:48:13 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 28 08:48:13 localhost kernel:  vda: vda1
Feb 28 08:48:13 localhost systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 08:48:13 localhost systemd[1]: Found device /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 28 08:48:13 localhost kernel: ACPI: bus type drm_connector registered
Feb 28 08:48:13 localhost kernel: libata version 3.00 loaded.
Feb 28 08:48:13 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 28 08:48:13 localhost kernel: scsi host0: ata_piix
Feb 28 08:48:13 localhost kernel: scsi host1: ata_piix
Feb 28 08:48:13 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 28 08:48:13 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 28 08:48:13 localhost systemd[1]: Reached target Initrd Root Device.
Feb 28 08:48:14 localhost kernel: ata1: found unknown device (class 0)
Feb 28 08:48:14 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 28 08:48:14 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 28 08:48:14 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 28 08:48:14 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 28 08:48:14 localhost systemd[1]: Reached target System Initialization.
Feb 28 08:48:14 localhost systemd[1]: Reached target Basic System.
Feb 28 08:48:14 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 28 08:48:14 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 28 08:48:14 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 28 08:48:14 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 28 08:48:14 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 28 08:48:14 localhost kernel: Console: switching to colour dummy device 80x25
Feb 28 08:48:14 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 28 08:48:14 localhost kernel: [drm] features: -context_init
Feb 28 08:48:14 localhost kernel: [drm] number of scanouts: 1
Feb 28 08:48:14 localhost kernel: [drm] number of cap sets: 0
Feb 28 08:48:14 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 28 08:48:14 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 28 08:48:14 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 28 08:48:14 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 28 08:48:14 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 28 08:48:14 localhost systemd[1]: Finished dracut initqueue hook.
Feb 28 08:48:14 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 28 08:48:14 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 28 08:48:14 localhost systemd[1]: Reached target Remote File Systems.
Feb 28 08:48:14 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 28 08:48:14 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 28 08:48:14 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b...
Feb 28 08:48:14 localhost systemd-fsck[567]: /usr/sbin/fsck.xfs: XFS file system.
Feb 28 08:48:14 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 28 08:48:14 localhost systemd[1]: Mounting /sysroot...
Feb 28 08:48:14 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 28 08:48:14 localhost kernel: XFS (vda1): Mounting V5 Filesystem 37391a25-080d-4723-8b0c-cb88a559875b
Feb 28 08:48:14 localhost kernel: XFS (vda1): Ending clean mount
Feb 28 08:48:14 localhost systemd[1]: Mounted /sysroot.
Feb 28 08:48:14 localhost systemd[1]: Reached target Initrd Root File System.
Feb 28 08:48:14 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 28 08:48:14 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 28 08:48:14 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 28 08:48:14 localhost systemd[1]: Reached target Initrd File Systems.
Feb 28 08:48:14 localhost systemd[1]: Reached target Initrd Default Target.
Feb 28 08:48:14 localhost systemd[1]: Starting dracut mount hook...
Feb 28 08:48:14 localhost systemd[1]: Finished dracut mount hook.
Feb 28 08:48:14 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 28 08:48:14 localhost rpc.idmapd[446]: exiting on signal 15
Feb 28 08:48:14 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 28 08:48:14 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 28 08:48:15 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 28 08:48:15 localhost systemd[1]: Stopped target Network.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Timer Units.
Feb 28 08:48:15 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 28 08:48:15 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Basic System.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Path Units.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Remote File Systems.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Slice Units.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Socket Units.
Feb 28 08:48:15 localhost systemd[1]: Stopped target System Initialization.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Local File Systems.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Swaps.
Feb 28 08:48:15 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut mount hook.
Feb 28 08:48:15 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 28 08:48:15 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 28 08:48:15 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 28 08:48:15 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 28 08:48:15 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 28 08:48:15 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 28 08:48:15 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 28 08:48:15 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 28 08:48:15 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 28 08:48:15 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 28 08:48:15 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 28 08:48:15 localhost systemd[1]: systemd-udevd.service: Consumed 1.039s CPU time.
Feb 28 08:48:15 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Closed udev Control Socket.
Feb 28 08:48:15 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Closed udev Kernel Socket.
Feb 28 08:48:15 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 28 08:48:15 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 28 08:48:15 localhost systemd[1]: Starting Cleanup udev Database...
Feb 28 08:48:15 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 28 08:48:15 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 28 08:48:15 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Stopped Create System Users.
Feb 28 08:48:15 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 28 08:48:15 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 28 08:48:15 localhost systemd[1]: Finished Cleanup udev Database.
Feb 28 08:48:15 localhost systemd[1]: Reached target Switch Root.
Feb 28 08:48:15 localhost systemd[1]: Starting Switch Root...
Feb 28 08:48:15 localhost systemd[1]: Switching root.
Feb 28 08:48:15 localhost systemd-journald[307]: Journal stopped
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:00:16 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:00:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:00:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:00:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:00:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:00:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:00:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:00:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:00:16 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.189 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.190 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.191 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Using config drive
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.220 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.458 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Creating config drive at /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.463 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw2kd9r4m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.493 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272816.4926243, 5cac75f5-aeef-427d-b484-7d40a33679cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Resumed (Lifecycle Event)
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.498 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.498 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.512 243456 INFO nova.virt.libvirt.driver [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance spawned successfully.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.516 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:00:16 compute-0 podman[254466]: 2026-02-28 10:00:16.519855195 +0000 UTC m=+0.079418438 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:00:16 compute-0 podman[254465]: 2026-02-28 10:00:16.548939454 +0000 UTC m=+0.109703331 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.560 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.566 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.569 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.570 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.570 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.571 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.572 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.572 243456 DEBUG nova.virt.libvirt.driver [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.581 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpw2kd9r4m" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.603 243456 DEBUG nova.storage.rbd_utils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.607 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.622 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.623 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272816.4996681, 5cac75f5-aeef-427d-b484-7d40a33679cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.623 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Started (Lifecycle Event)
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.644 243456 INFO nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 4.45 seconds to spawn the instance on the hypervisor.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.645 243456 DEBUG nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.646 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.658 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.702 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.721 243456 INFO nova.compute.manager [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 6.48 seconds to build instance.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.738 243456 DEBUG oslo_concurrency.processutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config e7cedb7c-31a4-4578-82e8-f93b29898300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.739 243456 INFO nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deleting local config drive /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300/disk.config because it was imported into RBD.
Feb 28 10:00:16 compute-0 nova_compute[243452]: 2026-02-28 10:00:16.743 243456 DEBUG oslo_concurrency.lockutils [None req-ab514365-1ae6-4301-8167-fc4365345bf4 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:16 compute-0 systemd-machined[209480]: New machine qemu-9-instance-00000009.
Feb 28 10:00:16 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 28 10:00:17 compute-0 ceph-mon[76304]: pgmap v924: 305 pgs: 305 active+clean; 226 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Feb 28 10:00:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1257427459' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.472 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272817.4718213, e7cedb7c-31a4-4578-82e8-f93b29898300 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.474 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Resumed (Lifecycle Event)
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.479 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.481 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.487 243456 INFO nova.virt.libvirt.driver [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance spawned successfully.
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.488 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.518 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.525 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.529 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.530 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.531 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.532 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.532 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.533 243456 DEBUG nova.virt.libvirt.driver [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272817.472855, e7cedb7c-31a4-4578-82e8-f93b29898300 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Started (Lifecycle Event)
Feb 28 10:00:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.588 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272802.5598826, cbfb5d00-33da-4fdc-a9b7-a16865020102 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.589 243456 INFO nova.compute.manager [-] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] VM Stopped (Lifecycle Event)
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.611 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.612 243456 DEBUG nova.compute.manager [None req-ce714cbe-946d-46e9-a47e-713867866c2e - - - - - -] [instance: cbfb5d00-33da-4fdc-a9b7-a16865020102] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.615 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.621 243456 INFO nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 4.59 seconds to spawn the instance on the hypervisor.
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.621 243456 DEBUG nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.686 243456 INFO nova.compute.manager [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 6.95 seconds to build instance.
Feb 28 10:00:17 compute-0 nova_compute[243452]: 2026-02-28 10:00:17.721 243456 DEBUG oslo_concurrency.lockutils [None req-5ffe339c-0004-46d0-b560-1a91d100a096 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Feb 28 10:00:18 compute-0 nova_compute[243452]: 2026-02-28 10:00:18.136 243456 DEBUG nova.compute.manager [None req-126044c0-a853-44de-8144-0a919c2669b6 fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:18 compute-0 nova_compute[243452]: 2026-02-28 10:00:18.140 243456 INFO nova.compute.manager [None req-126044c0-a853-44de-8144-0a919c2669b6 fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Retrieving diagnostics
Feb 28 10:00:18 compute-0 nova_compute[243452]: 2026-02-28 10:00:18.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:19 compute-0 ceph-mon[76304]: pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Feb 28 10:00:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Feb 28 10:00:21 compute-0 ceph-mon[76304]: pgmap v926: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.615 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.617 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.646 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.719 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.719 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.725 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.726 243456 INFO nova.compute.claims [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:00:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Feb 28 10:00:21 compute-0 nova_compute[243452]: 2026-02-28 10:00:21.871 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:22 compute-0 ceph-mon[76304]: pgmap v927: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Feb 28 10:00:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188189077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.385 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.393 243456 DEBUG nova.compute.provider_tree [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.413 243456 DEBUG nova.scheduler.client.report [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.439 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.441 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.491 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.492 243456 DEBUG nova.network.neutron [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.515 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.552 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:00:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.638 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.657 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.659 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.660 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating image(s)
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.693 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.726 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.754 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.758 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.784 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.784 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.802 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.822 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.823 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.823 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.824 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.846 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.850 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5706ada3-074b-4ac3-8540-425edba37cbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.892 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.893 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.905 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:00:22 compute-0 nova_compute[243452]: 2026-02-28 10:00:22.906 243456 INFO nova.compute.claims [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.008 243456 DEBUG nova.network.neutron [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.008 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.089 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.112 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5706ada3-074b-4ac3-8540-425edba37cbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3188189077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.185 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] resizing rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.270 243456 DEBUG nova.objects.instance [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'migration_context' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.294 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.294 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Ensure instance console log exists: /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.295 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.297 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.302 243456 WARNING nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.307 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.307 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.host [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.311 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.312 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.313 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.314 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.314 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.315 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.316 243456 DEBUG nova.virt.hardware [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.319 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603828765' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.630 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.640 243456 DEBUG nova.compute.provider_tree [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.673 243456 DEBUG nova.scheduler.client.report [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.719 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.721 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:00:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.792 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.793 243456 DEBUG nova.network.neutron [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.817 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.845 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:00:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548531986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.864 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.893 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.898 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.983 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.986 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:00:23 compute-0 nova_compute[243452]: 2026-02-28 10:00:23.987 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating image(s)
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.008 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.031 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.056 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.059 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.110 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.111 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.112 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.113 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/603828765' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:24 compute-0 ceph-mon[76304]: pgmap v928: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Feb 28 10:00:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/548531986' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.139 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.142 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a2e72370-536c-417e-8667-678b824b849c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.306 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272809.3039806, f63bf263-5801-463b-84c3-90fc3e438863 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.307 243456 INFO nova.compute.manager [-] [instance: f63bf263-5801-463b-84c3-90fc3e438863] VM Stopped (Lifecycle Event)
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.326 243456 DEBUG nova.compute.manager [None req-310d40aa-1ae5-4bfe-ae90-9c554acab7bc - - - - - -] [instance: f63bf263-5801-463b-84c3-90fc3e438863] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.356 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a2e72370-536c-417e-8667-678b824b849c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.402 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] resizing rbd image a2e72370-536c-417e-8667-678b824b849c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:00:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132892288' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.463 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.464 243456 DEBUG nova.objects.instance [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.470 243456 DEBUG nova.objects.instance [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'migration_context' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.492 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <uuid>5706ada3-074b-4ac3-8540-425edba37cbe</uuid>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <name>instance-0000000a</name>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:name>tempest-TenantUsagesTestJSON-server-900182279</nova:name>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:00:23</nova:creationTime>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:user uuid="fe16ef4b58de443ba0abd815064150e4">tempest-TenantUsagesTestJSON-2127500945-project-member</nova:user>
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <nova:project uuid="7d9c93e3ee774e3ea2b19d16b9ceab1b">tempest-TenantUsagesTestJSON-2127500945</nova:project>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <system>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="serial">5706ada3-074b-4ac3-8540-425edba37cbe</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="uuid">5706ada3-074b-4ac3-8540-425edba37cbe</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </system>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <os>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </os>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <features>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </features>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5706ada3-074b-4ac3-8540-425edba37cbe_disk">
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5706ada3-074b-4ac3-8540-425edba37cbe_disk.config">
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/console.log" append="off"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <video>
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </video>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:00:24 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:00:24 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:00:24 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:00:24 compute-0 nova_compute[243452]: </domain>
Feb 28 10:00:24 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.497 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Ensure instance console log exists: /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.498 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.499 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.538 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.539 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.539 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Using config drive
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.558 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.738 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Creating config drive at /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.742 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvmjv6gx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.761 243456 DEBUG nova.network.neutron [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.762 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.763 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.768 243456 WARNING nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.772 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.773 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.777 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.777 243456 DEBUG nova.virt.libvirt.host [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.778 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.779 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.780 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.781 243456 DEBUG nova.virt.hardware [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.783 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.864 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvmjv6gx7" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.901 243456 DEBUG nova.storage.rbd_utils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] rbd image 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:24 compute-0 nova_compute[243452]: 2026-02-28 10:00:24.907 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.056 243456 DEBUG oslo_concurrency.processutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config 5706ada3-074b-4ac3-8540-425edba37cbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.058 243456 INFO nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deleting local config drive /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe/disk.config because it was imported into RBD.
Feb 28 10:00:25 compute-0 systemd-machined[209480]: New machine qemu-10-instance-0000000a.
Feb 28 10:00:25 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Feb 28 10:00:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1132892288' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660596840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.333 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.362 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.368 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.650 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272825.6498132, 5706ada3-074b-4ac3-8540-425edba37cbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.651 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Resumed (Lifecycle Event)
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.653 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.654 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.658 243456 INFO nova.virt.libvirt.driver [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance spawned successfully.
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.659 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.680 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.686 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.690 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.691 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.692 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.692 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.693 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.694 243456 DEBUG nova.virt.libvirt.driver [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.718 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.719 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272825.6512961, 5706ada3-074b-4ac3-8540-425edba37cbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.719 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Started (Lifecycle Event)
Feb 28 10:00:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 299 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 227 op/s
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.750 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.754 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.761 243456 INFO nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 3.10 seconds to spawn the instance on the hypervisor.
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.762 243456 DEBUG nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.771 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.854 243456 INFO nova.compute.manager [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 4.16 seconds to build instance.
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.879 243456 DEBUG oslo_concurrency.lockutils [None req-7ae31899-9661-4a57-ae92-0d86eecbd90f fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607902511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.905 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.907 243456 DEBUG nova.objects.instance [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'pci_devices' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:25 compute-0 nova_compute[243452]: 2026-02-28 10:00:25.921 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <uuid>a2e72370-536c-417e-8667-678b824b849c</uuid>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <name>instance-0000000b</name>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1356439074</nova:name>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:00:24</nova:creationTime>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:user uuid="cb71641f461242f9afa154410c27a4c5">tempest-LiveMigrationNegativeTest-1894925611-project-member</nova:user>
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <nova:project uuid="e4ed979bda47466ebd87517c73a12e9d">tempest-LiveMigrationNegativeTest-1894925611</nova:project>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="serial">a2e72370-536c-417e-8667-678b824b849c</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="uuid">a2e72370-536c-417e-8667-678b824b849c</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a2e72370-536c-417e-8667-678b824b849c_disk">
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a2e72370-536c-417e-8667-678b824b849c_disk.config">
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/console.log" append="off"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:00:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:00:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:00:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:00:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:00:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.012 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.013 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.013 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Using config drive
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.042 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3660596840' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:26 compute-0 ceph-mon[76304]: pgmap v929: 305 pgs: 305 active+clean; 299 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 227 op/s
Feb 28 10:00:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1607902511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.267 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Creating config drive at /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.272 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp73t_q6i4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.394 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp73t_q6i4" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.426 243456 DEBUG nova.storage.rbd_utils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] rbd image a2e72370-536c-417e-8667-678b824b849c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.432 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config a2e72370-536c-417e-8667-678b824b849c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.596 243456 DEBUG oslo_concurrency.processutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config a2e72370-536c-417e-8667-678b824b849c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:26 compute-0 nova_compute[243452]: 2026-02-28 10:00:26.599 243456 INFO nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deleting local config drive /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c/disk.config because it was imported into RBD.
Feb 28 10:00:26 compute-0 systemd-machined[209480]: New machine qemu-11-instance-0000000b.
Feb 28 10:00:26 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 28 10:00:27 compute-0 sudo[255339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:00:27 compute-0 sudo[255339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:27 compute-0 sudo[255339]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.063 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.0629199, a2e72370-536c-417e-8667-678b824b849c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.065 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Resumed (Lifecycle Event)
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.068 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.069 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.074 243456 INFO nova.virt.libvirt.driver [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance spawned successfully.
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.090 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.102 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.103 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.103 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.104 243456 DEBUG nova.virt.libvirt.driver [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:27 compute-0 sudo[255365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:00:27 compute-0 sudo[255365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.128 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.130 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.063029, a2e72370-536c-417e-8667-678b824b849c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.130 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Started (Lifecycle Event)
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.164 243456 INFO nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 3.18 seconds to spawn the instance on the hypervisor.
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.164 243456 DEBUG nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.165 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.194 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.242 243456 INFO nova.compute.manager [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 4.37 seconds to build instance.
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.272 243456 DEBUG oslo_concurrency.lockutils [None req-3955f86f-56ed-49b4-b33d-918c7cf81abb cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:27 compute-0 sudo[255365]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:00:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:00:27 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:00:27 compute-0 sudo[255420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:00:27 compute-0 sudo[255420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:27 compute-0 sudo[255420]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:27 compute-0 sudo[255445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:00:27 compute-0 sudo[255445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.4 MiB/s wr, 280 op/s
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.901 243456 DEBUG nova.objects.instance [None req-bec8a33c-7b99-4570-ae36-53bee6c28520 aa88e7a4d6a74829a2aa88687f3e0d9b 048b401fefe0402ab609aa3c8e535be6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.921 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272827.9217892, a2e72370-536c-417e-8667-678b824b849c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.922 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Paused (Lifecycle Event)
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.943 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.946 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:27 compute-0 nova_compute[243452]: 2026-02-28 10:00:27.973 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.044494657 +0000 UTC m=+0.090872401 container create d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:27.979223659 +0000 UTC m=+0.025601453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:28 compute-0 systemd[1]: Started libpod-conmon-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope.
Feb 28 10:00:28 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 28 10:00:28 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 1.203s CPU time.
Feb 28 10:00:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:28 compute-0 systemd-machined[209480]: Machine qemu-11-instance-0000000b terminated.
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.169669533 +0000 UTC m=+0.216047307 container init d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.177445232 +0000 UTC m=+0.223822986 container start d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:00:28 compute-0 cranky_saha[255499]: 167 167
Feb 28 10:00:28 compute-0 systemd[1]: libpod-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope: Deactivated successfully.
Feb 28 10:00:28 compute-0 conmon[255499]: conmon d1deed01d9fe1667a235 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope/container/memory.events
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.201458318 +0000 UTC m=+0.247836102 container attach d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.201837969 +0000 UTC m=+0.248215723 container died d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.226 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.227 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.228 243456 INFO nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Terminating instance
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquired lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.229 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.244 243456 DEBUG nova.compute.manager [None req-bec8a33c-7b99-4570-ae36-53bee6c28520 aa88e7a4d6a74829a2aa88687f3e0d9b 048b401fefe0402ab609aa3c8e535be6 - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0974a094a17b97c8e96539590c954cb4e8f08c5f47e8305b3dbe805e8e5d16fa-merged.mount: Deactivated successfully.
Feb 28 10:00:28 compute-0 podman[255483]: 2026-02-28 10:00:28.271394548 +0000 UTC m=+0.317772282 container remove d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:00:28 compute-0 systemd[1]: libpod-conmon-d1deed01d9fe1667a235816c970cb89b28aa14201e224c63d4162603478413d1.scope: Deactivated successfully.
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.394 243456 DEBUG nova.compute.manager [None req-40d8a711-a3ea-43bd-a6c9-cdc6a946890a fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.397 243456 INFO nova.compute.manager [None req-40d8a711-a3ea-43bd-a6c9-cdc6a946890a fbbda653764b41c6a0d9c5f1a589bdfe 179ac1b345e44371a698f9ff7f4ed1d4 - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Retrieving diagnostics
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.429889021 +0000 UTC m=+0.044682628 container create 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:00:28 compute-0 systemd[1]: Started libpod-conmon-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope.
Feb 28 10:00:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.412987015 +0000 UTC m=+0.027780642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.51469108 +0000 UTC m=+0.129484687 container init 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.520202896 +0000 UTC m=+0.134996503 container start 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.522754797 +0000 UTC m=+0.137548414 container attach 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.550 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:28 compute-0 ceph-mon[76304]: pgmap v930: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.4 MiB/s wr, 280 op/s
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.716 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "5cac75f5-aeef-427d-b484-7d40a33679cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.717 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.717 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.718 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.718 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.719 243456 INFO nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Terminating instance
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquired lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.720 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.830 243456 DEBUG nova.network.neutron [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.847 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Releasing lock "refresh_cache-5706ada3-074b-4ac3-8540-425edba37cbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.848 243456 DEBUG nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:00:28 compute-0 nova_compute[243452]: 2026-02-28 10:00:28.857 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 28 10:00:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 3.550s CPU time.
Feb 28 10:00:28 compute-0 systemd-machined[209480]: Machine qemu-10-instance-0000000a terminated.
Feb 28 10:00:28 compute-0 vibrant_mahavira[255542]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:00:28 compute-0 vibrant_mahavira[255542]: --> All data devices are unavailable
Feb 28 10:00:28 compute-0 systemd[1]: libpod-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope: Deactivated successfully.
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.959234752 +0000 UTC m=+0.574028379 container died 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c6452720418b021c62a411dd9dae3c64faa280fe1e5afdcec6e49fd3378c619-merged.mount: Deactivated successfully.
Feb 28 10:00:28 compute-0 podman[255526]: 2026-02-28 10:00:28.994386502 +0000 UTC m=+0.609180109 container remove 2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:00:29
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta']
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:00:29 compute-0 sudo[255445]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:29 compute-0 systemd[1]: libpod-conmon-2d6fb0f4f4c5637c9126ba49f2efe86c8bf4ffb36828d2183599c83eb79a9e72.scope: Deactivated successfully.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.044 243456 DEBUG nova.network.neutron [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.065 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Releasing lock "refresh_cache-5cac75f5-aeef-427d-b484-7d40a33679cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.066 243456 DEBUG nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:00:29 compute-0 sudo[255575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:00:29 compute-0 sudo[255575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:29 compute-0 sudo[255575]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.085 243456 INFO nova.virt.libvirt.driver [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance destroyed successfully.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.085 243456 DEBUG nova.objects.instance [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lazy-loading 'resources' on Instance uuid 5706ada3-074b-4ac3-8540-425edba37cbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:29 compute-0 sudo[255602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:00:29 compute-0 sudo[255602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 28 10:00:29 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 11.608s CPU time.
Feb 28 10:00:29 compute-0 systemd-machined[209480]: Machine qemu-8-instance-00000008 terminated.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.297 243456 INFO nova.virt.libvirt.driver [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance destroyed successfully.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.298 243456 DEBUG nova.objects.instance [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lazy-loading 'resources' on Instance uuid 5cac75f5-aeef-427d-b484-7d40a33679cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.384 243456 INFO nova.virt.libvirt.driver [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deleting instance files /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe_del
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.385 243456 INFO nova.virt.libvirt.driver [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deletion of /var/lib/nova/instances/5706ada3-074b-4ac3-8540-425edba37cbe_del complete
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.427479441 +0000 UTC m=+0.053458157 container create f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.437 243456 INFO nova.compute.manager [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG oslo.service.loopingcall [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.438 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:00:29 compute-0 systemd[1]: Started libpod-conmon-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope.
Feb 28 10:00:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.408165827 +0000 UTC m=+0.034144593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.512446934 +0000 UTC m=+0.138425740 container init f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.523522276 +0000 UTC m=+0.149501002 container start f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.527601511 +0000 UTC m=+0.153580307 container attach f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:00:29 compute-0 distracted_keller[255696]: 167 167
Feb 28 10:00:29 compute-0 systemd[1]: libpod-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope: Deactivated successfully.
Feb 28 10:00:29 compute-0 conmon[255696]: conmon f92d9e319043d1e7e14b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope/container/memory.events
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.533531168 +0000 UTC m=+0.159509944 container died f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cda96c44d47c499b345b2d5522199217e83ee2df10d12458f6f8dc08e3e96f3-merged.mount: Deactivated successfully.
Feb 28 10:00:29 compute-0 podman[255679]: 2026-02-28 10:00:29.582783345 +0000 UTC m=+0.208762061 container remove f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.587 243456 INFO nova.virt.libvirt.driver [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deleting instance files /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf_del
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.589 243456 INFO nova.virt.libvirt.driver [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deletion of /var/lib/nova/instances/5cac75f5-aeef-427d-b484-7d40a33679cf_del complete
Feb 28 10:00:29 compute-0 systemd[1]: libpod-conmon-f92d9e319043d1e7e14bc9e0489c524b7282c675eaa5462228b11b6c59ccf519.scope: Deactivated successfully.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.648 243456 INFO nova.compute.manager [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.648 243456 DEBUG oslo.service.loopingcall [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.650 243456 DEBUG nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:00:29 compute-0 nova_compute[243452]: 2026-02-28 10:00:29.650 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:00:29 compute-0 podman[255720]: 2026-02-28 10:00:29.699863573 +0000 UTC m=+0.036776827 container create 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:00:29 compute-0 systemd[1]: Started libpod-conmon-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope.
Feb 28 10:00:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 228 op/s
Feb 28 10:00:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:29 compute-0 podman[255720]: 2026-02-28 10:00:29.680312112 +0000 UTC m=+0.017225356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:29 compute-0 podman[255720]: 2026-02-28 10:00:29.788596062 +0000 UTC m=+0.125509346 container init 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:00:29 compute-0 podman[255720]: 2026-02-28 10:00:29.795824716 +0000 UTC m=+0.132737940 container start 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:00:29 compute-0 podman[255720]: 2026-02-28 10:00:29.800557029 +0000 UTC m=+0.137470303 container attach 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]: {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     "0": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "devices": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "/dev/loop3"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             ],
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_name": "ceph_lv0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_size": "21470642176",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "name": "ceph_lv0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "tags": {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_name": "ceph",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.crush_device_class": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.encrypted": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.objectstore": "bluestore",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_id": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.vdo": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.with_tpm": "0"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             },
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "vg_name": "ceph_vg0"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         }
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     ],
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     "1": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "devices": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "/dev/loop4"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             ],
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_name": "ceph_lv1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_size": "21470642176",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "name": "ceph_lv1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "tags": {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_name": "ceph",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.crush_device_class": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.encrypted": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.objectstore": "bluestore",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_id": "1",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.vdo": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.with_tpm": "0"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             },
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "vg_name": "ceph_vg1"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         }
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     ],
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     "2": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "devices": [
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "/dev/loop5"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             ],
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_name": "ceph_lv2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_size": "21470642176",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "name": "ceph_lv2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "tags": {
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.cluster_name": "ceph",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.crush_device_class": "",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.encrypted": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.objectstore": "bluestore",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osd_id": "2",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.vdo": "0",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:                 "ceph.with_tpm": "0"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             },
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "type": "block",
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:             "vg_name": "ceph_vg2"
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:         }
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]:     ]
Feb 28 10:00:30 compute-0 zealous_pasteur[255736]: }
Feb 28 10:00:30 compute-0 systemd[1]: libpod-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope: Deactivated successfully.
Feb 28 10:00:30 compute-0 podman[255746]: 2026-02-28 10:00:30.143411126 +0000 UTC m=+0.035278645 container died 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e894ee0d69dfd03f01a83cbb010dcdd2356a322e92ce32876bacdbc54a6ce417-merged.mount: Deactivated successfully.
Feb 28 10:00:30 compute-0 podman[255746]: 2026-02-28 10:00:30.187677083 +0000 UTC m=+0.079544552 container remove 46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:00:30 compute-0 systemd[1]: libpod-conmon-46b3adc7d3fbc8a8cce707f1f52184ef5f3a44edfb5034b482c346895fe5fa87.scope: Deactivated successfully.
Feb 28 10:00:30 compute-0 sudo[255602]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:00:30 compute-0 sudo[255761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:00:30 compute-0 sudo[255761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.333 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:30 compute-0 sudo[255761]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.337 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.353 243456 DEBUG nova.network.neutron [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.355 243456 DEBUG nova.network.neutron [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.375 243456 INFO nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Took 0.72 seconds to deallocate network for instance.
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.376 243456 INFO nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Took 0.94 seconds to deallocate network for instance.
Feb 28 10:00:30 compute-0 sudo[255786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:00:30 compute-0 sudo[255786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.469 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.469 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.472 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:00:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:00:30 compute-0 nova_compute[243452]: 2026-02-28 10:00:30.587 243456 DEBUG oslo_concurrency.processutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:30 compute-0 podman[255823]: 2026-02-28 10:00:30.813048777 +0000 UTC m=+0.099552015 container create 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:00:30 compute-0 ceph-mon[76304]: pgmap v931: 305 pgs: 305 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 228 op/s
Feb 28 10:00:30 compute-0 podman[255823]: 2026-02-28 10:00:30.74925098 +0000 UTC m=+0.035754288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:30 compute-0 systemd[1]: Started libpod-conmon-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope.
Feb 28 10:00:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:30 compute-0 podman[255823]: 2026-02-28 10:00:30.910198944 +0000 UTC m=+0.196702242 container init 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:00:30 compute-0 podman[255823]: 2026-02-28 10:00:30.91645078 +0000 UTC m=+0.202953998 container start 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:00:30 compute-0 focused_bhaskara[255858]: 167 167
Feb 28 10:00:30 compute-0 systemd[1]: libpod-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope: Deactivated successfully.
Feb 28 10:00:31 compute-0 podman[255823]: 2026-02-28 10:00:31.016299392 +0000 UTC m=+0.302802640 container attach 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:00:31 compute-0 podman[255823]: 2026-02-28 10:00:31.017135796 +0000 UTC m=+0.303639094 container died 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:00:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922638578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.137 243456 DEBUG oslo_concurrency.processutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.147 243456 DEBUG nova.compute.provider_tree [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.162 243456 DEBUG nova.scheduler.client.report [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.182 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.186 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.220 243456 INFO nova.scheduler.client.report [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Deleted allocations for instance 5cac75f5-aeef-427d-b484-7d40a33679cf
Feb 28 10:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-728be934235e4e05bd44d02957bbb0398634ad1728db96b5f0e6eefc1fef17ee-merged.mount: Deactivated successfully.
Feb 28 10:00:31 compute-0 podman[255823]: 2026-02-28 10:00:31.28784397 +0000 UTC m=+0.574347208 container remove 879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.296 243456 DEBUG oslo_concurrency.lockutils [None req-5fd93611-db29-4e9d-bb84-85fbd19d8edc 137c57a081ef448e87e6ee3762d09a10 28fea2ec5f724190b6d7966ed36747db - - default default] Lock "5cac75f5-aeef-427d-b484-7d40a33679cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:31 compute-0 systemd[1]: libpod-conmon-879a9515b024c20b5a401ca8c3e377d6ad536a31842d6991f418895142d59b82.scope: Deactivated successfully.
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.327 243456 DEBUG oslo_concurrency.processutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:31 compute-0 podman[255885]: 2026-02-28 10:00:31.455222095 +0000 UTC m=+0.070886738 container create d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:00:31 compute-0 podman[255885]: 2026-02-28 10:00:31.423294546 +0000 UTC m=+0.038959269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.522 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "a2e72370-536c-417e-8667-678b824b849c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.523 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.524 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.525 243456 INFO nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Terminating instance
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.526 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.527 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquired lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.527 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:31 compute-0 systemd[1]: Started libpod-conmon-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope.
Feb 28 10:00:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:31 compute-0 podman[255885]: 2026-02-28 10:00:31.591693699 +0000 UTC m=+0.207358342 container init d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:00:31 compute-0 podman[255885]: 2026-02-28 10:00:31.599126428 +0000 UTC m=+0.214791071 container start d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:00:31 compute-0 podman[255885]: 2026-02-28 10:00:31.618737691 +0000 UTC m=+0.234402324 container attach d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:00:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 328 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.9 MiB/s wr, 400 op/s
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.802 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2922638578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2245145351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.919 243456 DEBUG oslo_concurrency.processutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.925 243456 DEBUG nova.compute.provider_tree [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.937 243456 DEBUG nova.scheduler.client.report [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.960 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:31 compute-0 nova_compute[243452]: 2026-02-28 10:00:31.992 243456 INFO nova.scheduler.client.report [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Deleted allocations for instance 5706ada3-074b-4ac3-8540-425edba37cbe
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.058 243456 DEBUG nova.network.neutron [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.063 243456 DEBUG oslo_concurrency.lockutils [None req-b3e2e156-93b7-4b45-9b0d-e8da515bbd0b fe16ef4b58de443ba0abd815064150e4 7d9c93e3ee774e3ea2b19d16b9ceab1b - - default default] Lock "5706ada3-074b-4ac3-8540-425edba37cbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.087 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Releasing lock "refresh_cache-a2e72370-536c-417e-8667-678b824b849c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.088 243456 DEBUG nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.094 243456 INFO nova.virt.libvirt.driver [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance destroyed successfully.
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.095 243456 DEBUG nova.objects.instance [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'resources' on Instance uuid a2e72370-536c-417e-8667-678b824b849c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:32 compute-0 lvm[256019]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:00:32 compute-0 lvm[256017]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:00:32 compute-0 lvm[256019]: VG ceph_vg1 finished
Feb 28 10:00:32 compute-0 lvm[256017]: VG ceph_vg0 finished
Feb 28 10:00:32 compute-0 lvm[256021]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:00:32 compute-0 lvm[256021]: VG ceph_vg2 finished
Feb 28 10:00:32 compute-0 recursing_elion[255920]: {}
Feb 28 10:00:32 compute-0 systemd[1]: libpod-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Deactivated successfully.
Feb 28 10:00:32 compute-0 systemd[1]: libpod-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Consumed 1.109s CPU time.
Feb 28 10:00:32 compute-0 podman[255885]: 2026-02-28 10:00:32.426133421 +0000 UTC m=+1.041798064 container died d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.425 243456 INFO nova.virt.libvirt.driver [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deleting instance files /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c_del
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.426 243456 INFO nova.virt.libvirt.driver [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Deletion of /var/lib/nova/instances/a2e72370-536c-417e-8667-678b824b849c_del complete
Feb 28 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d31721b1dcaf2bdd53cae4d967960e55700c18728ed521b529e051ff6321a1-merged.mount: Deactivated successfully.
Feb 28 10:00:32 compute-0 podman[255885]: 2026-02-28 10:00:32.475393118 +0000 UTC m=+1.091057791 container remove d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:00:32 compute-0 systemd[1]: libpod-conmon-d7d4f4f1f7b2bf3708ff079a28ff28b0704e12c6de6f0f8d2ae0b73ff9de73c4.scope: Deactivated successfully.
Feb 28 10:00:32 compute-0 sudo[255786]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:00:32 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:00:32 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:32 compute-0 sudo[256038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:00:32 compute-0 sudo[256038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:00:32 compute-0 sudo[256038]: pam_unix(sudo:session): session closed for user root
Feb 28 10:00:32 compute-0 nova_compute[243452]: 2026-02-28 10:00:32.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:32 compute-0 ceph-mon[76304]: pgmap v932: 305 pgs: 305 active+clean; 328 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.9 MiB/s wr, 400 op/s
Feb 28 10:00:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2245145351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.199 243456 INFO nova.compute.manager [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 1.11 seconds to destroy the instance on the hypervisor.
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.200 243456 DEBUG oslo.service.loopingcall [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.201 243456 DEBUG nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.201 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.392 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.476 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.490 243456 DEBUG nova.network.neutron [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.509 243456 INFO nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] Took 0.31 seconds to deallocate network for instance.
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.559 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.560 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:33 compute-0 nova_compute[243452]: 2026-02-28 10:00:33.637 243456 DEBUG oslo_concurrency.processutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 261 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Feb 28 10:00:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1878636309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.164 243456 DEBUG oslo_concurrency.processutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.170 243456 DEBUG nova.compute.provider_tree [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.194 243456 DEBUG nova.scheduler.client.report [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.215 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.237 243456 INFO nova.scheduler.client.report [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Deleted allocations for instance a2e72370-536c-417e-8667-678b824b849c
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.305 243456 DEBUG oslo_concurrency.lockutils [None req-e5fddab5-6b2f-47b0-96f6-919365b9cf5a cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "a2e72370-536c-417e-8667-678b824b849c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.551 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "e7cedb7c-31a4-4578-82e8-f93b29898300" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.552 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.553 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.553 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.554 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.555 243456 INFO nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Terminating instance
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.557 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.558 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquired lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.558 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:34 compute-0 ceph-mon[76304]: pgmap v933: 305 pgs: 305 active+clean; 261 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Feb 28 10:00:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1878636309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:34 compute-0 nova_compute[243452]: 2026-02-28 10:00:34.951 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.253 243456 DEBUG nova.network.neutron [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.270 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Releasing lock "refresh_cache-e7cedb7c-31a4-4578-82e8-f93b29898300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.271 243456 DEBUG nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:00:35 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 28 10:00:35 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.428s CPU time.
Feb 28 10:00:35 compute-0 systemd-machined[209480]: Machine qemu-9-instance-00000009 terminated.
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.495 243456 INFO nova.virt.libvirt.driver [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance destroyed successfully.
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.496 243456 DEBUG nova.objects.instance [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lazy-loading 'resources' on Instance uuid e7cedb7c-31a4-4578-82e8-f93b29898300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.8 MiB/s wr, 368 op/s
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.804 243456 INFO nova.virt.libvirt.driver [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deleting instance files /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300_del
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.805 243456 INFO nova.virt.libvirt.driver [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deletion of /var/lib/nova/instances/e7cedb7c-31a4-4578-82e8-f93b29898300_del complete
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.884 243456 INFO nova.compute.manager [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.885 243456 DEBUG oslo.service.loopingcall [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.885 243456 DEBUG nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:00:35 compute-0 nova_compute[243452]: 2026-02-28 10:00:35.886 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.159 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.174 243456 DEBUG nova.network.neutron [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.203 243456 INFO nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Took 0.32 seconds to deallocate network for instance.
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.264 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.265 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.319 243456 DEBUG oslo_concurrency.processutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:36 compute-0 ceph-mon[76304]: pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.8 MiB/s wr, 368 op/s
Feb 28 10:00:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413253201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.883 243456 DEBUG oslo_concurrency.processutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.891 243456 DEBUG nova.compute.provider_tree [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.909 243456 DEBUG nova.scheduler.client.report [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.939 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:36 compute-0 nova_compute[243452]: 2026-02-28 10:00:36.966 243456 INFO nova.scheduler.client.report [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Deleted allocations for instance e7cedb7c-31a4-4578-82e8-f93b29898300
Feb 28 10:00:37 compute-0 nova_compute[243452]: 2026-02-28 10:00:37.063 243456 DEBUG oslo_concurrency.lockutils [None req-2c5b62ef-07a3-45d4-bb5f-698bad4b65e7 cb71641f461242f9afa154410c27a4c5 e4ed979bda47466ebd87517c73a12e9d - - default default] Lock "e7cedb7c-31a4-4578-82e8-f93b29898300" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:37 compute-0 nova_compute[243452]: 2026-02-28 10:00:37.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 339 op/s
Feb 28 10:00:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/413253201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:38 compute-0 nova_compute[243452]: 2026-02-28 10:00:38.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:38 compute-0 ceph-mon[76304]: pgmap v935: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 339 op/s
Feb 28 10:00:38 compute-0 nova_compute[243452]: 2026-02-28 10:00:38.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:38.888 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:00:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:38.890 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:00:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 268 op/s
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00013532782622502094 of space, bias 1.0, pg target 0.04059834786750628 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002490987642331306 of space, bias 1.0, pg target 0.7472962926993918 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.905678768646067e-07 of space, bias 4.0, pg target 0.0009486814522375281 quantized to 16 (current 16)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:00:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:00:40 compute-0 ceph-mon[76304]: pgmap v936: 305 pgs: 305 active+clean; 173 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 268 op/s
Feb 28 10:00:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Feb 28 10:00:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:42 compute-0 nova_compute[243452]: 2026-02-28 10:00:42.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:42 compute-0 ceph-mon[76304]: pgmap v937: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Feb 28 10:00:43 compute-0 nova_compute[243452]: 2026-02-28 10:00:43.246 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272828.24524, a2e72370-536c-417e-8667-678b824b849c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:43 compute-0 nova_compute[243452]: 2026-02-28 10:00:43.247 243456 INFO nova.compute.manager [-] [instance: a2e72370-536c-417e-8667-678b824b849c] VM Stopped (Lifecycle Event)
Feb 28 10:00:43 compute-0 nova_compute[243452]: 2026-02-28 10:00:43.294 243456 DEBUG nova.compute.manager [None req-3c479c04-daad-458b-8b27-66bbf60c9842 - - - - - -] [instance: a2e72370-536c-417e-8667-678b824b849c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:43 compute-0 nova_compute[243452]: 2026-02-28 10:00:43.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 968 KiB/s wr, 113 op/s
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.082 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272829.0815992, 5706ada3-074b-4ac3-8540-425edba37cbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.083 243456 INFO nova.compute.manager [-] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] VM Stopped (Lifecycle Event)
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.106 243456 DEBUG nova.compute.manager [None req-78191570-94e6-4a08-85ca-71cf0419b715 - - - - - -] [instance: 5706ada3-074b-4ac3-8540-425edba37cbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.296 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272829.2946024, 5cac75f5-aeef-427d-b484-7d40a33679cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.296 243456 INFO nova.compute.manager [-] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] VM Stopped (Lifecycle Event)
Feb 28 10:00:44 compute-0 nova_compute[243452]: 2026-02-28 10:00:44.321 243456 DEBUG nova.compute.manager [None req-3b0ae8e7-0afc-4075-ad2e-11ad9c0799b5 - - - - - -] [instance: 5cac75f5-aeef-427d-b484-7d40a33679cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:44 compute-0 ceph-mon[76304]: pgmap v938: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 968 KiB/s wr, 113 op/s
Feb 28 10:00:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:00:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:00:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:00:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:00:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 28 10:00:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:00:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/632909243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:00:47 compute-0 ceph-mon[76304]: pgmap v939: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Feb 28 10:00:47 compute-0 podman[256132]: 2026-02-28 10:00:47.132060369 +0000 UTC m=+0.067764680 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 10:00:47 compute-0 podman[256131]: 2026-02-28 10:00:47.162953189 +0000 UTC m=+0.099482473 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:00:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:47 compute-0 nova_compute[243452]: 2026-02-28 10:00:47.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 28 10:00:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:47.892 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:47 compute-0 nova_compute[243452]: 2026-02-28 10:00:47.973 243456 DEBUG oslo_concurrency.processutils [None req-9529d302-fd83-453a-93e7-c343c5e10f8f 12e661cb0cc24d3f87d5ad5e55437da9 962ad781bef44a669f4439ec50ff4508 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.000 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.000 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.004 243456 DEBUG oslo_concurrency.processutils [None req-9529d302-fd83-453a-93e7-c343c5e10f8f 12e661cb0cc24d3f87d5ad5e55437da9 962ad781bef44a669f4439ec50ff4508 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.022 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.110 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.111 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.120 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.121 243456 INFO nova.compute.claims [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.226 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972004038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.723 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.731 243456 DEBUG nova.compute.provider_tree [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.756 243456 DEBUG nova.scheduler.client.report [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.790 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.792 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.870 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.871 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.895 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:00:48 compute-0 nova_compute[243452]: 2026-02-28 10:00:48.920 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:00:49 compute-0 ceph-mon[76304]: pgmap v940: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 28 10:00:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/972004038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.082 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.084 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.085 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating image(s)
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.115 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.153 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.185 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.190 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.270 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.273 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.274 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.275 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.305 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.310 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.330 243456 DEBUG nova.policy [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b1dc716928742ca935bb155783e2d9a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '446562351a804787bd6c523245bada39', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.558 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.636 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] resizing rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.737 243456 DEBUG nova.objects.instance [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'migration_context' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.757 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.757 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Ensure instance console log exists: /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.758 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.759 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.759 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:49 compute-0 nova_compute[243452]: 2026-02-28 10:00:49.905 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Successfully created port: 77e0efad-ce89-42fd-9284-b155767f5c74 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.494 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272835.4924226, e7cedb7c-31a4-4578-82e8-f93b29898300 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.494 243456 INFO nova.compute.manager [-] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] VM Stopped (Lifecycle Event)
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.518 243456 DEBUG nova.compute.manager [None req-8d6f5adf-fd74-4273-b315-1015c6c676a6 - - - - - -] [instance: e7cedb7c-31a4-4578-82e8-f93b29898300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.847 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Successfully updated port: 77e0efad-ce89-42fd-9284-b155767f5c74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.865 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.866 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.866 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.991 243456 DEBUG nova.compute.manager [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.992 243456 DEBUG nova.compute.manager [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:00:50 compute-0 nova_compute[243452]: 2026-02-28 10:00:50.992 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:51 compute-0 nova_compute[243452]: 2026-02-28 10:00:51.019 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:51 compute-0 ceph-mon[76304]: pgmap v941: 305 pgs: 305 active+clean; 153 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 10:00:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 188 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 10:00:52 compute-0 ceph-mon[76304]: pgmap v942: 305 pgs: 305 active+clean; 188 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.202 243456 DEBUG nova.network.neutron [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.229 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.229 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance network_info: |[{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.230 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.230 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.232 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start _get_guest_xml network_info=[{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.237 243456 WARNING nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.242 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.242 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.245 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.246 243456 DEBUG nova.virt.libvirt.host [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.246 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.247 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.248 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.249 243456 DEBUG nova.virt.hardware [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
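The topology sequence above (limits 65536:65536:65536, one vCPU, one possible topology of 1:1:1) can be illustrated with a simplified sketch of what `_get_possible_cpu_topologies` does. This is a hypothetical reduction, not Nova's actual implementation: it just enumerates (sockets, cores, threads) triples whose product equals the vCPU count, within the per-dimension limits.

```python
from collections import namedtuple

# Stand-in for nova.objects.VirtCPUTopology (hypothetical simplification).
VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

def possible_cpu_topologies(vcpus, max_sockets, max_cores, max_threads):
    """Enumerate topologies whose sockets*cores*threads == vcpus,
    respecting the per-dimension maximums seen in the log (65536 each)."""
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    found.append(VirtCPUTopology(s, c, t))
    return found

# One vCPU, as in the log: exactly one possible topology.
print(possible_cpu_topologies(1, 65536, 65536, 65536))
# → [VirtCPUTopology(sockets=1, cores=1, threads=1)]
```

With `vcpus=1` all three loops collapse to a single iteration, matching the logged "Got 1 possible topologies" and the chosen `VirtCPUTopology(cores=1,sockets=1,threads=1)`.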
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.251 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536218003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.788 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.812 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:52 compute-0 nova_compute[243452]: 2026-02-28 10:00:52.817 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1536218003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251524903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.343 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.345 243456 DEBUG nova.virt.libvirt.vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:48Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.347 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.348 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.350 243456 DEBUG nova.objects.instance [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.370 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <uuid>9e27fde4-3df3-46cf-97ac-88a91baefbc0</uuid>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <name>instance-0000000c</name>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1253849242</nova:name>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:00:52</nova:creationTime>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:user uuid="3b1dc716928742ca935bb155783e2d9a">tempest-FloatingIPsAssociationTestJSON-1803239001-project-member</nova:user>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:project uuid="446562351a804787bd6c523245bada39">tempest-FloatingIPsAssociationTestJSON-1803239001</nova:project>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <nova:port uuid="77e0efad-ce89-42fd-9284-b155767f5c74">
Feb 28 10:00:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="serial">9e27fde4-3df3-46cf-97ac-88a91baefbc0</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="uuid">9e27fde4-3df3-46cf-97ac-88a91baefbc0</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk">
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config">
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:00:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:9c:04:a0"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <target dev="tap77e0efad-ce"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/console.log" append="off"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:00:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:00:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:00:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:00:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:00:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
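The guest XML emitted by `_get_guest_xml` above is plain libvirt domain XML and can be inspected programmatically. A minimal sketch, using a trimmed excerpt of the logged XML (only the RBD disk and the tap interface are reproduced here):

```python
import xml.etree.ElementTree as ET

# Trimmed excerpt of the domain XML logged by _get_guest_xml above.
DOMAIN_XML = """
<domain type="kvm">
  <uuid>9e27fde4-3df3-46cf-97ac-88a91baefbc0</uuid>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:9c:04:a0"/>
      <target dev="tap77e0efad-ce"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
# Collect every network-backed disk's RBD volume name.
rbd_volumes = [d.find("source").get("name")
               for d in root.iter("disk") if d.get("type") == "network"]
# Collect the MAC address of each interface.
macs = [i.find("mac").get("address") for i in root.iter("interface")]
print(rbd_volumes, macs)
```

This is how one might cross-check the XML against the log: the disk source matches the `vms/<uuid>_disk` RBD image created earlier, and the MAC matches the Neutron port's `fa:16:3e:9c:04:a0`.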
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.372 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Preparing to wait for external event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.373 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.373 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.374 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
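The acquire/release pair on `"9e27fde4-...-events"` above registers an event the spawn will block on until Neutron sends `network-vif-plugged`. A hypothetical simplification of that pattern, using `threading` in place of `oslo_concurrency.lockutils` (names and structure are illustrative, not Nova's actual code):

```python
import threading

class InstanceEvents:
    """Sketch: a lock-guarded map of events a spawning instance waits on."""

    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(self, instance_uuid, event_name):
        # Corresponds to the logged Acquiring/acquired/"released" lines:
        # the lock is held only long enough to create-or-get the event.
        with self._lock:
            key = (instance_uuid, event_name)
            return self._events.setdefault(key, threading.Event())

    def signal(self, instance_uuid, event_name):
        # Called when the external event (e.g. network-vif-plugged) arrives.
        with self._lock:
            ev = self._events.pop((instance_uuid, event_name), None)
        if ev is not None:
            ev.set()

events = InstanceEvents()
waiter = events.prepare_for_instance_event(
    "9e27fde4-3df3-46cf-97ac-88a91baefbc0",
    "network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74")
```

Calling `prepare_for_instance_event` again with the same key returns the same event object, which is why the lock is needed: two greenthreads must not race to create two different events for one key.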
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.374 243456 DEBUG nova.virt.libvirt.vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:48Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.375 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.375 243456 DEBUG nova.network.os_vif_util [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.376 243456 DEBUG os_vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.377 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.378 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.382 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77e0efad-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.383 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77e0efad-ce, col_values=(('external_ids', {'iface-id': '77e0efad-ce89-42fd-9284-b155767f5c74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:04:a0', 'vm-uuid': '9e27fde4-3df3-46cf-97ac-88a91baefbc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:53 compute-0 NetworkManager[49805]: <info>  [1772272853.3863] manager: (tap77e0efad-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.396 243456 INFO os_vif [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce')
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.481 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.482 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.483 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No VIF found with MAC fa:16:3e:9c:04:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.484 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Using config drive
Feb 28 10:00:53 compute-0 nova_compute[243452]: 2026-02-28 10:00:53.519 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:00:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1251524903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:00:54 compute-0 ceph-mon[76304]: pgmap v943: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.157 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Creating config drive at /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.160 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0e_3jfik execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.236 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.237 243456 DEBUG nova.network.neutron [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.256 243456 DEBUG oslo_concurrency.lockutils [req-c2f224e1-b002-48ea-b708-c2818b6aa814 req-31d1b836-b8fa-41a2-9253-602a71c152a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.287 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0e_3jfik" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.328 243456 DEBUG nova.storage.rbd_utils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.332 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.478 243456 DEBUG oslo_concurrency.processutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config 9e27fde4-3df3-46cf-97ac-88a91baefbc0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.479 243456 INFO nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deleting local config drive /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0/disk.config because it was imported into RBD.
Feb 28 10:00:54 compute-0 kernel: tap77e0efad-ce: entered promiscuous mode
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.5229] manager: (tap77e0efad-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 28 10:00:54 compute-0 ovn_controller[146846]: 2026-02-28T10:00:54Z|00043|binding|INFO|Claiming lport 77e0efad-ce89-42fd-9284-b155767f5c74 for this chassis.
Feb 28 10:00:54 compute-0 ovn_controller[146846]: 2026-02-28T10:00:54Z|00044|binding|INFO|77e0efad-ce89-42fd-9284-b155767f5c74: Claiming fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 systemd-machined[209480]: New machine qemu-12-instance-0000000c.
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.547 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:04:a0 10.100.0.4'], port_security=['fa:16:3e:9c:04:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9e27fde4-3df3-46cf-97ac-88a91baefbc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=77e0efad-ce89-42fd-9284-b155767f5c74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.549 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 77e0efad-ce89-42fd-9284-b155767f5c74 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b bound to our chassis
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.550 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b077226-4f83-4c1a-adf6-4eac2024491b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.561 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71984a35-61 in ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.563 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71984a35-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b18308e1-18f0-45d9-8cfa-bdddaf1185b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a39762d5-becb-4b53-bc92-74932f73bbf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.577 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb74e54-2ed3-4a30-a405-a5c2502c5e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_controller[146846]: 2026-02-28T10:00:54Z|00045|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 ovn-installed in OVS
Feb 28 10:00:54 compute-0 ovn_controller[146846]: 2026-02-28T10:00:54Z|00046|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 up in Southbound
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 systemd-udevd[256504]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e09ba378-0387-40b4-a3cb-1c3718e9c24b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.5991] device (tap77e0efad-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.5999] device (tap77e0efad-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.607 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a442b4f1-3fd1-44ba-a57f-6061b76dcd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.6118] manager: (tap71984a35-60): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.610 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f068473c-ff46-4246-958e-affdd57af741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.635 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c2226a39-b6d1-4ac1-a52e-935ba121d049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.638 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2edd3b-e1af-4c3e-80f8-f2b2d679c90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.6575] device (tap71984a35-60): carrier: link connected
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.660 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c88657e3-b8df-4750-8b5b-26881ef92d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b96340f3-d7af-4353-a9ed-ab6984893e90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 24401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256534, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.686 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9771bb-5cbb-4e7d-82a7-0664de26aa1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:3a8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436194, 'tstamp': 436194}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256535, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.697 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb05727-edfd-49bb-8284-ca3bf594bb6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 24401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256536, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.719 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5132b491-1dd6-4783-9e90-a9dc22a43d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dedf3d4e-4a5a-4313-8ef6-045c646e0355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.776 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.776 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.777 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 NetworkManager[49805]: <info>  [1772272854.7796] manager: (tap71984a35-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 28 10:00:54 compute-0 kernel: tap71984a35-60: entered promiscuous mode
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.783 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:00:54 compute-0 ovn_controller[146846]: 2026-02-28T10:00:54Z|00047|binding|INFO|Releasing lport a589fe00-3087-4c3d-af34-6af9a22081de from this chassis (sb_readonly=0)
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.785 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b354cb-ea36-4b61-bad5-688ee111a200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.787 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/71984a35-6483-4ac4-a021-6bd1f9989d8b.pid.haproxy
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:00:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:54.787 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'env', 'PROCESS_TAG=haproxy-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71984a35-6483-4ac4-a021-6bd1f9989d8b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.898 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272854.8975585, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Started (Lifecycle Event)
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.931 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272854.89767, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.931 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Paused (Lifecycle Event)
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.957 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.960 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:54 compute-0 nova_compute[243452]: 2026-02-28 10:00:54.992 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:55 compute-0 podman[256610]: 2026-02-28 10:00:55.110175739 +0000 UTC m=+0.048600840 container create 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 10:00:55 compute-0 systemd[1]: Started libpod-conmon-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope.
Feb 28 10:00:55 compute-0 podman[256610]: 2026-02-28 10:00:55.081683207 +0000 UTC m=+0.020108378 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:00:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70290e901e28e6d5eb5c062ad508564d16cc1df6df50d7d37fe37074f9c775c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:00:55 compute-0 podman[256610]: 2026-02-28 10:00:55.203769216 +0000 UTC m=+0.142194337 container init 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:00:55 compute-0 podman[256610]: 2026-02-28 10:00:55.211197295 +0000 UTC m=+0.149622436 container start 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:00:55 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : New worker (256631) forked
Feb 28 10:00:55 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : Loading success.
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.660 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.660 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.680 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:00:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.772 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.773 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.780 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.780 243456 INFO nova.compute.claims [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:00:55 compute-0 nova_compute[243452]: 2026-02-28 10:00:55.933 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773155305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.481 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.489 243456 DEBUG nova.compute.provider_tree [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.511 243456 DEBUG nova.scheduler.client.report [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.540 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.541 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.594 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.594 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.613 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.629 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.710 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.712 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.713 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.747 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.777 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.801 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.804 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.884 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.885 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.930 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.934 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:56 compute-0 nova_compute[243452]: 2026-02-28 10:00:56.954 243456 DEBUG nova.policy [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:00:57 compute-0 ceph-mon[76304]: pgmap v944: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:00:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/773155305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:00:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:00:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:57 compute-0 nova_compute[243452]: 2026-02-28 10:00:57.944 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:57 compute-0 nova_compute[243452]: 2026-02-28 10:00:57.945 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:57 compute-0 nova_compute[243452]: 2026-02-28 10:00:57.964 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.064 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.065 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.081 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.081 243456 INFO nova.compute.claims [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.147 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Successfully created port: 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:00:58 compute-0 ceph-mon[76304]: pgmap v945: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.205 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.250 243456 DEBUG nova.compute.manager [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.251 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.251 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.252 243456 DEBUG oslo_concurrency.lockutils [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.252 243456 DEBUG nova.compute.manager [req-4965be19-f9ab-44ac-bf86-6be88b295e14 req-87c7b5fd-d770-4152-b7f7-d8a12422b9f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Processing event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.254 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.305 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272858.2602673, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.307 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Resumed (Lifecycle Event)
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.310 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.317 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.364 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.388 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance spawned successfully.
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.389 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.392 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.435 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.440 243456 DEBUG nova.objects.instance [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.443 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.443 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.444 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.445 243456 DEBUG nova.virt.libvirt.driver [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.466 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.467 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.468 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.498 243456 INFO nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 9.41 seconds to spawn the instance on the hypervisor.
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.498 243456 DEBUG nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.568 243456 INFO nova.compute.manager [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 10.49 seconds to build instance.
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.585 243456 DEBUG oslo_concurrency.lockutils [None req-9ff1d32a-8827-4c97-bd9e-a347424361d6 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:00:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968829183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.892 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Successfully updated port: 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.894 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.900 243456 DEBUG nova.compute.provider_tree [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.904 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.904 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.905 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.919 243456 DEBUG nova.scheduler.client.report [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.942 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.943 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.991 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:00:58 compute-0 nova_compute[243452]: 2026-02-28 10:00:58.991 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.009 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.029 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG nova.compute.manager [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-changed-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG nova.compute.manager [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Refreshing instance network info cache due to event network-changed-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.104 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.122 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.124 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.124 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating image(s)
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.148 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.171 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3968829183' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.199 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.203 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.221 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.235 243456 DEBUG nova.policy [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.260 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.261 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.262 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.263 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.291 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.295 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.728 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:00:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.797 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.888 243456 DEBUG nova.objects.instance [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.891 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Successfully created port: eaa5f652-63c2-4a9b-aae0-eec299565322 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.911 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.912 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Ensure instance console log exists: /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.913 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.913 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:00:59 compute-0 nova_compute[243452]: 2026-02-28 10:00:59.914 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:00 compute-0 ceph-mon[76304]: pgmap v946: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.488 243456 DEBUG nova.network.neutron [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.548 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance network_info: |[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.549 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Refreshing network info cache for port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.552 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.558 243456 WARNING nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.564 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.568 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.578 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.libvirt.host [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.579 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.580 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.581 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.582 243456 DEBUG nova.virt.hardware [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.589 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.716 243456 DEBUG nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.717 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG oslo_concurrency.lockutils [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 DEBUG nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.718 243456 WARNING nova.compute.manager [req-9842146e-d25e-42f7-85ef-c641f59e9eb3 req-acb3cef8-a602-43c6-8adf-0ffef0a4f8cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received unexpected event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with vm_state active and task_state None.
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.789 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Successfully updated port: eaa5f652-63c2-4a9b-aae0-eec299565322 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.816 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.817 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:00 compute-0 nova_compute[243452]: 2026-02-28 10:01:00.817 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276677706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.147 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.155 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.177 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.182 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2276677706' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:01 compute-0 anacron[7982]: Job `cron.monthly' started
Feb 28 10:01:01 compute-0 anacron[7982]: Job `cron.monthly' terminated
Feb 28 10:01:01 compute-0 anacron[7982]: Normal exit (3 jobs run)
Feb 28 10:01:01 compute-0 CROND[257079]: (root) CMD (run-parts /etc/cron.hourly)
Feb 28 10:01:01 compute-0 run-parts[257082]: (/etc/cron.hourly) starting 0anacron
Feb 28 10:01:01 compute-0 run-parts[257088]: (/etc/cron.hourly) finished 0anacron
Feb 28 10:01:01 compute-0 CROND[257078]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 28 10:01:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004804878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 255 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.7 MiB/s wr, 115 op/s
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.753 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.758 243456 DEBUG nova.virt.libvirt.vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-14944
20313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:56Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.759 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.760 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.766 243456 DEBUG nova.objects.instance [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.798 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <name>instance-0000000d</name>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:00</nova:creationTime>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 10:01:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <target dev="tap1c6e98f3-e9"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:01 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:01 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.801 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.802 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.802 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.803 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.804 243456 DEBUG nova.virt.libvirt.vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:56Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.805 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.806 243456 DEBUG nova.network.os_vif_util [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.806 243456 DEBUG os_vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.809 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.814 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.815 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:01 compute-0 NetworkManager[49805]: <info>  [1772272861.8186] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.826 243456 INFO os_vif [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.895 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.896 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.896 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.897 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.923 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.933 243456 DEBUG nova.compute.manager [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-changed-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.934 243456 DEBUG nova.compute.manager [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Refreshing instance network info cache due to event network-changed-eaa5f652-63c2-4a9b-aae0-eec299565322. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:01 compute-0 nova_compute[243452]: 2026-02-28 10:01:01.934 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.219 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updated VIF entry in instance network info cache for port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.219 243456 DEBUG nova.network.neutron [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.233 243456 DEBUG oslo_concurrency.lockutils [req-69c4aa5a-e780-4c0a-aa9e-1d546644e01c req-fcd9ff68-240e-4911-9605-4d1d118bfe36 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.264 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.265 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2004804878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:02 compute-0 ceph-mon[76304]: pgmap v947: 305 pgs: 305 active+clean; 255 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.7 MiB/s wr, 115 op/s
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.293 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.329 243456 DEBUG nova.network.neutron [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.367 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.368 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance network_info: |[{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.369 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.370 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Refreshing network info cache for port eaa5f652-63c2-4a9b-aae0-eec299565322 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.381 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start _get_guest_xml network_info=[{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.388 243456 WARNING nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.396 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.397 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.406 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.406 243456 DEBUG nova.virt.libvirt.host [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.407 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.408 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.409 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.409 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.410 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.410 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.411 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.412 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.412 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.413 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.414 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.415 243456 DEBUG nova.virt.hardware [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.419 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.449 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.457 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqv3o97zw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.516 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.517 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.521 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.522 243456 INFO nova.compute.claims [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.588 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqv3o97zw" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.622 243456 DEBUG nova.storage.rbd_utils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.626 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.760 243456 DEBUG oslo_concurrency.processutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.761 243456 INFO nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.
Feb 28 10:01:02 compute-0 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 10:01:02 compute-0 ovn_controller[146846]: 2026-02-28T10:01:02Z|00048|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 10:01:02 compute-0 ovn_controller[146846]: 2026-02-28T10:01:02Z|00049|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:02 compute-0 NetworkManager[49805]: <info>  [1772272862.8159] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.837 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.838 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.839 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:02 compute-0 systemd-udevd[257182]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.850 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f91d171-c7a9-40b4-8e90-92845386f9bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.851 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce5045ea-11 in ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.852 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce5045ea-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf09106-4812-428c-bdf1-14a15183d1df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd0d840-90f8-45f5-b84a-37080f6c0024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_controller[146846]: 2026-02-28T10:01:02Z|00050|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 10:01:02 compute-0 ovn_controller[146846]: 2026-02-28T10:01:02Z|00051|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 10:01:02 compute-0 nova_compute[243452]: 2026-02-28 10:01:02.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:02 compute-0 NetworkManager[49805]: <info>  [1772272862.8666] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:02 compute-0 NetworkManager[49805]: <info>  [1772272862.8673] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.868 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1b75388b-69b9-4bab-b6db-d5d64ff4069c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 systemd-machined[209480]: New machine qemu-13-instance-0000000d.
Feb 28 10:01:02 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.890 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2fbf35-d766-4030-8d22-d65781cafc39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.921 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2d97e-e8b6-48b9-936a-cbbc94c91f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 NetworkManager[49805]: <info>  [1772272862.9284] manager: (tapce5045ea-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f717aa-bcd8-4d6e-b36d-c4c3ede356ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.961 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65e9b207-7637-4aa8-ae11-49336ab92980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.965 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeff24f-1ebd-47e7-a936-3ef91d11d9d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 NetworkManager[49805]: <info>  [1772272862.9854] device (tapce5045ea-10): carrier: link connected
Feb 28 10:01:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:02.992 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee406a61-cb36-4e7f-bc4e-e221c07b9aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705257239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.014 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff3c81-7b57-4d1b-981e-1a331f694924]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257219, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[910e128b-ac2a-4469-a55c-746e09cbf104]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:35bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437027, 'tstamp': 437027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257227, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.039 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.044 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59c99f0d-11e6-480d-badb-744880011b7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257238, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.077 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[515250a2-ac3b-4fde-8653-9e2ac460bc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.138 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a14f6d6d-5a5b-4fae-b516-f2b155824113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 NetworkManager[49805]: <info>  [1772272863.1562] manager: (tapce5045ea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 28 10:01:03 compute-0 kernel: tapce5045ea-10: entered promiscuous mode
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.161 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 ovn_controller[146846]: 2026-02-28T10:01:03Z|00052|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.170 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.176 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[111a4262-46cc-47e1-99d3-8cf37c44cf1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.178 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.pid.haproxy
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:01:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:03.179 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'env', 'PROCESS_TAG=haproxy-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce5045ea-1437-4fd1-bdb3-3fe83470fb24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:01:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3705257239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.505 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272863.5048075, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.506 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.523 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.528 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272863.5057948, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.528 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)
Feb 28 10:01:03 compute-0 podman[257351]: 2026-02-28 10:01:03.536816144 +0000 UTC m=+0.056609146 container create f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.551 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.553 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.577 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:03 compute-0 systemd[1]: Started libpod-conmon-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope.
Feb 28 10:01:03 compute-0 podman[257351]: 2026-02-28 10:01:03.50436676 +0000 UTC m=+0.024159742 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:01:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e22472ecc95bc20b98dcc0462d4cd4b77133e1b85e4f473f2fb73cbcea480a12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772477825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:03 compute-0 podman[257351]: 2026-02-28 10:01:03.622092495 +0000 UTC m=+0.141885487 container init f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:01:03 compute-0 podman[257351]: 2026-02-28 10:01:03.63076496 +0000 UTC m=+0.150557932 container start f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.636 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.637 243456 DEBUG nova.virt.libvirt.vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-14944
20313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.637 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.638 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.640 243456 DEBUG nova.objects.instance [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:03 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : New worker (257374) forked
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <uuid>89ced16e-cc50-41d5-bfcb-fa5af85c14c8</uuid>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <name>instance-0000000e</name>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-1953624900</nova:name>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:02</nova:creationTime>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <nova:port uuid="eaa5f652-63c2-4a9b-aae0-eec299565322">
Feb 28 10:01:03 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="serial">89ced16e-cc50-41d5-bfcb-fa5af85c14c8</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="uuid">89ced16e-cc50-41d5-bfcb-fa5af85c14c8</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk">
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config">
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f2:c8:7d"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <target dev="tapeaa5f652-63"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/console.log" append="off"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:03 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:03 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:03 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:03 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:03 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Preparing to wait for external event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.654 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:03 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : Loading success.
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.655 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.655 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.656 243456 DEBUG nova.virt.libvirt.vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:00:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.656 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG nova.network.os_vif_util [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG os_vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa5f652-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeaa5f652-63, col_values=(('external_ids', {'iface-id': 'eaa5f652-63c2-4a9b-aae0-eec299565322', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:c8:7d', 'vm-uuid': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:03 compute-0 NetworkManager[49805]: <info>  [1772272863.6679] manager: (tapeaa5f652-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.675 243456 INFO os_vif [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63')
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.720 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:f2:c8:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.721 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Using config drive
Feb 28 10:01:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188977669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.738 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 292 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.752 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.765 243456 DEBUG nova.compute.provider_tree [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.781 243456 DEBUG nova.scheduler.client.report [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.808 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.808 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.863 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.864 243456 DEBUG nova.network.neutron [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.888 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:03 compute-0 nova_compute[243452]: 2026-02-28 10:01:03.906 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.032 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.035 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.036 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating image(s)
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.065 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.092 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.121 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.128 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.175 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.176 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.198 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.201 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2772477825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1188977669' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:04 compute-0 ceph-mon[76304]: pgmap v948: 305 pgs: 305 active+clean; 292 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.377 243456 DEBUG nova.network.neutron [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.378 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.397 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Creating config drive at /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.404 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeq04zqwz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.507 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.508 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.509 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.509 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.510 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.510 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.511 243456 DEBUG oslo_concurrency.lockutils [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.512 243456 DEBUG nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.512 243456 WARNING nova.compute.manager [req-06bb235e-4155-4f30-9483-b6b90d8eb9f1 req-dba7ca7f-ecfb-4bd0-ada9-8a803da28de5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state building and task_state spawning.
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.513 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.518 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272864.5179641, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.518 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.522 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.525 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.526 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.533 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updated VIF entry in instance network info cache for port eaa5f652-63c2-4a9b-aae0-eec299565322. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.534 243456 DEBUG nova.network.neutron [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [{"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.539 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeq04zqwz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.566 243456 DEBUG nova.storage.rbd_utils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.571 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.597 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.598 243456 DEBUG oslo_concurrency.lockutils [req-7dd0752e-6aea-4d46-bfc5-f8adeaa8c150 req-564da138-3b21-4860-978d-b381e2e2f172 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-89ced16e-cc50-41d5-bfcb-fa5af85c14c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.603 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.604 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.604 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.605 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.605 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.606 243456 DEBUG nova.virt.libvirt.driver [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.610 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.650 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.677 243456 INFO nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 7.97 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.678 243456 DEBUG nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.762 243456 INFO nova.compute.manager [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 9.04 seconds to build instance.
Feb 28 10:01:04 compute-0 nova_compute[243452]: 2026-02-28 10:01:04.792 243456 DEBUG oslo_concurrency.lockutils [None req-b0d3e36e-9c9a-49b0-ad63-8803d958eec5 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.255 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.326 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.331 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] resizing rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.364 243456 DEBUG oslo_concurrency.processutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config 89ced16e-cc50-41d5-bfcb-fa5af85c14c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.365 243456 INFO nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deleting local config drive /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8/disk.config because it was imported into RBD.
Feb 28 10:01:05 compute-0 kernel: tapeaa5f652-63: entered promiscuous mode
Feb 28 10:01:05 compute-0 NetworkManager[49805]: <info>  [1772272865.4001] manager: (tapeaa5f652-63): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb 28 10:01:05 compute-0 systemd-udevd[257204]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:05 compute-0 ovn_controller[146846]: 2026-02-28T10:01:05Z|00053|binding|INFO|Claiming lport eaa5f652-63c2-4a9b-aae0-eec299565322 for this chassis.
Feb 28 10:01:05 compute-0 ovn_controller[146846]: 2026-02-28T10:01:05Z|00054|binding|INFO|eaa5f652-63c2-4a9b-aae0-eec299565322: Claiming fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 10:01:05 compute-0 ovn_controller[146846]: 2026-02-28T10:01:05Z|00055|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 ovn-installed in OVS
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:05 compute-0 NetworkManager[49805]: <info>  [1772272865.4110] device (tapeaa5f652-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:05 compute-0 NetworkManager[49805]: <info>  [1772272865.4117] device (tapeaa5f652-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:05 compute-0 ovn_controller[146846]: 2026-02-28T10:01:05Z|00056|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 up in Southbound
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.414 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c8:7d 10.100.0.13'], port_security=['fa:16:3e:f2:c8:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=eaa5f652-63c2-4a9b-aae0-eec299565322) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.416 156681 INFO neutron.agent.ovn.metadata.agent [-] Port eaa5f652-63c2-4a9b-aae0-eec299565322 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.417 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:05 compute-0 systemd-machined[209480]: New machine qemu-14-instance-0000000e.
Feb 28 10:01:05 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.433 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5c42d9-7d7a-4e13-bbed-d99066fcc944]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.465 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fffd65e4-8046-463f-a771-32a46635758e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.468 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7bf5ea-22af-4db0-8154-a0d78c225d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.475 243456 DEBUG nova.objects.instance [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'migration_context' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.483 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6961919b-9a9d-4125-80da-e82170a9fca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Ensure instance console log exists: /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.496 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.496 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b360da7e-2818-47ac-a14c-af21e6aa37ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257636, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.497 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.497 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.498 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.503 243456 WARNING nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.507 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.511 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38c79587-1c89-4cc8-a16d-a42785839c28]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257637, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257637, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.512 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.512 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.515 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.515 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.516 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:05.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.516 243456 DEBUG nova.virt.libvirt.host [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.517 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.518 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.519 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.519 243456 DEBUG nova.virt.hardware [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.521 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 307 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272865.9442315, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.947 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Started (Lifecycle Event)
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.980 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.984 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272865.9443486, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:05 compute-0 nova_compute[243452]: 2026-02-28 10:01:05.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Paused (Lifecycle Event)
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.018 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.023 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.044 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/530190241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.091 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.113 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.117 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.360 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.361 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.581 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.582 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.583 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.583 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979094616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.734 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.735 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Processing event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.736 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG oslo_concurrency.lockutils [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.737 243456 DEBUG nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.738 243456 WARNING nova.compute.manager [req-5341a2bd-e713-49ea-b5db-30337417f4f2 req-57198834-dc1e-4050-b252-b47c44b6d6e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received unexpected event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with vm_state building and task_state spawning.
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.738 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.742 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272866.7414095, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.742 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Resumed (Lifecycle Event)
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.744 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.745 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.747 243456 DEBUG nova.objects.instance [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'pci_devices' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.750 243456 INFO nova.virt.libvirt.driver [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance spawned successfully.
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.750 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.769 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.773 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <uuid>c8eefb37-41ae-4d33-8085-e4e8c3ce2075</uuid>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <name>instance-0000000f</name>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerExternalEventsTest-server-1374385373</nova:name>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:05</nova:creationTime>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:user uuid="bf06d33f07d54a60bff952b57a770e77">tempest-ServerExternalEventsTest-234006114-project-member</nova:user>
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <nova:project uuid="383ac6e5ec8946a0afec20ecf5e8021e">tempest-ServerExternalEventsTest-234006114</nova:project>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="serial">c8eefb37-41ae-4d33-8085-e4e8c3ce2075</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="uuid">c8eefb37-41ae-4d33-8085-e4e8c3ce2075</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk">
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config">
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:06 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/console.log" append="off"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:06 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:06 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:06 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:06 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:06 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.794 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.795 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.796 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.796 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.797 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.797 243456 DEBUG nova.virt.libvirt.driver [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:06 compute-0 ceph-mon[76304]: pgmap v949: 305 pgs: 305 active+clean; 307 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Feb 28 10:01:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/530190241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3979094616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.823 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.856 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.866 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.868 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Using config drive
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.893 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.901 243456 INFO nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 7.78 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.902 243456 DEBUG nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:06 compute-0 nova_compute[243452]: 2026-02-28 10:01:06.981 243456 INFO nova.compute.manager [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 8.96 seconds to build instance.
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.003 243456 DEBUG oslo_concurrency.lockutils [None req-73419269-214c-4131-bf8f-0c1dfc10a5c6 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.648 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Creating config drive at /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.652 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkr3b55rm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.787 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkr3b55rm" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.819 243456 DEBUG nova.storage.rbd_utils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] rbd image c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.823 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.931 243456 DEBUG oslo_concurrency.processutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config c8eefb37-41ae-4d33-8085-e4e8c3ce2075_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.931 243456 INFO nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deleting local config drive /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075/disk.config because it was imported into RBD.
Feb 28 10:01:07 compute-0 systemd-machined[209480]: New machine qemu-15-instance-0000000f.
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.984 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:07 compute-0 nova_compute[243452]: 2026-02-28 10:01:07.984 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:07 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.002 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.079 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.080 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.086 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.086 243456 INFO nova.compute.claims [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.263 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.710 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272868.710008, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.711 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Resumed (Lifecycle Event)
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.713 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.714 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.718 243456 INFO nova.virt.libvirt.driver [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance spawned successfully.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.718 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.749 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.750 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.751 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.751 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.752 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.752 243456 DEBUG nova.virt.libvirt.driver [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.757 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.759 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.789 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272868.7131577, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.789 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Started (Lifecycle Event)
Feb 28 10:01:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035697662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.824 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:08 compute-0 ceph-mon[76304]: pgmap v950: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 10:01:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1035697662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.834 243456 INFO nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 4.80 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.836 243456 DEBUG nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.837 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.838 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.852 243456 DEBUG nova.compute.provider_tree [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.880 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.883 243456 DEBUG nova.scheduler.client.report [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.911 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.912 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.924 243456 INFO nova.compute.manager [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 6.56 seconds to build instance.
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.948 243456 DEBUG oslo_concurrency.lockutils [None req-7fb26887-26de-40cb-9b78-bf2159d4d37a bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.982 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:08 compute-0 nova_compute[243452]: 2026-02-28 10:01:08.983 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.183 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.203 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.306 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.307 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.308 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating image(s)
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.327 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.349 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.374 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.378 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.398 243456 DEBUG nova.policy [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b1dc716928742ca935bb155783e2d9a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '446562351a804787bd6c523245bada39', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.403 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.420 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.421 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.422 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.423 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.423 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.438 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.439 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.439 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.440 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.456 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.459 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f37b722c-8def-4545-a455-39df230540d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.478 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.478 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.479 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:09 compute-0 ovn_controller[146846]: 2026-02-28T10:01:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 10:01:09 compute-0 ovn_controller[146846]: 2026-02-28T10:01:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:04:a0 10.100.0.4
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.734 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f37b722c-8def-4545-a455-39df230540d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.792 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] resizing rbd image f37b722c-8def-4545-a455-39df230540d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.866 243456 DEBUG nova.objects.instance [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'migration_context' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.883 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.883 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Ensure instance console log exists: /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.884 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:09 compute-0 nova_compute[243452]: 2026-02-28 10:01:09.957 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Successfully created port: 24f1e17a-f542-4eab-9180-968c61bc1cf7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:01:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040341953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.185 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.282 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.282 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.286 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.286 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.292 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.292 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.296 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.296 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.528 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.530 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4038MB free_disk=59.9117830619216GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.531 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.532 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9e27fde4-3df3-46cf-97ac-88a91baefbc0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c92e965f-2d18-4b78-8b78-7d391039f382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c8eefb37-41ae-4d33-8085-e4e8c3ce2075 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f37b722c-8def-4545-a455-39df230540d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:01:10 compute-0 nova_compute[243452]: 2026-02-28 10:01:10.745 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:10 compute-0 ceph-mon[76304]: pgmap v951: 305 pgs: 305 active+clean; 318 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Feb 28 10:01:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1040341953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205101017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.374 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.383 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.402 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.424 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.426 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.426 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.440 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.441 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.441 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.473 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Successfully updated port: 24f1e17a-f542-4eab-9180-968c61bc1cf7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.502 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.503 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.503 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.596 243456 DEBUG nova.compute.manager [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.597 243456 DEBUG nova.compute.manager [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.597 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.736 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.741 243456 DEBUG nova.compute.manager [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.742 243456 DEBUG nova.compute.manager [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Acquiring lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Acquired lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.743 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 388 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.7 MiB/s wr, 375 op/s
Feb 28 10:01:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4205101017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.907 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.917 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.918 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.937 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.941 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.942 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.942 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.943 243456 INFO nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Terminating instance
Feb 28 10:01:11 compute-0 nova_compute[243452]: 2026-02-28 10:01:11.944 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.003 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.003 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.010 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.010 243456 INFO nova.compute.claims [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.213 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.653 243456 DEBUG nova.network.neutron [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.677 243456 DEBUG oslo_concurrency.lockutils [None req-f91acf22-a15f-4bd1-8a25-837be84c8696 659c6b9da71d4b6d88c16cd01e3e121b b3cac0b1e8b5417184b3b8163e0c489a - - default default] Releasing lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.679 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquired lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.680 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762713474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.743 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.749 243456 DEBUG nova.compute.provider_tree [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.765 243456 DEBUG nova.scheduler.client.report [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.786 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.787 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.836 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.836 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.852 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.857 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:12 compute-0 ceph-mon[76304]: pgmap v952: 305 pgs: 305 active+clean; 388 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.7 MiB/s wr, 375 op/s
Feb 28 10:01:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/762713474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.875 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.974 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.976 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:12 compute-0 nova_compute[243452]: 2026-02-28 10:01:12.977 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating image(s)
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.004 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.023 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.048 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.052 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.068 243456 DEBUG nova.policy [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.072 243456 DEBUG nova.network.neutron [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.095 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.095 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance network_info: |[{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.097 243456 DEBUG nova.network.neutron [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.097 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.098 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.101 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start _get_guest_xml network_info=[{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.104 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.104 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.105 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.105 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.125 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.128 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08147934-b9df-4154-8d1f-3fd318973eb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.149 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Releasing lock "refresh_cache-c8eefb37-41ae-4d33-8085-e4e8c3ce2075" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.150 243456 DEBUG nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.153 243456 WARNING nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.158 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.159 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.162 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.162 243456 DEBUG nova.virt.libvirt.host [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.163 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.164 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.165 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.165 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.166 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.166 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.167 243456 DEBUG nova.virt.hardware [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.176 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 28 10:01:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 5.007s CPU time.
Feb 28 10:01:13 compute-0 systemd-machined[209480]: Machine qemu-15-instance-0000000f terminated.
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.371 243456 INFO nova.virt.libvirt.driver [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance destroyed successfully.
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.373 243456 DEBUG nova.objects.instance [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lazy-loading 'resources' on Instance uuid c8eefb37-41ae-4d33-8085-e4e8c3ce2075 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.375 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08147934-b9df-4154-8d1f-3fd318973eb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.469 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.548 243456 DEBUG nova.objects.instance [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.566 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.566 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Ensure instance console log exists: /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.567 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.668 243456 INFO nova.virt.libvirt.driver [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deleting instance files /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_del
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.669 243456 INFO nova.virt.libvirt.driver [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deletion of /var/lib/nova/instances/c8eefb37-41ae-4d33-8085-e4e8c3ce2075_del complete
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747155551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.702 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.727 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.731 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.750 243456 INFO nova.compute.manager [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.752 243456 DEBUG oslo.service.loopingcall [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.753 243456 DEBUG nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:01:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 413 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 7.4 MiB/s wr, 368 op/s
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.754 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.783 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Successfully created port: 1b6bf464-31de-4504-9af4-59a95d6d9c05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:01:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2747155551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.926 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.955 243456 DEBUG nova.network.neutron [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:13 compute-0 nova_compute[243452]: 2026-02-28 10:01:13.971 243456 INFO nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Took 0.22 seconds to deallocate network for instance.
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.036 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.037 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.201 243456 DEBUG oslo_concurrency.processutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634414211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.282 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.283 243456 DEBUG nova.virt.libvirt.vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:09Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.284 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.285 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.286 243456 DEBUG nova.objects.instance [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'pci_devices' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.303 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <uuid>f37b722c-8def-4545-a455-39df230540d8</uuid>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <name>instance-00000010</name>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1434521462</nova:name>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:13</nova:creationTime>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:user uuid="3b1dc716928742ca935bb155783e2d9a">tempest-FloatingIPsAssociationTestJSON-1803239001-project-member</nova:user>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:project uuid="446562351a804787bd6c523245bada39">tempest-FloatingIPsAssociationTestJSON-1803239001</nova:project>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <nova:port uuid="24f1e17a-f542-4eab-9180-968c61bc1cf7">
Feb 28 10:01:14 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="serial">f37b722c-8def-4545-a455-39df230540d8</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="uuid">f37b722c-8def-4545-a455-39df230540d8</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f37b722c-8def-4545-a455-39df230540d8_disk">
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f37b722c-8def-4545-a455-39df230540d8_disk.config">
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:14 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:84:dd:a4"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <target dev="tap24f1e17a-f5"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/console.log" append="off"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:14 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:14 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:14 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:14 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:14 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.307 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Preparing to wait for external event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.308 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.309 243456 DEBUG nova.virt.libvirt.vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:09Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.310 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.310 243456 DEBUG nova.network.os_vif_util [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.311 243456 DEBUG os_vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.317 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24f1e17a-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.318 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24f1e17a-f5, col_values=(('external_ids', {'iface-id': '24f1e17a-f542-4eab-9180-968c61bc1cf7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dd:a4', 'vm-uuid': 'f37b722c-8def-4545-a455-39df230540d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:14 compute-0 NetworkManager[49805]: <info>  [1772272874.3211] manager: (tap24f1e17a-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.328 243456 INFO os_vif [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5')
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.372 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.373 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.373 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] No VIF found with MAC fa:16:3e:84:dd:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.374 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Using config drive
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.398 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.450 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384180488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.744 243456 DEBUG oslo_concurrency.processutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.751 243456 DEBUG nova.compute.provider_tree [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.775 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.776 243456 DEBUG nova.network.neutron [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.781 243456 DEBUG nova.scheduler.client.report [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.804 243456 DEBUG oslo_concurrency.lockutils [req-a458f70a-d1e7-4206-879a-bc93cf9c918b req-bec16dad-c73c-4398-bf4c-0036136286bb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.807 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.858 243456 INFO nova.scheduler.client.report [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Deleted allocations for instance c8eefb37-41ae-4d33-8085-e4e8c3ce2075
Feb 28 10:01:14 compute-0 ceph-mon[76304]: pgmap v953: 305 pgs: 305 active+clean; 413 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 7.4 MiB/s wr, 368 op/s
Feb 28 10:01:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1634414211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3384180488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.950 243456 DEBUG oslo_concurrency.lockutils [None req-c685c454-8d63-4a8e-b796-0619c959dd03 bf06d33f07d54a60bff952b57a770e77 383ac6e5ec8946a0afec20ecf5e8021e - - default default] Lock "c8eefb37-41ae-4d33-8085-e4e8c3ce2075" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.955 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Creating config drive at /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.965 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6izudh52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:14 compute-0 nova_compute[243452]: 2026-02-28 10:01:14.997 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Successfully updated port: 1b6bf464-31de-4504-9af4-59a95d6d9c05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.015 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.016 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.016 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.068 243456 DEBUG nova.compute.manager [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-changed-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.069 243456 DEBUG nova.compute.manager [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Refreshing instance network info cache due to event network-changed-1b6bf464-31de-4504-9af4-59a95d6d9c05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.069 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.099 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6izudh52" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.135 243456 DEBUG nova.storage.rbd_utils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] rbd image f37b722c-8def-4545-a455-39df230540d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.140 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config f37b722c-8def-4545-a455-39df230540d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.199 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.320 243456 DEBUG oslo_concurrency.processutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config f37b722c-8def-4545-a455-39df230540d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.320 243456 INFO nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deleting local config drive /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8/disk.config because it was imported into RBD.
Feb 28 10:01:15 compute-0 NetworkManager[49805]: <info>  [1772272875.3654] manager: (tap24f1e17a-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Feb 28 10:01:15 compute-0 systemd-udevd[258205]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:15 compute-0 kernel: tap24f1e17a-f5: entered promiscuous mode
Feb 28 10:01:15 compute-0 ovn_controller[146846]: 2026-02-28T10:01:15Z|00057|binding|INFO|Claiming lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 for this chassis.
Feb 28 10:01:15 compute-0 ovn_controller[146846]: 2026-02-28T10:01:15Z|00058|binding|INFO|24f1e17a-f542-4eab-9180-968c61bc1cf7: Claiming fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:15 compute-0 NetworkManager[49805]: <info>  [1772272875.3852] device (tap24f1e17a-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:15 compute-0 ovn_controller[146846]: 2026-02-28T10:01:15Z|00059|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 ovn-installed in OVS
Feb 28 10:01:15 compute-0 NetworkManager[49805]: <info>  [1772272875.3865] device (tap24f1e17a-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:15 compute-0 systemd-machined[209480]: New machine qemu-16-instance-00000010.
Feb 28 10:01:15 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Feb 28 10:01:15 compute-0 ovn_controller[146846]: 2026-02-28T10:01:15Z|00060|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 up in Southbound
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.521 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dd:a4 10.100.0.7'], port_security=['fa:16:3e:84:dd:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37b722c-8def-4545-a455-39df230540d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=24f1e17a-f542-4eab-9180-968c61bc1cf7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.522 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 24f1e17a-f542-4eab-9180-968c61bc1cf7 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b bound to our chassis
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.524 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.536 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26b30c7e-0399-489c-a5a9-6f2056c93f88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.561 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[14726ade-30ab-4eb7-9037-277868e0fa48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.565 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[42ce50d0-41aa-4994-8302-99e08c5828d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.590 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f072a8-87ab-43b8-bfd1-a56102415f37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e49017b-4c0d-464e-86c5-2a23c4444273]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258492, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.630 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51475c66-245f-4c45-bfb6-81cbaf707eb4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436202, 'tstamp': 436202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258508, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436205, 'tstamp': 436205}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258508, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.632 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.636 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.636 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.638 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:15.638 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 409 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.1 MiB/s wr, 393 op/s
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.7716343, f37b722c-8def-4545-a455-39df230540d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.773 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Started (Lifecycle Event)
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.803 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.809 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.776962, f37b722c-8def-4545-a455-39df230540d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.810 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Paused (Lifecycle Event)
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.842 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.867 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG nova.compute.manager [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.919 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.920 243456 DEBUG oslo_concurrency.lockutils [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.920 243456 DEBUG nova.compute.manager [req-a46bc3e1-2fae-441e-b0f2-97533ab3bdf2 req-5d52427d-9de9-4f73-9c12-81e2f5875504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Processing event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.921 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.925 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272875.924463, f37b722c-8def-4545-a455-39df230540d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.926 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Resumed (Lifecycle Event)
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.929 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.932 243456 INFO nova.virt.libvirt.driver [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance spawned successfully.
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.932 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.961 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.964 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.965 243456 DEBUG nova.virt.libvirt.driver [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:15 compute-0 nova_compute[243452]: 2026-02-28 10:01:15.969 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.016 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.048 243456 INFO nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 6.74 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.049 243456 DEBUG nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.121 243456 INFO nova.compute.manager [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 8.07 seconds to build instance.
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.143 243456 DEBUG oslo_concurrency.lockutils [None req-1fac8f6c-85ec-4b43-9aad-701d812c1dba 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:16 compute-0 ceph-mon[76304]: pgmap v954: 305 pgs: 305 active+clean; 409 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.1 MiB/s wr, 393 op/s
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.902 243456 DEBUG nova.network.neutron [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.944 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.944 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance network_info: |[{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.945 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.945 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Refreshing network info cache for port 1b6bf464-31de-4504-9af4-59a95d6d9c05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.948 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start _get_guest_xml network_info=[{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.952 243456 WARNING nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.958 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.959 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.965 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.965 243456 DEBUG nova.virt.libvirt.host [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.966 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.966 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.967 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.968 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.971 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.971 243456 DEBUG nova.virt.hardware [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:16 compute-0 nova_compute[243452]: 2026-02-28 10:01:16.974 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:17 compute-0 ovn_controller[146846]: 2026-02-28T10:01:17Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:01:17 compute-0 ovn_controller[146846]: 2026-02-28T10:01:17Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:01:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080498958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:17 compute-0 nova_compute[243452]: 2026-02-28 10:01:17.555 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:17 compute-0 nova_compute[243452]: 2026-02-28 10:01:17.596 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:17 compute-0 nova_compute[243452]: 2026-02-28 10:01:17.601 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 441 op/s
Feb 28 10:01:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1080498958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933414871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.118 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.122 243456 DEBUG nova.virt.libvirt.vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:12Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.123 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.125 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.128 243456 DEBUG nova.objects.instance [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.152 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <uuid>08147934-b9df-4154-8d1f-3fd318973eb6</uuid>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <name>instance-00000011</name>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-2143913214</nova:name>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:16</nova:creationTime>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <nova:port uuid="1b6bf464-31de-4504-9af4-59a95d6d9c05">
Feb 28 10:01:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="serial">08147934-b9df-4154-8d1f-3fd318973eb6</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="uuid">08147934-b9df-4154-8d1f-3fd318973eb6</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/08147934-b9df-4154-8d1f-3fd318973eb6_disk">
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/08147934-b9df-4154-8d1f-3fd318973eb6_disk.config">
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:12:02:52"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <target dev="tap1b6bf464-31"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/console.log" append="off"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:18 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:18 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.153 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Preparing to wait for external event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.154 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.155 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:18 compute-0 podman[258579]: 2026-02-28 10:01:18.155709467 +0000 UTC m=+0.085152979 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.155 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.157 243456 DEBUG nova.virt.libvirt.vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:12Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.158 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.159 243456 DEBUG nova.network.os_vif_util [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.160 243456 DEBUG os_vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.162 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.163 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:18 compute-0 podman[258578]: 2026-02-28 10:01:18.16822108 +0000 UTC m=+0.102277982 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b6bf464-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.169 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b6bf464-31, col_values=(('external_ids', {'iface-id': '1b6bf464-31de-4504-9af4-59a95d6d9c05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:02:52', 'vm-uuid': '08147934-b9df-4154-8d1f-3fd318973eb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:18 compute-0 NetworkManager[49805]: <info>  [1772272878.1736] manager: (tap1b6bf464-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.182 243456 INFO os_vif [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31')
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.231 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.231 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.232 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:12:02:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.233 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Using config drive
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.259 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.393 243456 DEBUG nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG oslo_concurrency.lockutils [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.394 243456 DEBUG nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.395 243456 WARNING nova.compute.manager [req-3ad06832-64d9-47be-a03f-42d7aabe1ce7 req-747dd3fb-50a6-4af1-a462-0ce410400c41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received unexpected event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with vm_state active and task_state None.
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.569 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updated VIF entry in instance network info cache for port 1b6bf464-31de-4504-9af4-59a95d6d9c05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.570 243456 DEBUG nova.network.neutron [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [{"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.599 243456 DEBUG oslo_concurrency.lockutils [req-01954f58-9768-4d50-848d-002affe884c8 req-ec5ad1a7-c1a3-45c6-8aed-771802dd64c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-08147934-b9df-4154-8d1f-3fd318973eb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.706 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Creating config drive at /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.711 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqfnr8lb5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.834 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqfnr8lb5" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.864 243456 DEBUG nova.storage.rbd_utils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:18 compute-0 nova_compute[243452]: 2026-02-28 10:01:18.868 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:18 compute-0 ceph-mon[76304]: pgmap v955: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 441 op/s
Feb 28 10:01:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/933414871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.012 243456 DEBUG oslo_concurrency.processutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config 08147934-b9df-4154-8d1f-3fd318973eb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.013 243456 INFO nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deleting local config drive /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6/disk.config because it was imported into RBD.
Feb 28 10:01:19 compute-0 kernel: tap1b6bf464-31: entered promiscuous mode
Feb 28 10:01:19 compute-0 NetworkManager[49805]: <info>  [1772272879.0573] manager: (tap1b6bf464-31): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00061|binding|INFO|Claiming lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 for this chassis.
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00062|binding|INFO|1b6bf464-31de-4504-9af4-59a95d6d9c05: Claiming fa:16:3e:12:02:52 10.100.0.10
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00063|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 ovn-installed in OVS
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00064|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 up in Southbound
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.071 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:02:52 10.100.0.10'], port_security=['fa:16:3e:12:02:52 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '08147934-b9df-4154-8d1f-3fd318973eb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1b6bf464-31de-4504-9af4-59a95d6d9c05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.074 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1b6bf464-31de-4504-9af4-59a95d6d9c05 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.076 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 systemd-udevd[258698]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37be13a1-a105-42f4-8325-0c31682bb0ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 systemd-machined[209480]: New machine qemu-17-instance-00000011.
Feb 28 10:01:19 compute-0 NetworkManager[49805]: <info>  [1772272879.1129] device (tap1b6bf464-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:19 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Feb 28 10:01:19 compute-0 NetworkManager[49805]: <info>  [1772272879.1147] device (tap1b6bf464-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.129 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[361bc16d-ec0e-49eb-98dc-f5f12c7ab7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.134 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7eac565b-8984-4c9c-ab74-823036c51936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.155 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0828e04f-a267-47ee-bd96-9c18af7afed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.170 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba180782-d7c0-41e8-955b-ae7e2b61daf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258712, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.183 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9628dce1-aed5-438f-83d0-620de603935a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258713, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258713, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.184 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:19.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:c8:7d 10.100.0.13
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.251 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.276 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.277 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid c92e965f-2d18-4b78-8b78-7d391039f382 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.277 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.278 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid f37b722c-8def-4545-a455-39df230540d8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.278 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 08147934-b9df-4154-8d1f-3fd318973eb6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.279 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.280 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.280 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.281 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.281 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.282 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.282 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.403 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.491 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272879.4913454, 08147934-b9df-4154-8d1f-3fd318973eb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.492 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Started (Lifecycle Event)
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.530 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.534 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272879.493852, 08147934-b9df-4154-8d1f-3fd318973eb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.535 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Paused (Lifecycle Event)
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.561 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.565 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.588 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.6 MiB/s wr, 390 op/s
Feb 28 10:01:19 compute-0 NetworkManager[49805]: <info>  [1772272879.9304] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 NetworkManager[49805]: <info>  [1772272879.9316] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.971 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00065|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 10:01:19 compute-0 ovn_controller[146846]: 2026-02-28T10:01:19Z|00066|binding|INFO|Releasing lport a589fe00-3087-4c3d-af34-6af9a22081de from this chassis (sb_readonly=0)
Feb 28 10:01:19 compute-0 nova_compute[243452]: 2026-02-28 10:01:19.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.231 243456 DEBUG nova.compute.manager [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.231 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.232 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.232 243456 DEBUG oslo_concurrency.lockutils [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.233 243456 DEBUG nova.compute.manager [req-b8416d1e-3733-4ccb-8b90-19436dbd0c34 req-ad22d82e-1dfc-47f1-89b3-a849ae49a86a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Processing event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.233 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.238 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.239 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272880.239201, 08147934-b9df-4154-8d1f-3fd318973eb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.239 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Resumed (Lifecycle Event)
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.244 243456 INFO nova.virt.libvirt.driver [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance spawned successfully.
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.245 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.271 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.271 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.272 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.273 243456 DEBUG nova.virt.libvirt.driver [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.277 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.281 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.315 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.371 243456 INFO nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 7.40 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.371 243456 DEBUG nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.435 243456 INFO nova.compute.manager [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 8.46 seconds to build instance.
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.459 243456 DEBUG oslo_concurrency.lockutils [None req-c5b1ea56-5965-44ad-9f59-eb365d4ebdaa 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:20 compute-0 nova_compute[243452]: 2026-02-28 10:01:20.460 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:20 compute-0 ceph-mon[76304]: pgmap v956: 305 pgs: 305 active+clean; 444 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.6 MiB/s wr, 390 op/s
Feb 28 10:01:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 481 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 521 op/s
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.441 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.441 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.442 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.442 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 WARNING nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received unexpected event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with vm_state active and task_state None.
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.443 243456 DEBUG nova.compute.manager [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:22 compute-0 nova_compute[243452]: 2026-02-28 10:01:22.444 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:22 compute-0 ceph-mon[76304]: pgmap v957: 305 pgs: 305 active+clean; 481 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 521 op/s
Feb 28 10:01:23 compute-0 nova_compute[243452]: 2026-02-28 10:01:23.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:23 compute-0 nova_compute[243452]: 2026-02-28 10:01:23.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.6 MiB/s wr, 385 op/s
Feb 28 10:01:24 compute-0 nova_compute[243452]: 2026-02-28 10:01:24.299 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:24 compute-0 nova_compute[243452]: 2026-02-28 10:01:24.300 243456 DEBUG nova.network.neutron [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:24 compute-0 nova_compute[243452]: 2026-02-28 10:01:24.324 243456 DEBUG oslo_concurrency.lockutils [req-f0e6f22b-385a-45f7-a1e4-5c1232872e32 req-0c8a801f-8d45-45d1-a8ef-030a171b8bcd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:24 compute-0 ceph-mon[76304]: pgmap v958: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.6 MiB/s wr, 385 op/s
Feb 28 10:01:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 339 op/s
Feb 28 10:01:25 compute-0 nova_compute[243452]: 2026-02-28 10:01:25.978 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:25 compute-0 nova_compute[243452]: 2026-02-28 10:01:25.979 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.001 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.102 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.102 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.110 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.111 243456 INFO nova.compute.claims [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.281 243456 DEBUG nova.compute.manager [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.281 243456 DEBUG nova.compute.manager [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.282 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.282 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.283 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.388 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.532 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.533 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.574 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.670 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552549082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:26 compute-0 ceph-mon[76304]: pgmap v959: 305 pgs: 305 active+clean; 484 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 339 op/s
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.965 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.972 243456 DEBUG nova.compute.provider_tree [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:26 compute-0 nova_compute[243452]: 2026-02-28 10:01:26.991 243456 DEBUG nova.scheduler.client.report [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.023 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.024 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.027 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.035 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.036 243456 INFO nova.compute.claims [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.088 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.088 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.112 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.131 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.226 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.229 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.229 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating image(s)
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.258 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.278 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.300 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.304 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.352 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.353 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.353 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.354 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.375 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.377 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.403 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.617 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.671 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.752 243456 DEBUG nova.objects.instance [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 295 op/s
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.776 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.776 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Ensure instance console log exists: /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.777 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.778 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.778 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:27 compute-0 ovn_controller[146846]: 2026-02-28T10:01:27Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 10:01:27 compute-0 ovn_controller[146846]: 2026-02-28T10:01:27Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dd:a4 10.100.0.7
Feb 28 10:01:27 compute-0 nova_compute[243452]: 2026-02-28 10:01:27.934 243456 DEBUG nova.policy [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4fb1e2bbed9c4e2395c13dba974f8603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '339b7f5b41a54615b051fb9d036072dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:01:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1552549082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118599950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.000 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.012 243456 DEBUG nova.compute.provider_tree [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.044 243456 DEBUG nova.scheduler.client.report [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.073 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.075 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.128 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.129 243456 DEBUG nova.network.neutron [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.150 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.166 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.167 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.171 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.185 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.285 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.286 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.293 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.294 243456 INFO nova.compute.claims [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.299 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.301 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating image(s)
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.323 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.347 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.371 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.375 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.400 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272873.3699229, c8eefb37-41ae-4d33-8085-e4e8c3ce2075 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.400 243456 INFO nova.compute.manager [-] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] VM Stopped (Lifecycle Event)
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.421 243456 DEBUG nova.compute.manager [None req-11c76ca5-80d6-47ee-9527-b7e71801a716 - - - - - -] [instance: c8eefb37-41ae-4d33-8085-e4e8c3ce2075] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.441 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.442 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.442 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.443 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.462 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.464 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ce94b006-3fde-4285-89f7-1e435e514d3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.481 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.482 243456 DEBUG nova.network.neutron [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.497 243456 DEBUG oslo_concurrency.lockutils [req-594c78ea-6c20-49e2-a945-ffb85e5fb27f req-1ddb4e60-7eb2-4334-ab30-d354f1831249 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.601 243456 DEBUG nova.network.neutron [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.601 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.670 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ce94b006-3fde-4285-89f7-1e435e514d3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.739 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.766 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] resizing rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.847 243456 DEBUG nova.objects.instance [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'migration_context' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.861 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.862 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Ensure instance console log exists: /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.862 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.863 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.863 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.865 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.870 243456 WARNING nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.896 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.897 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.900 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.901 243456 DEBUG nova.virt.libvirt.host [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.901 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.902 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.903 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.903 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.904 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.904 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.905 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.906 243456 DEBUG nova.virt.hardware [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.910 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.935 243456 DEBUG nova.compute.manager [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.935 243456 DEBUG nova.compute.manager [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.936 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.936 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.937 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:28 compute-0 ceph-mon[76304]: pgmap v960: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 295 op/s
Feb 28 10:01:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4118599950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:28 compute-0 nova_compute[243452]: 2026-02-28 10:01:28.989 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Successfully created port: 42dc1876-90c4-4b52-b301-1c90b71ff297 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:01:29
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.meta']
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:01:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383752822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.290 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.299 243456 DEBUG nova.compute.provider_tree [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.318 243456 DEBUG nova.scheduler.client.report [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.348 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.349 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212338717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.476 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.505 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.510 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.545 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.563 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.583 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.676 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.678 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.678 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.705 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.741 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 220 op/s
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.769 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.773 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.850 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.851 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.852 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.853 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.879 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:29 compute-0 nova_compute[243452]: 2026-02-28 10:01:29.883 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/383752822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4212338717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.006 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Successfully updated port: 42dc1876-90c4-4b52-b301-1c90b71ff297 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:01:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533661090' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.027 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.030 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.031 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.044 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
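The repeated `ceph mon dump --format=json` calls above are how nova's rbd backend discovers monitor addresses; those addresses become the `<host>` elements in the libvirt disk XML later in the log. A sketch of extracting them — the sample payload is a simplified stand-in for the real mon-map JSON, which carries many more fields (fsid, epoch, features, ...):

```python
import json

# Simplified stand-in for `ceph mon dump --format=json` output.
SAMPLE = json.dumps({
    "mons": [
        {"name": "compute-0",
         "public_addrs": {"addrvec": [
             {"type": "v2", "addr": "192.168.122.100:3300"},
             {"type": "v1", "addr": "192.168.122.100:6789"},
         ]}},
    ]
})

def mon_hosts(mon_dump_json):
    """Yield (host, port) for each monitor's legacy (v1) address."""
    for mon in json.loads(mon_dump_json)["mons"]:
        for addr in mon["public_addrs"]["addrvec"]:
            if addr["type"] == "v1":
                host, port = addr["addr"].rsplit(":", 1)
                yield host, int(port)

print(list(mon_hosts(SAMPLE)))  # [('192.168.122.100', 6789)]
```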
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.047 243456 DEBUG nova.objects.instance [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.074 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <uuid>ce94b006-3fde-4285-89f7-1e435e514d3e</uuid>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <name>instance-00000013</name>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-848702146</nova:name>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:28</nova:creationTime>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:user uuid="d12cfb1e6b0d4d93916ba6a6c4b75cfc">tempest-ServersAdminNegativeTestJSON-1432426192-project-member</nova:user>
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <nova:project uuid="388f2f7e6d59433a8c88217806df2e33">tempest-ServersAdminNegativeTestJSON-1432426192</nova:project>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="serial">ce94b006-3fde-4285-89f7-1e435e514d3e</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="uuid">ce94b006-3fde-4285-89f7-1e435e514d3e</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk">
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config">
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/console.log" append="off"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:30 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:30 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:30 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:30 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:30 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
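The domain XML dumped above can be inspected mechanically; a small sketch that pulls the rbd-backed disks out of an excerpt of that XML with the standard library (the excerpt below copies just the `<devices>` disks from the log):

```python
import xml.etree.ElementTree as ET

# Minimal excerpt of the domain XML logged above.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(domain_xml):
    """Return (target_dev, pool/image, mon_host:port) per rbd-backed disk."""
    found = []
    for disk in ET.fromstring(domain_xml).iter("disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        host = src.find("host")
        found.append((disk.find("target").get("dev"),
                      src.get("name"),
                      f"{host.get('name')}:{host.get('port')}"))
    return found

print(rbd_disks(DOMAIN_XML))
```

Note how the config drive (`..._disk.config`, attached as `sda` on a SATA bus) rides alongside the root disk (`vda` on virtio), both served from the same monitor at 192.168.122.100:6789.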
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.157 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.157 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.158 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Using config drive
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.177 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.182 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.238 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
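The `rbd import` followed by the resize to 1073741824 above is the flavor's root disk being applied: m1.nano has `root_gb=1`, and 1 GiB is 1073741824 bytes, so the imported cirros base image is grown to the flavor size. A sketch of that arithmetic (the helper name and the never-shrink guard are illustrative, not nova's exact code):

```python
GiB = 1024 ** 3  # 1073741824 bytes

def resize_target_bytes(root_gb, base_size_bytes):
    """Grow the imported base image to the flavor's root size.
    Growing only: shrinking a disk under a guest filesystem would corrupt it,
    so if the base image were already larger we would keep its size."""
    return max(root_gb * GiB, base_size_bytes)

# m1.nano (root_gb=1) with the ~21 MB cirros base image:
print(resize_target_bytes(1, 21430272))  # 1073741824, matching the log
```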
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.308 243456 DEBUG nova.objects.instance [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.319 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.320 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.321 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.325 243456 WARNING nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.330 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.330 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.host [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.333 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.334 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.335 243456 DEBUG nova.virt.hardware [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.337 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:01:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.684 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Creating config drive at /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.687 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpid341_41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.746 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.810 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpid341_41" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.842 243456 DEBUG nova.storage.rbd_utils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.846 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.871 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.872 243456 DEBUG nova.network.neutron [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880366764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.890 243456 DEBUG oslo_concurrency.lockutils [req-ca1cbc50-ed4f-4fc3-9555-525257e53739 req-9ea3dbed-a19d-42ce-b7f5-8b9c45414e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.906 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.929 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.934 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:30 compute-0 ceph-mon[76304]: pgmap v961: 305 pgs: 305 active+clean; 499 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 220 op/s
Feb 28 10:01:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1533661090' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3880366764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.988 243456 DEBUG oslo_concurrency.processutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config ce94b006-3fde-4285-89f7-1e435e514d3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:30 compute-0 nova_compute[243452]: 2026-02-28 10:01:30.989 243456 INFO nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deleting local config drive /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e/disk.config because it was imported into RBD.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG nova.compute.manager [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-changed-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG nova.compute.manager [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Refreshing instance network info cache due to event network-changed-42dc1876-90c4-4b52-b301-1c90b71ff297. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.005 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:31 compute-0 systemd-machined[209480]: New machine qemu-18-instance-00000013.
Feb 28 10:01:31 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000013.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.451 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272891.4501214, ce94b006-3fde-4285-89f7-1e435e514d3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.455 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Resumed (Lifecycle Event)
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.465 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.465 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.471 243456 INFO nova.virt.libvirt.driver [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance spawned successfully.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.471 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.484 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.486 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.493 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.494 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.494 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.495 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.495 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.496 243456 DEBUG nova.virt.libvirt.driver [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166209625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272891.4545243, ce94b006-3fde-4285-89f7-1e435e514d3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Started (Lifecycle Event)
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.549 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.550 243456 DEBUG nova.objects.instance [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.563 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.566 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <name>instance-00000014</name>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:30</nova:creationTime>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:31 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:31 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:31 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:31 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:31 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.574 243456 INFO nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 3.27 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.574 243456 DEBUG nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.576 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.586 243456 DEBUG nova.network.neutron [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.631 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.634 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.634 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance network_info: |[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.635 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.635 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Refreshing network info cache for port 42dc1876-90c4-4b52-b301-1c90b71ff297 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.638 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start _get_guest_xml network_info=[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.643 243456 WARNING nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.649 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.650 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.652 243456 INFO nova.compute.manager [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 5.00 seconds to build instance.
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.661 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.libvirt.host [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.662 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.663 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.663 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.664 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.665 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.666 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.666 243456 DEBUG nova.virt.hardware [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.668 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.683 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.683 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.684 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.710 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.740 243456 DEBUG oslo_concurrency.lockutils [None req-1ddae3e6-6ef6-4828-8c87-9702175d5509 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 590 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 305 op/s
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.961 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config
Feb 28 10:01:31 compute-0 nova_compute[243452]: 2026-02-28 10:01:31.970 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp78vrattu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2166209625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.096 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp78vrattu" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.131 243456 DEBUG nova.storage.rbd_utils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.136 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489451399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.233 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:32 compute-0 ovn_controller[146846]: 2026-02-28T10:01:32Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:02:52 10.100.0.10
Feb 28 10:01:32 compute-0 ovn_controller[146846]: 2026-02-28T10:01:32Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:02:52 10.100.0.10
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.260 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.266 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.286 243456 DEBUG oslo_concurrency.processutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.287 243456 INFO nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.
Feb 28 10:01:32 compute-0 systemd-machined[209480]: New machine qemu-19-instance-00000014.
Feb 28 10:01:32 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000014.
Feb 28 10:01:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:32 compute-0 sudo[259695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:01:32 compute-0 sudo[259695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:32 compute-0 sudo[259695]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:32 compute-0 sudo[259720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:01:32 compute-0 sudo[259720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.886 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272892.8852859, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.887 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.891 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.891 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.896 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.897 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635969508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.919 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.931 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.931 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.932 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.933 243456 DEBUG nova.virt.libvirt.driver [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.937 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.938 243456 DEBUG nova.virt.libvirt.vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:27Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.938 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.939 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.939 243456 DEBUG nova.objects.instance [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272892.8859205, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.978 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.981 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <uuid>d2d9bd29-453d-4abd-a3de-c1a9603cfc11</uuid>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <name>instance-00000012</name>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-1694487247</nova:name>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:31</nova:creationTime>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <nova:port uuid="42dc1876-90c4-4b52-b301-1c90b71ff297">
Feb 28 10:01:32 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="serial">d2d9bd29-453d-4abd-a3de-c1a9603cfc11</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="uuid">d2d9bd29-453d-4abd-a3de-c1a9603cfc11</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk">
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config">
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:0e:8a:2e"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <target dev="tap42dc1876-90"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/console.log" append="off"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:32 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:32 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:32 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:32 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:32 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Preparing to wait for external event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.982 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.983 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.983 243456 DEBUG nova.virt.libvirt.vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:27Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.984 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.985 243456 DEBUG nova.network.os_vif_util [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.985 243456 DEBUG os_vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.990 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.991 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.994 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42dc1876-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.995 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42dc1876-90, col_values=(('external_ids', {'iface-id': '42dc1876-90c4-4b52-b301-1c90b71ff297', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:8a:2e', 'vm-uuid': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:32 compute-0 ceph-mon[76304]: pgmap v962: 305 pgs: 305 active+clean; 590 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 305 op/s
Feb 28 10:01:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2489451399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2635969508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:32 compute-0 NetworkManager[49805]: <info>  [1772272892.9976] manager: (tap42dc1876-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 28 10:01:32 compute-0 nova_compute[243452]: 2026-02-28 10:01:32.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.005 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.006 243456 INFO os_vif [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90')
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.009 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.042 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.056 243456 INFO nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 3.38 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.057 243456 DEBUG nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.066 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.067 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.068 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:0e:8a:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.068 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Using config drive
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.094 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.123 243456 INFO nova.compute.manager [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 4.88 seconds to build instance.
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.151 243456 DEBUG oslo_concurrency.lockutils [None req-3acdf327-8732-45ce-b544-f3a90156af35 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:33 compute-0 sudo[259720]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:01:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:01:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:01:33 compute-0 sudo[259839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:01:33 compute-0 sudo[259839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:33 compute-0 sudo[259839]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:33 compute-0 sudo[259864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:01:33 compute-0 sudo[259864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.619 243456 DEBUG nova.compute.manager [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.619 243456 DEBUG nova.compute.manager [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing instance network info cache due to event network-changed-24f1e17a-f542-4eab-9180-968c61bc1cf7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.620 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.620 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.621 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Refreshing network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.703522645 +0000 UTC m=+0.052338606 container create 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:01:33 compute-0 systemd[1]: Started libpod-conmon-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope.
Feb 28 10:01:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 643 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Feb 28 10:01:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.68311441 +0000 UTC m=+0.031930411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.802571624 +0000 UTC m=+0.151387625 container init 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.813877673 +0000 UTC m=+0.162693624 container start 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.817710711 +0000 UTC m=+0.166526662 container attach 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:01:33 compute-0 competent_bohr[259917]: 167 167
Feb 28 10:01:33 compute-0 systemd[1]: libpod-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope: Deactivated successfully.
Feb 28 10:01:33 compute-0 conmon[259917]: conmon 0c85fa5f138d8edf2d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope/container/memory.events
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.822420874 +0000 UTC m=+0.171236825 container died 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c844b2e2d961845dc6fa34e945c17530506f506f953029b152705be32ea7cd85-merged.mount: Deactivated successfully.
Feb 28 10:01:33 compute-0 podman[259902]: 2026-02-28 10:01:33.856892524 +0000 UTC m=+0.205708485 container remove 0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:01:33 compute-0 systemd[1]: libpod-conmon-0c85fa5f138d8edf2d29dc969f5901c81b848aaf6bd5b6643dc67aa9d448034b.scope: Deactivated successfully.
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.970 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updated VIF entry in instance network info cache for port 42dc1876-90c4-4b52-b301-1c90b71ff297. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.971 243456 DEBUG nova.network.neutron [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.976 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Creating config drive at /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config
Feb 28 10:01:33 compute-0 nova_compute[243452]: 2026-02-28 10:01:33.981 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3uvy17b3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:01:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.012 243456 DEBUG oslo_concurrency.lockutils [req-a36c5936-715c-434b-9456-ae7ceeb4fb0a req-e46b2fd6-5b31-4df7-9744-8a6d4dbdb39b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.055659303 +0000 UTC m=+0.056402240 container create d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 28 10:01:34 compute-0 systemd[1]: Started libpod-conmon-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope.
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.107 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3uvy17b3" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.032035248 +0000 UTC m=+0.032778195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.127 243456 DEBUG nova.storage.rbd_utils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.162 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.164088817 +0000 UTC m=+0.164831754 container init d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.172240377 +0000 UTC m=+0.172983314 container start d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.176017703 +0000 UTC m=+0.176760650 container attach d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.283 243456 DEBUG oslo_concurrency.processutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config d2d9bd29-453d-4abd-a3de-c1a9603cfc11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.287 243456 INFO nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deleting local config drive /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11/disk.config because it was imported into RBD.
Feb 28 10:01:34 compute-0 kernel: tap42dc1876-90: entered promiscuous mode
Feb 28 10:01:34 compute-0 ovn_controller[146846]: 2026-02-28T10:01:34Z|00067|binding|INFO|Claiming lport 42dc1876-90c4-4b52-b301-1c90b71ff297 for this chassis.
Feb 28 10:01:34 compute-0 ovn_controller[146846]: 2026-02-28T10:01:34Z|00068|binding|INFO|42dc1876-90c4-4b52-b301-1c90b71ff297: Claiming fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 10:01:34 compute-0 NetworkManager[49805]: <info>  [1772272894.3246] manager: (tap42dc1876-90): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:34 compute-0 systemd-udevd[259557]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.331 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:8a:2e 10.100.0.4'], port_security=['fa:16:3e:0e:8a:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=42dc1876-90c4-4b52-b301-1c90b71ff297) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.335 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 42dc1876-90c4-4b52-b301-1c90b71ff297 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.337 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:34 compute-0 ovn_controller[146846]: 2026-02-28T10:01:34Z|00069|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 ovn-installed in OVS
Feb 28 10:01:34 compute-0 ovn_controller[146846]: 2026-02-28T10:01:34Z|00070|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 up in Southbound
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:34 compute-0 NetworkManager[49805]: <info>  [1772272894.3505] device (tap42dc1876-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:34 compute-0 NetworkManager[49805]: <info>  [1772272894.3511] device (tap42dc1876-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63ab131b-71bb-457d-b84d-bf01556bf97f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 systemd-machined[209480]: New machine qemu-20-instance-00000012.
Feb 28 10:01:34 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.380 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[871c8709-5467-4e0d-9d2b-d92b6ce788ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7491a0b8-c56a-4b0a-9de8-8e1f2f2f448e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.429 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[381be29c-7e41-47bb-9826-b7b2956e9a00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.450 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b902278-44ad-4f43-b2cd-f5ed86fd86df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260033, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d314eef-3748-4a20-b007-ff0ab642cce4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260035, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260035, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.471 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:34 compute-0 nova_compute[243452]: 2026-02-28 10:01:34.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.479 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:34.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:34 compute-0 jolly_snyder[259960]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:01:34 compute-0 jolly_snyder[259960]: --> All data devices are unavailable
Feb 28 10:01:34 compute-0 systemd[1]: libpod-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope: Deactivated successfully.
Feb 28 10:01:34 compute-0 conmon[259960]: conmon d7d2970cea941c58d67e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope/container/memory.events
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.688187029 +0000 UTC m=+0.688929956 container died d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-04c6d462d2d6850d4cf8ca8ecfe73d1f6edd284f91f055a3c8d83266520fe4d1-merged.mount: Deactivated successfully.
Feb 28 10:01:34 compute-0 podman[259941]: 2026-02-28 10:01:34.735556283 +0000 UTC m=+0.736299210 container remove d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_snyder, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:01:34 compute-0 sudo[259864]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:34 compute-0 systemd[1]: libpod-conmon-d7d2970cea941c58d67ee2af05c72483b58cfbd53ef0945f487a4c2213afd9e1.scope: Deactivated successfully.
Feb 28 10:01:34 compute-0 sudo[260057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:01:34 compute-0 sudo[260057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:34 compute-0 sudo[260057]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:34 compute-0 sudo[260082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:01:34 compute-0 sudo[260082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:35 compute-0 ceph-mon[76304]: pgmap v963: 305 pgs: 305 active+clean; 643 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.072 243456 DEBUG nova.compute.manager [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG oslo_concurrency.lockutils [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.074 243456 DEBUG nova.compute.manager [req-b6d2eefd-602e-40f2-b2a6-2368d01e3d22 req-1af5c688-0892-43dc-b378-845264ccabb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Processing event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.092 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.094 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0922856, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.094 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Started (Lifecycle Event)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.098 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.103 243456 INFO nova.virt.libvirt.driver [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance spawned successfully.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.103 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.112 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.113 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.114 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.116 243456 INFO nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Terminating instance
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.118 243456 DEBUG nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.131 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.136 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updated VIF entry in instance network info cache for port 24f1e17a-f542-4eab-9180-968c61bc1cf7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.136 243456 DEBUG nova.network.neutron [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [{"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.142 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.143 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.143 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.144 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.145 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.146 243456 DEBUG nova.virt.libvirt.driver [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.171 243456 DEBUG oslo_concurrency.lockutils [req-b25d08a5-6947-4618-a921-e6203cee8e3b req-52bca9b5-6b44-4a59-a09f-996afe6e6e59 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f37b722c-8def-4545-a455-39df230540d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:35 compute-0 kernel: tap24f1e17a-f5 (unregistering): left promiscuous mode
Feb 28 10:01:35 compute-0 NetworkManager[49805]: <info>  [1772272895.1781] device (tap24f1e17a-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.180 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.180 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0937629, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.181 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Paused (Lifecycle Event)
Feb 28 10:01:35 compute-0 ovn_controller[146846]: 2026-02-28T10:01:35Z|00071|binding|INFO|Releasing lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 from this chassis (sb_readonly=0)
Feb 28 10:01:35 compute-0 ovn_controller[146846]: 2026-02-28T10:01:35Z|00072|binding|INFO|Setting lport 24f1e17a-f542-4eab-9180-968c61bc1cf7 down in Southbound
Feb 28 10:01:35 compute-0 ovn_controller[146846]: 2026-02-28T10:01:35Z|00073|binding|INFO|Removing iface tap24f1e17a-f5 ovn-installed in OVS
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.201 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dd:a4 10.100.0.7'], port_security=['fa:16:3e:84:dd:a4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37b722c-8def-4545-a455-39df230540d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=24f1e17a-f542-4eab-9180-968c61bc1cf7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.204 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 24f1e17a-f542-4eab-9180-968c61bc1cf7 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b unbound from our chassis
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.210 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.208 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71984a35-6483-4ac4-a021-6bd1f9989d8b
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.211 243456 INFO nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 7.98 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.211 243456 DEBUG nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.215 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272895.0994737, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.215 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Resumed (Lifecycle Event)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.222 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.223 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb 28 10:01:35 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 12.652s CPU time.
Feb 28 10:01:35 compute-0 systemd-machined[209480]: Machine qemu-16-instance-00000010 terminated.
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a518c0d6-d7eb-4b69-a7a4-e4d96257ea01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.250 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.250 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[25d1cbe9-fe28-4f62-ba8c-95a4760d96f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.254 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d39728-ecca-402b-abbc-a89f0db04936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.259 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.276 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d4333e-f4e2-4288-a582-727f2e4646d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.286619774 +0000 UTC m=+0.051834811 container create 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcebfaa-5304-4a43-9f26-a58de94d4fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71984a35-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:3a:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436194, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260182, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.300 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.308 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32277d7e-a243-4f39-ab5a-239e86e5608b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436202, 'tstamp': 436202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260183, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71984a35-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436205, 'tstamp': 436205}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260183, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.310 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 systemd[1]: Started libpod-conmon-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope.
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71984a35-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71984a35-60, col_values=(('external_ids', {'iface-id': 'a589fe00-3087-4c3d-af34-6af9a22081de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:35.320 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.326 243456 INFO nova.compute.manager [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 9.25 seconds to build instance.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.345 243456 DEBUG oslo_concurrency.lockutils [None req-e0ba94db-cd7e-4841-9952-4c6eb9ac4cdc 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:35 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.352 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.352 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.359 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.359 243456 INFO nova.compute.claims [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.363 243456 INFO nova.virt.libvirt.driver [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Instance destroyed successfully.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.364 243456 DEBUG nova.objects.instance [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'resources' on Instance uuid f37b722c-8def-4545-a455-39df230540d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.269368749 +0000 UTC m=+0.034583806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.368898822 +0000 UTC m=+0.134113879 container init 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.374698425 +0000 UTC m=+0.139913462 container start 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:01:35 compute-0 exciting_rhodes[260187]: 167 167
Feb 28 10:01:35 compute-0 systemd[1]: libpod-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope: Deactivated successfully.
Feb 28 10:01:35 compute-0 conmon[260187]: conmon 3c6a300e30791667bc0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope/container/memory.events
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.382000871 +0000 UTC m=+0.147215928 container attach 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.382458984 +0000 UTC m=+0.147674021 container died 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.388 243456 DEBUG nova.virt.libvirt.vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1434521462',display_name='tempest-FloatingIPsAssociationTestJSON-server-1434521462',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1434521462',id=16,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-txju9utb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:16Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=f37b722c-8def-4545-a455-39df230540d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.391 243456 DEBUG nova.network.os_vif_util [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "address": "fa:16:3e:84:dd:a4", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f1e17a-f5", "ovs_interfaceid": "24f1e17a-f542-4eab-9180-968c61bc1cf7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.392 243456 DEBUG nova.network.os_vif_util [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.393 243456 DEBUG os_vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.395 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24f1e17a-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.400 243456 INFO os_vif [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dd:a4,bridge_name='br-int',has_traffic_filtering=True,id=24f1e17a-f542-4eab-9180-968c61bc1cf7,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f1e17a-f5')
Feb 28 10:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-00006daa3d636e0ee33f3163e551eeb6053feff82c40035a4a0ce62ffb7be6cc-merged.mount: Deactivated successfully.
Feb 28 10:01:35 compute-0 podman[260164]: 2026-02-28 10:01:35.417075749 +0000 UTC m=+0.182290786 container remove 3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:01:35 compute-0 systemd[1]: libpod-conmon-3c6a300e30791667bc0a760f870545c9d45f1b95e26f82e11430101af0bf91a2.scope: Deactivated successfully.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.468 243456 INFO nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Rebuilding instance
Feb 28 10:01:35 compute-0 podman[260241]: 2026-02-28 10:01:35.609720145 +0000 UTC m=+0.066491924 container create 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:01:35 compute-0 podman[260241]: 2026-02-28 10:01:35.575324076 +0000 UTC m=+0.032095875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.703 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:35 compute-0 systemd[1]: Started libpod-conmon-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope.
Feb 28 10:01:35 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.751 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.769 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 674 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 379 op/s
Feb 28 10:01:35 compute-0 podman[260241]: 2026-02-28 10:01:35.777909542 +0000 UTC m=+0.234681361 container init 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.781 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG oslo_concurrency.lockutils [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.782 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.783 243456 DEBUG nova.compute.manager [req-299aad8a-b79a-443c-98c7-b648e2435e55 req-32ea12fe-d103-4185-814c-67676708d475 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-unplugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:01:35 compute-0 podman[260241]: 2026-02-28 10:01:35.783730946 +0000 UTC m=+0.240502715 container start 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:01:35 compute-0 podman[260241]: 2026-02-28 10:01:35.802239127 +0000 UTC m=+0.259010906 container attach 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.855 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.869 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.882 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.894 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.907 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.919 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.977 243456 INFO nova.virt.libvirt.driver [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deleting instance files /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8_del
Feb 28 10:01:35 compute-0 nova_compute[243452]: 2026-02-28 10:01:35.978 243456 INFO nova.virt.libvirt.driver [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Deletion of /var/lib/nova/instances/f37b722c-8def-4545-a455-39df230540d8_del complete
Feb 28 10:01:36 compute-0 boring_pike[260259]: {
Feb 28 10:01:36 compute-0 boring_pike[260259]:     "0": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:         {
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "devices": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "/dev/loop3"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             ],
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_name": "ceph_lv0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_size": "21470642176",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "name": "ceph_lv0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "tags": {
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.crush_device_class": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.encrypted": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_id": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.vdo": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.with_tpm": "0"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             },
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "vg_name": "ceph_vg0"
Feb 28 10:01:36 compute-0 boring_pike[260259]:         }
Feb 28 10:01:36 compute-0 boring_pike[260259]:     ],
Feb 28 10:01:36 compute-0 boring_pike[260259]:     "1": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:         {
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "devices": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "/dev/loop4"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             ],
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_name": "ceph_lv1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_size": "21470642176",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "name": "ceph_lv1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "tags": {
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.crush_device_class": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.encrypted": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_id": "1",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.vdo": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.with_tpm": "0"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             },
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "vg_name": "ceph_vg1"
Feb 28 10:01:36 compute-0 boring_pike[260259]:         }
Feb 28 10:01:36 compute-0 boring_pike[260259]:     ],
Feb 28 10:01:36 compute-0 boring_pike[260259]:     "2": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:         {
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "devices": [
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "/dev/loop5"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             ],
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_name": "ceph_lv2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_size": "21470642176",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "name": "ceph_lv2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "tags": {
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.crush_device_class": "",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.encrypted": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osd_id": "2",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.vdo": "0",
Feb 28 10:01:36 compute-0 boring_pike[260259]:                 "ceph.with_tpm": "0"
Feb 28 10:01:36 compute-0 boring_pike[260259]:             },
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "type": "block",
Feb 28 10:01:36 compute-0 boring_pike[260259]:             "vg_name": "ceph_vg2"
Feb 28 10:01:36 compute-0 boring_pike[260259]:         }
Feb 28 10:01:36 compute-0 boring_pike[260259]:     ]
Feb 28 10:01:36 compute-0 boring_pike[260259]: }
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.043 243456 INFO nova.compute.manager [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 0.92 seconds to destroy the instance on the hypervisor.
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.043 243456 DEBUG oslo.service.loopingcall [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.044 243456 DEBUG nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.044 243456 DEBUG nova.network.neutron [-] [instance: f37b722c-8def-4545-a455-39df230540d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:01:36 compute-0 systemd[1]: libpod-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope: Deactivated successfully.
Feb 28 10:01:36 compute-0 podman[260287]: 2026-02-28 10:01:36.087146832 +0000 UTC m=+0.025783917 container died 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dd20e64680734bcb844c6217170d9fee0b6c89c9516be7c1861be8654cb256c-merged.mount: Deactivated successfully.
Feb 28 10:01:36 compute-0 podman[260287]: 2026-02-28 10:01:36.131312746 +0000 UTC m=+0.069949821 container remove 29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:01:36 compute-0 systemd[1]: libpod-conmon-29b148492bd853febc5ee22673fdb0c16b71f3a65fa4f0c9b320b648f6495868.scope: Deactivated successfully.
Feb 28 10:01:36 compute-0 sudo[260082]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:36 compute-0 sudo[260302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:01:36 compute-0 sudo[260302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:36 compute-0 sudo[260302]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967810180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:36 compute-0 sudo[260327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:01:36 compute-0 sudo[260327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.275 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.281 243456 DEBUG nova.compute.provider_tree [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.298 243456 DEBUG nova.scheduler.client.report [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.323 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.324 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.368 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.368 243456 DEBUG nova.network.neutron [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.392 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.420 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.499 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.501 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.502 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating image(s)
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.527 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.55396819 +0000 UTC m=+0.046820340 container create 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.563 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:36 compute-0 systemd[1]: Started libpod-conmon-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope.
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.595 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.600 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.621520792 +0000 UTC m=+0.114372942 container init 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.532521306 +0000 UTC m=+0.025373516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.627256514 +0000 UTC m=+0.120108644 container start 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:01:36 compute-0 fervent_einstein[260432]: 167 167
Feb 28 10:01:36 compute-0 systemd[1]: libpod-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope: Deactivated successfully.
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.632062969 +0000 UTC m=+0.124915099 container attach 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 10:01:36 compute-0 conmon[260432]: conmon 778d69ba8212c9cfcf33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope/container/memory.events
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.632975395 +0000 UTC m=+0.125827525 container died 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0068f3b3a27544eaa4486f76724916600c281b8923da91d2e79e4af742f9fdc0-merged.mount: Deactivated successfully.
Feb 28 10:01:36 compute-0 podman[260366]: 2026-02-28 10:01:36.666410907 +0000 UTC m=+0.159263037 container remove 778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.674 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.675 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.675 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.676 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:36 compute-0 systemd[1]: libpod-conmon-778d69ba8212c9cfcf33160b4a504a723039533a861e1db424f5fe0ecb4ba880.scope: Deactivated successfully.
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.696 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.701 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.781 243456 DEBUG nova.network.neutron [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.781 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:01:36 compute-0 podman[260494]: 2026-02-28 10:01:36.818878271 +0000 UTC m=+0.043008262 container create 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:01:36 compute-0 systemd[1]: Started libpod-conmon-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope.
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.880 243456 DEBUG nova.network.neutron [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:01:36 compute-0 podman[260494]: 2026-02-28 10:01:36.797268613 +0000 UTC m=+0.021398624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.901 243456 INFO nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] Took 0.86 seconds to deallocate network for instance.
Feb 28 10:01:36 compute-0 podman[260494]: 2026-02-28 10:01:36.906011305 +0000 UTC m=+0.130141296 container init 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:01:36 compute-0 podman[260494]: 2026-02-28 10:01:36.913005922 +0000 UTC m=+0.137135913 container start 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:01:36 compute-0 podman[260494]: 2026-02-28 10:01:36.918686962 +0000 UTC m=+0.142816953 container attach 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.936 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.963 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:36 compute-0 nova_compute[243452]: 2026-02-28 10:01:36.964 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.005 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] resizing rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:37 compute-0 ceph-mon[76304]: pgmap v964: 305 pgs: 305 active+clean; 674 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 379 op/s
Feb 28 10:01:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1967810180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.126 243456 DEBUG nova.objects.instance [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'migration_context' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Ensure instance console log exists: /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.148 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.149 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.149 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.150 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.153 243456 WARNING nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.158 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.159 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.161 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.libvirt.host [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.162 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.163 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.164 243456 DEBUG nova.virt.hardware [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.166 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.316 243456 DEBUG oslo_concurrency.processutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:37 compute-0 lvm[260701]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:01:37 compute-0 lvm[260701]: VG ceph_vg0 finished
Feb 28 10:01:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:37 compute-0 lvm[260703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:01:37 compute-0 lvm[260703]: VG ceph_vg1 finished
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.613 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.614 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.616 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.617 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:37 compute-0 lvm[260705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:01:37 compute-0 lvm[260705]: VG ceph_vg2 finished
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 WARNING nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received unexpected event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with vm_state active and task_state None.
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.618 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-deleted-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.compute.manager [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.619 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:37 compute-0 agitated_hermann[260514]: {}
Feb 28 10:01:37 compute-0 systemd[1]: libpod-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Deactivated successfully.
Feb 28 10:01:37 compute-0 systemd[1]: libpod-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Consumed 1.045s CPU time.
Feb 28 10:01:37 compute-0 podman[260494]: 2026-02-28 10:01:37.695034427 +0000 UTC m=+0.919164418 container died 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f52c6c790f3f1a13fc27448f470511197f5f8aab2177092e38c047dec2b587f3-merged.mount: Deactivated successfully.
Feb 28 10:01:37 compute-0 podman[260494]: 2026-02-28 10:01:37.732981562 +0000 UTC m=+0.957111563 container remove 757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:01:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663080830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:37 compute-0 systemd[1]: libpod-conmon-757428c54ab78234c83ce7c533298969779bb6479bfc29a760a79df35a6fe124.scope: Deactivated successfully.
Feb 28 10:01:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 411 op/s
Feb 28 10:01:37 compute-0 sudo[260327]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:01:37 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.784 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:37 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.813 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.819 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:37 compute-0 sudo[260726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:01:37 compute-0 sudo[260726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:01:37 compute-0 sudo[260726]: pam_unix(sudo:session): session closed for user root
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.841 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Acquiring lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.841 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Acquired lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.842 243456 DEBUG nova.network.neutron [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590387511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.889 243456 DEBUG oslo_concurrency.processutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.895 243456 DEBUG nova.compute.provider_tree [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.911 243456 DEBUG nova.scheduler.client.report [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.934 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:37 compute-0 nova_compute[243452]: 2026-02-28 10:01:37.955 243456 INFO nova.scheduler.client.report [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Deleted allocations for instance f37b722c-8def-4545-a455-39df230540d8
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.015 243456 DEBUG oslo_concurrency.lockutils [None req-7f854a76-eda6-4e6c-9539-fc0311550182 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/663080830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:01:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1590387511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.152 243456 DEBUG nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.152 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f37b722c-8def-4545-a455-39df230540d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG oslo_concurrency.lockutils [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f37b722c-8def-4545-a455-39df230540d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 DEBUG nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] No waiting events found dispatching network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.153 243456 WARNING nova.compute.manager [req-42c42e9b-ded0-49cc-8eee-b13d0b086cff req-548d21cb-2844-4d37-8e1a-4024e9241f1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f37b722c-8def-4545-a455-39df230540d8] Received unexpected event network-vif-plugged-24f1e17a-f542-4eab-9180-968c61bc1cf7 for instance with vm_state deleted and task_state None.
Feb 28 10:01:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566414693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.323 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.325 243456 DEBUG nova.objects.instance [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.343 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <uuid>cd5eedf6-c835-46d6-9378-148eb04d4cb2</uuid>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <name>instance-00000015</name>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1400219163</nova:name>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:37</nova:creationTime>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:user uuid="d12cfb1e6b0d4d93916ba6a6c4b75cfc">tempest-ServersAdminNegativeTestJSON-1432426192-project-member</nova:user>
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <nova:project uuid="388f2f7e6d59433a8c88217806df2e33">tempest-ServersAdminNegativeTestJSON-1432426192</nova:project>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="serial">cd5eedf6-c835-46d6-9378-148eb04d4cb2</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="uuid">cd5eedf6-c835-46d6-9378-148eb04d4cb2</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk">
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config">
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:38 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/console.log" append="off"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:38 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:38 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:38 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:38 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:38 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.395 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.395 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.396 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Using config drive
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.415 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:38 compute-0 nova_compute[243452]: 2026-02-28 10:01:38.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:39 compute-0 ceph-mon[76304]: pgmap v965: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 411 op/s
Feb 28 10:01:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3566414693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.272 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Creating config drive at /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.279 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8bktf0om execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.410 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8bktf0om" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.436 243456 DEBUG nova.storage.rbd_utils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] rbd image cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.440 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.598 243456 DEBUG oslo_concurrency.processutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config cd5eedf6-c835-46d6-9378-148eb04d4cb2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:39 compute-0 nova_compute[243452]: 2026-02-28 10:01:39.600 243456 INFO nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deleting local config drive /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2/disk.config because it was imported into RBD.
Feb 28 10:01:39 compute-0 systemd-machined[209480]: New machine qemu-21-instance-00000015.
Feb 28 10:01:39 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Feb 28 10:01:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 9.2 MiB/s wr, 391 op/s
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.008 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272900.0083244, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.009 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Resumed (Lifecycle Event)
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.022 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.023 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.028 243456 INFO nova.virt.libvirt.driver [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance spawned successfully.
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.028 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.040 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.044 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.056 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.057 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.058 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.058 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.059 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.060 243456 DEBUG nova.virt.libvirt.driver [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.074 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.074 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272900.0218563, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.075 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Started (Lifecycle Event)
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.118 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.166 243456 INFO nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 3.67 seconds to spawn the instance on the hypervisor.
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.167 243456 DEBUG nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.176 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.241 243456 INFO nova.compute.manager [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 4.92 seconds to build instance.
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.260 243456 DEBUG oslo_concurrency.lockutils [None req-f8fe4c30-57b5-4424-8e18-5920e2c40888 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004546580513667694 of space, bias 1.0, pg target 1.3639741541003083 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024910054340674293 of space, bias 1.0, pg target 0.7448106247861613 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.023018427299415e-07 of space, bias 4.0, pg target 0.00107915300390501 quantized to 16 (current 16)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:01:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.779 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.780 243456 DEBUG nova.network.neutron [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:40 compute-0 nova_compute[243452]: 2026-02-28 10:01:40.902 243456 DEBUG oslo_concurrency.lockutils [req-f5ab97e3-95bd-4558-9e58-9c9bba4c0fbe req-d292c1a8-7afa-4a54-b179-8100f25f2c15 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:41 compute-0 ceph-mon[76304]: pgmap v966: 305 pgs: 305 active+clean; 656 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 9.2 MiB/s wr, 391 op/s
Feb 28 10:01:41 compute-0 nova_compute[243452]: 2026-02-28 10:01:41.427 243456 DEBUG nova.network.neutron [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:41 compute-0 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG oslo_concurrency.lockutils [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] Releasing lock "refresh_cache-d2d9bd29-453d-4abd-a3de-c1a9603cfc11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:41 compute-0 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG nova.compute.manager [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 28 10:01:41 compute-0 nova_compute[243452]: 2026-02-28 10:01:41.455 243456 DEBUG nova.compute.manager [None req-0ddd3846-ae6b-43e0-98af-fdd91e5ab2dd 0e532ad680b24bcbac2ad58e279c9d00 81c87609780749f7b368e51bddc62945 - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] network_info to inject: |[{"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 28 10:01:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 477 op/s
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.007 243456 DEBUG nova.objects.instance [None req-f7342722-e58f-43f4-a242-177cbc0b1374 b9190d82939143d782afeba7662b6241 59eeacf1fc7e434d81334a4fff5e51d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.027 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272902.0276446, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.028 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Paused (Lifecycle Event)
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.055 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.064 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.086 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:01:42 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb 28 10:01:42 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 2.400s CPU time.
Feb 28 10:01:42 compute-0 systemd-machined[209480]: Machine qemu-21-instance-00000015 terminated.
Feb 28 10:01:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:42 compute-0 nova_compute[243452]: 2026-02-28 10:01:42.666 243456 DEBUG nova.compute.manager [None req-f7342722-e58f-43f4-a242-177cbc0b1374 b9190d82939143d782afeba7662b6241 59eeacf1fc7e434d81334a4fff5e51d9 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:43 compute-0 ceph-mon[76304]: pgmap v967: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 477 op/s
Feb 28 10:01:43 compute-0 nova_compute[243452]: 2026-02-28 10:01:43.420 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.4 MiB/s wr, 410 op/s
Feb 28 10:01:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:44.705 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:44.707 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.935 243456 DEBUG nova.compute.manager [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG nova.compute.manager [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing instance network info cache due to event network-changed-77e0efad-ce89-42fd-9284-b155767f5c74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:44 compute-0 nova_compute[243452]: 2026-02-28 10:01:44.936 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Refreshing network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:01:45 compute-0 ceph-mon[76304]: pgmap v968: 305 pgs: 305 active+clean; 656 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.4 MiB/s wr, 410 op/s
Feb 28 10:01:45 compute-0 nova_compute[243452]: 2026-02-28 10:01:45.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:01:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:01:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:01:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:45.708 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 677 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.1 MiB/s wr, 396 op/s
Feb 28 10:01:45 compute-0 nova_compute[243452]: 2026-02-28 10:01:45.977 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:01:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:01:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/704200910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:01:46 compute-0 ceph-mon[76304]: pgmap v969: 305 pgs: 305 active+clean; 677 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.1 MiB/s wr, 396 op/s
Feb 28 10:01:46 compute-0 ovn_controller[146846]: 2026-02-28T10:01:46Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 10:01:46 compute-0 ovn_controller[146846]: 2026-02-28T10:01:46Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:8a:2e 10.100.0.4
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.054 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.054 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.055 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.055 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.056 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.058 243456 INFO nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Terminating instance
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.059 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.059 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquired lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.060 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.307 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.363 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updated VIF entry in instance network info cache for port 77e0efad-ce89-42fd-9284-b155767f5c74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.365 243456 DEBUG nova.network.neutron [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [{"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.387 243456 DEBUG oslo_concurrency.lockutils [req-82b1cbd5-a446-45cd-a9e9-d421152b6a88 req-55dcae6b-ebcf-4c82-b234-217a404324ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e27fde4-3df3-46cf-97ac-88a91baefbc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.576 243456 DEBUG nova.network.neutron [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.597 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Releasing lock "refresh_cache-cd5eedf6-c835-46d6-9378-148eb04d4cb2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.597 243456 DEBUG nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.605 243456 INFO nova.virt.libvirt.driver [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance destroyed successfully.
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.605 243456 DEBUG nova.objects.instance [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'resources' on Instance uuid cd5eedf6-c835-46d6-9378-148eb04d4cb2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.6 MiB/s wr, 316 op/s
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.888 243456 INFO nova.virt.libvirt.driver [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deleting instance files /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2_del
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.888 243456 INFO nova.virt.libvirt.driver [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deletion of /var/lib/nova/instances/cd5eedf6-c835-46d6-9378-148eb04d4cb2_del complete
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.966 243456 INFO nova.compute.manager [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 0.37 seconds to destroy the instance on the hypervisor.
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG oslo.service.loopingcall [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:01:47 compute-0 nova_compute[243452]: 2026-02-28 10:01:47.967 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.121 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.149 243456 DEBUG nova.network.neutron [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.170 243456 INFO nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Took 0.20 seconds to deallocate network for instance.
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.232 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.233 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.410 243456 DEBUG oslo_concurrency.processutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.781 243456 INFO nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Rebuilding instance
Feb 28 10:01:48 compute-0 ceph-mon[76304]: pgmap v970: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.6 MiB/s wr, 316 op/s
Feb 28 10:01:48 compute-0 nova_compute[243452]: 2026-02-28 10:01:48.993 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance shutdown successfully after 13 seconds.
Feb 28 10:01:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534240982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.040 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'trusted_certs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.043 243456 DEBUG oslo_concurrency.processutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:49 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.049 243456 DEBUG nova.compute.provider_tree [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:49 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000014.scope: Consumed 13.242s CPU time.
Feb 28 10:01:49 compute-0 systemd-machined[209480]: Machine qemu-19-instance-00000014 terminated.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.057 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.065 243456 DEBUG nova.scheduler.client.report [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.090 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.105 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_requests' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.118 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.132 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.134 243456 INFO nova.scheduler.client.report [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Deleted allocations for instance cd5eedf6-c835-46d6-9378-148eb04d4cb2
Feb 28 10:01:49 compute-0 podman[260952]: 2026-02-28 10:01:49.138872165 +0000 UTC m=+0.088661221 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.141 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 podman[260951]: 2026-02-28 10:01:49.143739021 +0000 UTC m=+0.093861536 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.155 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.161 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.206 243456 DEBUG oslo_concurrency.lockutils [None req-6ff8205a-f0e9-4408-a0d2-e47fe625cfbc d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "cd5eedf6-c835-46d6-9378-148eb04d4cb2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.212 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.302 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.302 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.303 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.303 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.304 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.306 243456 INFO nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Terminating instance
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.308 243456 DEBUG nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:01:49 compute-0 kernel: tap77e0efad-ce (unregistering): left promiscuous mode
Feb 28 10:01:49 compute-0 NetworkManager[49805]: <info>  [1772272909.3606] device (tap77e0efad-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 ovn_controller[146846]: 2026-02-28T10:01:49Z|00074|binding|INFO|Releasing lport 77e0efad-ce89-42fd-9284-b155767f5c74 from this chassis (sb_readonly=0)
Feb 28 10:01:49 compute-0 ovn_controller[146846]: 2026-02-28T10:01:49Z|00075|binding|INFO|Setting lport 77e0efad-ce89-42fd-9284-b155767f5c74 down in Southbound
Feb 28 10:01:49 compute-0 ovn_controller[146846]: 2026-02-28T10:01:49Z|00076|binding|INFO|Removing iface tap77e0efad-ce ovn-installed in OVS
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.377 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:04:a0 10.100.0.4'], port_security=['fa:16:3e:9c:04:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9e27fde4-3df3-46cf-97ac-88a91baefbc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '446562351a804787bd6c523245bada39', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c45a4b58-82e0-4f02-9f8e-0ad33df761ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d1e8e62-0bec-49c5-9374-c674d11c0532, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=77e0efad-ce89-42fd-9284-b155767f5c74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.378 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 77e0efad-ce89-42fd-9284-b155767f5c74 in datapath 71984a35-6483-4ac4-a021-6bd1f9989d8b unbound from our chassis
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.380 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71984a35-6483-4ac4-a021-6bd1f9989d8b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7fea34-bbb6-4d05-a548-0c766d3621f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.382 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b namespace which is not needed anymore
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 28 10:01:49 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.304s CPU time.
Feb 28 10:01:49 compute-0 systemd-machined[209480]: Machine qemu-12-instance-0000000c terminated.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.507 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.509 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : haproxy version is 2.8.14-c23fe91
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [NOTICE]   (256629) : path to executable is /usr/sbin/haproxy
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : Exiting Master process...
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : Exiting Master process...
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [ALERT]    (256629) : Current worker (256631) exited with code 143 (Terminated)
Feb 28 10:01:49 compute-0 neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b[256625]: [WARNING]  (256629) : All workers exited. Exiting... (0)
Feb 28 10:01:49 compute-0 systemd[1]: libpod-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope: Deactivated successfully.
Feb 28 10:01:49 compute-0 podman[261042]: 2026-02-28 10:01:49.523473825 +0000 UTC m=+0.053118523 container died 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.542 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Instance destroyed successfully.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.542 243456 DEBUG nova.objects.instance [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lazy-loading 'resources' on Instance uuid 9e27fde4-3df3-46cf-97ac-88a91baefbc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd-userdata-shm.mount: Deactivated successfully.
Feb 28 10:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70290e901e28e6d5eb5c062ad508564d16cc1df6df50d7d37fe37074f9c775c-merged.mount: Deactivated successfully.
Feb 28 10:01:49 compute-0 podman[261042]: 2026-02-28 10:01:49.56638665 +0000 UTC m=+0.096031358 container cleanup 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:01:49 compute-0 systemd[1]: libpod-conmon-0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd.scope: Deactivated successfully.
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.615 243456 DEBUG nova.virt.libvirt.vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1253849242',display_name='tempest-FloatingIPsAssociationTestJSON-server-1253849242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1253849242',id=12,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:00:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='446562351a804787bd6c523245bada39',ramdisk_id='',reservation_id='r-nu7q8nez',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1803239001',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1803239001-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:00:58Z,user_data=None,user_id='3b1dc716928742ca935bb155783e2d9a',uuid=9e27fde4-3df3-46cf-97ac-88a91baefbc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.616 243456 DEBUG nova.network.os_vif_util [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converting VIF {"id": "77e0efad-ce89-42fd-9284-b155767f5c74", "address": "fa:16:3e:9c:04:a0", "network": {"id": "71984a35-6483-4ac4-a021-6bd1f9989d8b", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1138722046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "446562351a804787bd6c523245bada39", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77e0efad-ce", "ovs_interfaceid": "77e0efad-ce89-42fd-9284-b155767f5c74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.617 243456 DEBUG nova.network.os_vif_util [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.617 243456 DEBUG os_vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.621 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77e0efad-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.630 243456 INFO os_vif [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:04:a0,bridge_name='br-int',has_traffic_filtering=True,id=77e0efad-ce89-42fd-9284-b155767f5c74,network=Network(71984a35-6483-4ac4-a021-6bd1f9989d8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77e0efad-ce')
Feb 28 10:01:49 compute-0 podman[261079]: 2026-02-28 10:01:49.635262754 +0000 UTC m=+0.044903492 container remove 0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.642 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1141ab66-0b75-438f-9dad-672a7f4c1a92]: (4, ('Sat Feb 28 10:01:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b (0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd)\n0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd\nSat Feb 28 10:01:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b (0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd)\n0233c3a3426125766b4976c29885a7e8a4bd9aac5fb29cfb8eb72ff2e1375cdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.643 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53a9c2ef-165b-40ec-8977-e1dd822b576a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.644 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71984a35-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:49 compute-0 kernel: tap71984a35-60: left promiscuous mode
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.651 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a129fc64-6ea1-499b-9d83-fc6a610e97c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.661 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4d0b51-ffdb-4eea-8ca9-872a3ae982ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c47ae017-4b2b-49fd-b3c5-71eef31d47ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.677 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[817c6bf1-603b-43a9-b3f4-a72064043bc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436189, 'reachable_time': 33716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261109, 'error': None, 'target': 'ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.681 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71984a35-6483-4ac4-a021-6bd1f9989d8b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:01:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d71984a35\x2d6483\x2d4ac4\x2da021\x2d6bd1f9989d8b.mount: Deactivated successfully.
Feb 28 10:01:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:49.681 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2b34f1-89a7-42ed-a767-a20fda2cdba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 260 op/s
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.781 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.781 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.801 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.821 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.843 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.846 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.847 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3534240982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.965 243456 INFO nova.virt.libvirt.driver [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deleting instance files /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0_del
Feb 28 10:01:49 compute-0 nova_compute[243452]: 2026-02-28 10:01:49.966 243456 INFO nova.virt.libvirt.driver [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deletion of /var/lib/nova/instances/9e27fde4-3df3-46cf-97ac-88a91baefbc0_del complete
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.008 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG oslo_concurrency.lockutils [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.009 243456 DEBUG nova.compute.manager [req-58940474-5dde-44c1-991b-09ecd9c942e0 req-966d8598-c0a9-4e1f-803b-d8462f5a68b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-unplugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.014 243456 INFO nova.compute.manager [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG oslo.service.loopingcall [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.015 243456 DEBUG nova.network.neutron [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.081 243456 DEBUG nova.virt.libvirt.imagebackend [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/88971623-4808-4102-a4a7-34a287d8b7fe/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/88971623-4808-4102-a4a7-34a287d8b7fe/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.354 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272895.3534305, f37b722c-8def-4545-a455-39df230540d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.355 243456 INFO nova.compute.manager [-] [instance: f37b722c-8def-4545-a455-39df230540d8] VM Stopped (Lifecycle Event)
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.377 243456 DEBUG nova.compute.manager [None req-46c5655a-1a8c-408a-9d29-b8c1fb3547c7 - - - - - -] [instance: f37b722c-8def-4545-a455-39df230540d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.406 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.407 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.407 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.408 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.408 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.410 243456 INFO nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Terminating instance
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.412 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.413 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquired lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.414 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:01:50 compute-0 nova_compute[243452]: 2026-02-28 10:01:50.673 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:51 compute-0 ceph-mon[76304]: pgmap v971: 305 pgs: 305 active+clean; 721 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 260 op/s
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.413 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance shutdown successfully after 2 seconds.
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.422 243456 DEBUG nova.network.neutron [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.441 243456 INFO nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Took 1.43 seconds to deallocate network for instance.
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.502 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.503 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.593 243456 DEBUG nova.network.neutron [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.616 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Releasing lock "refresh_cache-ce94b006-3fde-4285-89f7-1e435e514d3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.616 243456 DEBUG nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.654 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.716 243456 DEBUG oslo_concurrency.processutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.754 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.755 243456 DEBUG nova.virt.images [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] 88971623-4808-4102-a4a7-34a287d8b7fe was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.759 243456 DEBUG nova.privsep.utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.760 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.762082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911762127, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1451, "num_deletes": 251, "total_data_size": 2015445, "memory_usage": 2052432, "flush_reason": "Manual Compaction"}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 28 10:01:51 compute-0 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 10:01:51 compute-0 NetworkManager[49805]: <info>  [1772272911.7714] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:01:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 608 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 387 op/s
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911777897, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1982253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19589, "largest_seqno": 21039, "table_properties": {"data_size": 1975817, "index_size": 3511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14904, "raw_average_key_size": 20, "raw_value_size": 1962330, "raw_average_value_size": 2655, "num_data_blocks": 159, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272783, "oldest_key_time": 1772272783, "file_creation_time": 1772272911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15860 microseconds, and 3758 cpu microseconds.
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:01:51 compute-0 ovn_controller[146846]: 2026-02-28T10:01:51Z|00077|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 10:01:51 compute-0 ovn_controller[146846]: 2026-02-28T10:01:51Z|00078|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 10:01:51 compute-0 ovn_controller[146846]: 2026-02-28T10:01:51Z|00079|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.777942) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1982253 bytes OK
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.777963) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.783969) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784008) EVENT_LOG_v1 {"time_micros": 1772272911784001, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784031) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2008949, prev total WAL file size 2008949, number of live WAL files 2.
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.785466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1935KB)], [47(7106KB)]
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911785523, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9258826, "oldest_snapshot_seqno": -1}
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.794 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.795 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.796 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b78c2ba-d419-43a9-b8d7-1882df47ddbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 10:01:51 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.547s CPU time.
Feb 28 10:01:51 compute-0 systemd-machined[209480]: Machine qemu-13-instance-0000000d terminated.
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4419 keys, 7489099 bytes, temperature: kUnknown
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911830223, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7489099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7459055, "index_size": 17902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109375, "raw_average_key_size": 24, "raw_value_size": 7378712, "raw_average_value_size": 1669, "num_data_blocks": 747, "num_entries": 4419, "num_filter_entries": 4419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772272911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.830426) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7489099 bytes
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.832123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.8 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 6.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4933, records dropped: 514 output_compression: NoCompression
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.832141) EVENT_LOG_v1 {"time_micros": 1772272911832132, "job": 24, "event": "compaction_finished", "compaction_time_micros": 44769, "compaction_time_cpu_micros": 13246, "output_level": 6, "num_output_files": 1, "total_output_size": 7489099, "num_input_records": 4933, "num_output_records": 4419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911832866, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772272911833801, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.784456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:01:51.833895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e507ba-bbe8-4ac9-8076-ae991da6a6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.839 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2aaca2a-70d4-4d8f-98fb-6759c2bb99b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb 28 10:01:51 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Consumed 12.717s CPU time.
Feb 28 10:01:51 compute-0 systemd-machined[209480]: Machine qemu-18-instance-00000013 terminated.
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.854 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1968e90f-fd6d-421b-8261-d40fdd433e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.866 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d40d14b-29dd-4b22-9411-8eab6b8b6c0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261208, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.875 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aad1ca22-4a18-4cc8-8afe-2f7c70db4d60]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261209, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261209, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.876 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:51 compute-0 nova_compute[243452]: 2026-02-28 10:01:51.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.881 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:51.882 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:52 compute-0 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00080|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00081|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.049 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.052 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.067 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[094735d7-02ba-429d-b49d-c9b66fa9b7c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00082|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00083|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00084|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=1)
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00085|if_status|INFO|Not setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down as sb is readonly
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00086|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00087|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 10:01:52 compute-0 ovn_controller[146846]: 2026-02-28T10:01:52Z|00088|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.078 243456 DEBUG nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.079 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.079 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 DEBUG oslo_concurrency.lockutils [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 DEBUG nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.080 243456 WARNING nova.compute.manager [req-f4585cd4-5154-44d3-b413-770b1e5ce37d req-740e4e5c-e256-434a-89c4-c3f1986dea7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuilding.
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.082 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.083 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '5', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.086 243456 INFO nova.virt.libvirt.driver [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance destroyed successfully.
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.087 243456 DEBUG nova.objects.instance [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lazy-loading 'resources' on Instance uuid ce94b006-3fde-4285-89f7-1e435e514d3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.093 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.093 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:48Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.094 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.094 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.095 243456 DEBUG os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.096 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.102 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a757c9e5-4134-41c0-9eca-a7b3fa4fe429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.105 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34c32d26-cc7c-41d8-a2c4-61209806ae11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.122 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.125 243456 INFO os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.130 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c50ecd6a-9f76-4e7e-aee1-873968c00793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.140 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.part /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.146 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.149 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29ff280d-e287-40db-9ed1-aecf1f22556f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261252, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38fef1c6-a58a-46bf-aa4d-36f7b337cca7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261265, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261265, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.167 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.169 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.170 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.171 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.175 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.176 243456 DEBUG oslo_concurrency.lockutils [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] No waiting events found dispatching network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 WARNING nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received unexpected event network-vif-plugged-77e0efad-ce89-42fd-9284-b155767f5c74 for instance with vm_state deleted and task_state None.
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.177 243456 DEBUG nova.compute.manager [req-d7b03b14-9cee-4f5f-8ff1-156552badd12 req-36ca4339-314f-4d96-9746-17ed7742caf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Received event network-vif-deleted-77e0efad-ce89-42fd-9284-b155767f5c74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.184 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85e392b0-e2c2-4985-8b9d-57410b2b0daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.206 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae58384-f955-49b7-921c-035440da4d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.209 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a31af515-6037-4df0-956d-4357b08990a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514507087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.228 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6709ae84-940f-41c7-881c-fbfb6b897d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.230 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.231 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf077049-70c1-4eaa-b244-2d7647b05d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 16, 'rx_bytes': 826, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 16, 'rx_bytes': 826, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261278, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.253 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.256 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f443bd72-d225-48c6-a5d1-89134c8aeae2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261294, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261294, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.259 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:52.263 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.279 243456 DEBUG oslo_concurrency.processutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.286 243456 DEBUG nova.compute.provider_tree [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.300 243456 DEBUG nova.scheduler.client.report [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.326 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.354 243456 INFO nova.scheduler.client.report [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Deleted allocations for instance 9e27fde4-3df3-46cf-97ac-88a91baefbc0
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.413 243456 DEBUG oslo_concurrency.lockutils [None req-84a35fe3-b75e-4fb0-8b95-987269e81bab 3b1dc716928742ca935bb155783e2d9a 446562351a804787bd6c523245bada39 - - default default] Lock "9e27fde4-3df3-46cf-97ac-88a91baefbc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.542 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.600 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.704 243456 INFO nova.virt.libvirt.driver [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deleting instance files /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e_del
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.705 243456 INFO nova.virt.libvirt.driver [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deletion of /var/lib/nova/instances/ce94b006-3fde-4285-89f7-1e435e514d3e_del complete
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.712 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.713 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.713 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.714 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.714 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.715 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.719 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.719 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.724 243456 WARNING nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.739 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.739 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:52 compute-0 ceph-mon[76304]: pgmap v972: 305 pgs: 305 active+clean; 608 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 387 op/s
Feb 28 10:01:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1514507087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.784 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.785 243456 DEBUG nova.virt.libvirt.host [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.786 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.786 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.787 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.788 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.789 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.789 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.790 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.790 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.virt.hardware [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.791 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.897 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.929 243456 INFO nova.compute.manager [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 1.31 seconds to destroy the instance on the hypervisor.
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.931 243456 DEBUG oslo.service.loopingcall [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.932 243456 DEBUG nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:01:52 compute-0 nova_compute[243452]: 2026-02-28 10:01:52.932 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.026 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.027 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.255 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.296 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.328 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.333 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.354 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.369 243456 DEBUG nova.network.neutron [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.382 243456 INFO nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Took 0.45 seconds to deallocate network for instance.
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.407 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.408 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.409 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.409 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.435 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.439 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.460 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.460 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.604 243456 DEBUG oslo_concurrency.processutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.671 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450386554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.741 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.757 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.760 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 505 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.0 MiB/s wr, 372 op/s
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.776 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:01:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/450386554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.853 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.853 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.854 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.857 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.861 243456 WARNING nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.864 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.865 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.867 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.867 243456 DEBUG nova.virt.libvirt.host [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.868 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.869 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.virt.hardware [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.870 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'vcpu_model' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:53 compute-0 nova_compute[243452]: 2026-02-28 10:01:53.891 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:01:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722264479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.148 243456 DEBUG oslo_concurrency.processutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.154 243456 DEBUG nova.compute.provider_tree [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.173 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.174 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.175 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.176 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.177 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.178 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.179 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.180 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG oslo_concurrency.lockutils [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 DEBUG nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.181 243456 WARNING nova.compute.manager [req-70fa04c3-081f-4a22-9826-a9fe4c91b6c9 req-3ecd3b1a-9747-4998-b8da-3a2413e59f75 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state rebuild_spawning.
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.184 243456 DEBUG nova.scheduler.client.report [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.213 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220485943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.271 243456 INFO nova.scheduler.client.report [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Deleted allocations for instance ce94b006-3fde-4285-89f7-1e435e514d3e
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.285 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.288 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <name>instance-00000014</name>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:52</nova:creationTime>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:54 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:54 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:54 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:54 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:54 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:54 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.355 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.356 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.356 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.384 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047387282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.395 243456 DEBUG oslo_concurrency.lockutils [None req-52289441-7ff8-4ad3-a382-9ac3a2244931 d12cfb1e6b0d4d93916ba6a6c4b75cfc 388f2f7e6d59433a8c88217806df2e33 - - default default] Lock "ce94b006-3fde-4285-89f7-1e435e514d3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.404 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.429 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.434 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.463 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:54 compute-0 nova_compute[243452]: 2026-02-28 10:01:54.561 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'keypairs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:54 compute-0 ceph-mon[76304]: pgmap v973: 305 pgs: 305 active+clean; 505 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.0 MiB/s wr, 372 op/s
Feb 28 10:01:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/722264479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4220485943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1047387282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:01:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364084966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.005 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.006 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:52Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.007 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.008 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.010 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <name>instance-0000000d</name>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:01:53</nova:creationTime>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 10:01:55 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <system>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </system>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <os>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </os>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <features>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </features>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </source>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:01:55 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <target dev="tap1c6e98f3-e9"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <video>
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </video>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:01:55 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:01:55 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:01:55 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:01:55 compute-0 nova_compute[243452]: </domain>
Feb 28 10:01:55 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.011 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG nova.virt.libvirt.vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:52Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.012 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.013 243456 DEBUG nova.network.os_vif_util [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.013 243456 DEBUG os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.014 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.015 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.019 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.020 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:55 compute-0 NetworkManager[49805]: <info>  [1772272915.0235] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.028 243456 INFO os_vif [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.070 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.071 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.071 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.072 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.105 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.132 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'ec2_ids' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.163 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'keypairs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.710 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.717 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphljzyvrx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 476 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 415 op/s
Feb 28 10:01:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/364084966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.846 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphljzyvrx" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.870 243456 DEBUG nova.storage.rbd_utils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:55 compute-0 nova_compute[243452]: 2026-02-28 10:01:55.874 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.015 243456 DEBUG oslo_concurrency.processutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.017 243456 INFO nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.
Feb 28 10:01:56 compute-0 systemd-machined[209480]: New machine qemu-22-instance-00000014.
Feb 28 10:01:56 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.148 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.155 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpr7x32s8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.285 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpr7x32s8n" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.308 243456 DEBUG nova.storage.rbd_utils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.311 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.433 243456 DEBUG oslo_concurrency.processutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.434 243456 INFO nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.
Feb 28 10:01:56 compute-0 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 10:01:56 compute-0 NetworkManager[49805]: <info>  [1772272916.4750] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:56 compute-0 ovn_controller[146846]: 2026-02-28T10:01:56Z|00089|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 10:01:56 compute-0 ovn_controller[146846]: 2026-02-28T10:01:56Z|00090|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.484 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '7', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.485 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:01:56 compute-0 ovn_controller[146846]: 2026-02-28T10:01:56Z|00091|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 10:01:56 compute-0 ovn_controller[146846]: 2026-02-28T10:01:56Z|00092|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:56 compute-0 systemd-machined[209480]: New machine qemu-23-instance-0000000d.
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.509 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38735c8b-33ad-4f3b-9d61-dd819993a3d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000000d.
Feb 28 10:01:56 compute-0 systemd-udevd[261852]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:01:56 compute-0 NetworkManager[49805]: <info>  [1772272916.5306] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:01:56 compute-0 NetworkManager[49805]: <info>  [1772272916.5320] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.547 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d54c0105-3b92-4815-bd04-a42d2f942417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.553 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72ceab56-ed5b-4a1b-8514-db695b519014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.588 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8791e657-0de9-4d6e-8add-65528a314a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfba8f4-4a93-4fbc-b2d7-c44f074551a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261863, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf2e5f5-2800-41d0-8493-9a4d4bd47eb1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261865, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261865, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.632 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:01:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:56.633 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.672 243456 DEBUG nova.compute.manager [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG oslo_concurrency.lockutils [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:56 compute-0 nova_compute[243452]: 2026-02-28 10:01:56.673 243456 DEBUG nova.compute.manager [req-a95a8a98-f41b-4c27-8875-f29626bb23a3 req-db973713-4dc5-49dc-8e84-1ab6dbb0575a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:01:56 compute-0 ceph-mon[76304]: pgmap v974: 305 pgs: 305 active+clean; 476 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 415 op/s
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.051 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for c92e965f-2d18-4b78-8b78-7d391039f382 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.052 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0510361, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.052 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.054 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.058 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.064 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.065 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.084 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.094 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.095 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.096 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.096 243456 DEBUG nova.virt.libvirt.driver [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.102 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.103 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0541003, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.103 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)
Feb 28 10:01:57 compute-0 ovn_controller[146846]: 2026-02-28T10:01:57Z|00093|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.154 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.178 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.182 243456 DEBUG nova.compute.manager [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.186 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272917.0580935, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)
Feb 28 10:01:57 compute-0 ovn_controller[146846]: 2026-02-28T10:01:57Z|00094|binding|INFO|Releasing lport 863d7ac1-9b1e-4788-aadf-440a36d66b39 from this chassis (sb_readonly=0)
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.212 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.217 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.254 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.255 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.255 243456 DEBUG nova.objects.instance [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.307 243456 DEBUG oslo_concurrency.lockutils [None req-56b3a4ce-edea-4246-bc4d-0e5683cbc6bf 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.668 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272902.666035, cd5eedf6-c835-46d6-9378-148eb04d4cb2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.668 243456 INFO nova.compute.manager [-] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] VM Stopped (Lifecycle Event)
Feb 28 10:01:57 compute-0 nova_compute[243452]: 2026-02-28 10:01:57.689 243456 DEBUG nova.compute.manager [None req-aeba82f6-aad4-4fa2-81e6-532fea495248 - - - - - -] [instance: cd5eedf6-c835-46d6-9378-148eb04d4cb2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.6 MiB/s wr, 347 op/s
Feb 28 10:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.841 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:01:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.091 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.091 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272918.0906162, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.095 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.095 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.099 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.099 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.133 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.133 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.134 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.134 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.135 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.135 243456 DEBUG nova.virt.libvirt.driver [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.140 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.172 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.173 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272918.0916939, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.173 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.202 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.206 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.223 243456 DEBUG nova.compute.manager [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.250 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.279 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.280 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.280 243456 DEBUG nova.objects.instance [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-e4fbbf01-54c6-4517-aa83-f0facb7dd093 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:58 compute-0 nova_compute[243452]: 2026-02-28 10:01:58.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:01:59 compute-0 ceph-mon[76304]: pgmap v975: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.6 MiB/s wr, 347 op/s
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.326 243456 DEBUG nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.327 243456 DEBUG oslo_concurrency.lockutils [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.328 243456 DEBUG nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.328 243456 WARNING nova.compute.manager [req-7d280523-5aa2-423b-b0c3-c331c5d6ec13 req-1b26f9de-8e13-43bb-86fd-11080e09377c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuilding.
Feb 28 10:01:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 285 op/s
Feb 28 10:01:59 compute-0 nova_compute[243452]: 2026-02-28 10:01:59.954 243456 INFO nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Rebuilding instance
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.636 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'trusted_certs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.661 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.712 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_requests' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.724 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.736 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.746 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'migration_context' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.763 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.766 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:02:00 compute-0 nova_compute[243452]: 2026-02-28 10:02:00.950 243456 INFO nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Rebuilding instance
Feb 28 10:02:01 compute-0 ceph-mon[76304]: pgmap v976: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 285 op/s
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.455 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.474 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.535 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.551 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.566 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.578 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.593 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:02:01 compute-0 nova_compute[243452]: 2026-02-28 10:02:01.597 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:02:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 390 op/s
Feb 28 10:02:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:03 compute-0 ceph-mon[76304]: pgmap v977: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 390 op/s
Feb 28 10:02:03 compute-0 nova_compute[243452]: 2026-02-28 10:02:03.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 300 op/s
Feb 28 10:02:04 compute-0 nova_compute[243452]: 2026-02-28 10:02:04.346 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:04 compute-0 nova_compute[243452]: 2026-02-28 10:02:04.539 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272909.5375903, 9e27fde4-3df3-46cf-97ac-88a91baefbc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:04 compute-0 nova_compute[243452]: 2026-02-28 10:02:04.539 243456 INFO nova.compute.manager [-] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] VM Stopped (Lifecycle Event)
Feb 28 10:02:04 compute-0 nova_compute[243452]: 2026-02-28 10:02:04.589 243456 DEBUG nova.compute.manager [None req-4f1e2d07-e8e6-42c6-ba63-396e02d329c2 - - - - - -] [instance: 9e27fde4-3df3-46cf-97ac-88a91baefbc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:05 compute-0 nova_compute[243452]: 2026-02-28 10:02:05.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:05 compute-0 ceph-mon[76304]: pgmap v978: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 300 op/s
Feb 28 10:02:05 compute-0 nova_compute[243452]: 2026-02-28 10:02:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Feb 28 10:02:06 compute-0 ceph-mon[76304]: pgmap v979: 305 pgs: 305 active+clean; 484 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.359 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:02:06 compute-0 nova_compute[243452]: 2026-02-28 10:02:06.360 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:02:07 compute-0 nova_compute[243452]: 2026-02-28 10:02:07.068 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272912.067587, ce94b006-3fde-4285-89f7-1e435e514d3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:07 compute-0 nova_compute[243452]: 2026-02-28 10:02:07.069 243456 INFO nova.compute.manager [-] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] VM Stopped (Lifecycle Event)
Feb 28 10:02:07 compute-0 nova_compute[243452]: 2026-02-28 10:02:07.089 243456 DEBUG nova.compute.manager [None req-677d476b-b1eb-4a15-a0ce-5ea7c3fbfa9f - - - - - -] [instance: ce94b006-3fde-4285-89f7-1e435e514d3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 964 KiB/s wr, 175 op/s
Feb 28 10:02:08 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 10:02:08 compute-0 nova_compute[243452]: 2026-02-28 10:02:08.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:08 compute-0 ceph-mon[76304]: pgmap v980: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 964 KiB/s wr, 175 op/s
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.013 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.166 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-c92e965f-2d18-4b78-8b78-7d391039f382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.167 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.168 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.338 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.339 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:09 compute-0 ovn_controller[146846]: 2026-02-28T10:02:09Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:02:09 compute-0 ovn_controller[146846]: 2026-02-28T10:02:09Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:02:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 437 KiB/s wr, 148 op/s
Feb 28 10:02:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300958680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/300958680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.844 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.964 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.974 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.975 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.984 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:09 compute-0 nova_compute[243452]: 2026-02-28 10:02:09.984 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.175 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.176 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.804935125634074GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.177 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.177 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.261 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c92e965f-2d18-4b78-8b78-7d391039f382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.261 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 08147934-b9df-4154-8d1f-3fd318973eb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance d2d9bd29-453d-4abd-a3de-c1a9603cfc11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.262 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.400 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.808 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:02:10 compute-0 ceph-mon[76304]: pgmap v981: 305 pgs: 305 active+clean; 488 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 437 KiB/s wr, 148 op/s
Feb 28 10:02:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054022208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.962 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.966 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:10 compute-0 nova_compute[243452]: 2026-02-28 10:02:10.982 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:11 compute-0 nova_compute[243452]: 2026-02-28 10:02:11.024 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:02:11 compute-0 nova_compute[243452]: 2026-02-28 10:02:11.024 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:11 compute-0 nova_compute[243452]: 2026-02-28 10:02:11.645 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:02:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 533 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 214 op/s
Feb 28 10:02:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4054022208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:12 compute-0 ceph-mon[76304]: pgmap v982: 305 pgs: 305 active+clean; 533 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 214 op/s
Feb 28 10:02:13 compute-0 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 10:02:13 compute-0 NetworkManager[49805]: <info>  [1772272933.1189] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:02:13 compute-0 ovn_controller[146846]: 2026-02-28T10:02:13Z|00095|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 ovn_controller[146846]: 2026-02-28T10:02:13Z|00096|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 10:02:13 compute-0 ovn_controller[146846]: 2026-02-28T10:02:13Z|00097|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.139 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '8', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.143 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.146 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.161 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cef11176-7f1f-4c22-937c-3f20406c163b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 10:02:13 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000d.scope: Consumed 12.162s CPU time.
Feb 28 10:02:13 compute-0 systemd-machined[209480]: Machine qemu-23-instance-0000000d terminated.
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.188 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60605e97-55a1-4363-86dd-b66d37983924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.193 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b837b479-f1d6-4125-a27a-243faf6fb1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.235 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33094a30-ccec-4a33-88c1-62ab9ff9fbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c00f7d-1dd8-4ae1-9d01-7280215bf21f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262006, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aedc342-39c8-47e5-983f-91c8578fc9c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262007, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262007, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.265 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.271 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.272 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.272 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:13.273 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 550 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 163 op/s
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.812 243456 DEBUG nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.813 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.814 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.814 243456 DEBUG oslo_concurrency.lockutils [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.815 243456 DEBUG nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.815 243456 WARNING nova.compute.manager [req-eee43dab-ca7c-400e-9b51-1c4efb5d8568 req-b4cceb82-a9bc-40b0-9cdc-2956562267cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuilding.
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.824 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance shutdown successfully after 13 seconds.
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.831 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.837 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.838 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:01:59Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.839 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.841 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.841 243456 DEBUG os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:13 compute-0 nova_compute[243452]: 2026-02-28 10:02:13.850 243456 INFO os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:02:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 10:02:14 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 13.544s CPU time.
Feb 28 10:02:14 compute-0 systemd-machined[209480]: Machine qemu-22-instance-00000014 terminated.
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.531 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.532 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.665 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance shutdown successfully after 13 seconds.
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.683 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.691 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.719 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.720 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating image(s)
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.752 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.784 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.817 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.822 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.907 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.908 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.909 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.909 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.933 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:14 compute-0 nova_compute[243452]: 2026-02-28 10:02:14.948 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 28 10:02:14 compute-0 ceph-mon[76304]: pgmap v983: 305 pgs: 305 active+clean; 550 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 163 op/s
Feb 28 10:02:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 28 10:02:14 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.345 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c92e965f-2d18-4b78-8b78-7d391039f382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.425 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.426 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.433 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] resizing rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.519 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.520 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Ensure instance console log exists: /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.521 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.523 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start _get_guest_xml network_info=[{"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.530 243456 WARNING nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.536 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.536 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.539 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.540 243456 DEBUG nova.virt.libvirt.host [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.541 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.542 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.543 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.virt.hardware [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.544 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'vcpu_model' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.562 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.585 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.588 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating image(s)
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.620 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.648 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.671 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.675 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.749 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.751 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.752 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.752 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 475 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 764 KiB/s rd, 5.9 MiB/s wr, 202 op/s
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.787 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:15 compute-0 nova_compute[243452]: 2026-02-28 10:02:15.792 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:15 compute-0 ceph-mon[76304]: osdmap e131: 3 total, 3 up, 3 in
Feb 28 10:02:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:02:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043529309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.090 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.116 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.120 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.135 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.209 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] resizing rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.290 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.291 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Ensure instance console log exists: /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.291 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.292 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.292 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.293 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.298 243456 WARNING nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.302 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.302 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.host [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.306 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.307 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.308 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.virt.hardware [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.309 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.315 243456 DEBUG nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG oslo_concurrency.lockutils [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.316 243456 DEBUG nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.317 243456 WARNING nova.compute.manager [req-3e8f4f0d-fce8-4663-afa0-c2b8f51eac5c req-47b5accc-5cfc-4de4-956f-247cc4d288aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.326 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:02:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241410563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.655 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.659 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:14Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.661 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.663 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.668 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <uuid>c92e965f-2d18-4b78-8b78-7d391039f382</uuid>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <name>instance-0000000d</name>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdminTestJSON-server-1293627042</nova:name>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:02:15</nova:creationTime>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:user uuid="4fb1e2bbed9c4e2395c13dba974f8603">tempest-ServersAdminTestJSON-1494420313-project-member</nova:user>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:project uuid="339b7f5b41a54615b051fb9d036072dd">tempest-ServersAdminTestJSON-1494420313</nova:project>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <nova:port uuid="1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7">
Feb 28 10:02:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <system>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="serial">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="uuid">c92e965f-2d18-4b78-8b78-7d391039f382</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </system>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <os>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </os>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <features>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </features>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk">
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c92e965f-2d18-4b78-8b78-7d391039f382_disk.config">
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:02:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3d:6e:bc"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <target dev="tap1c6e98f3-e9"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/console.log" append="off"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:02:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:02:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:02:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:02:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:02:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.671 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Preparing to wait for external event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.672 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.673 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.673 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.675 243456 DEBUG nova.virt.libvirt.vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTes
tJSON-1494420313-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:14Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.676 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.678 243456 DEBUG nova.network.os_vif_util [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.679 243456 DEBUG os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.682 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.683 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.687 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c6e98f3-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.688 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c6e98f3-e9, col_values=(('external_ids', {'iface-id': '1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:6e:bc', 'vm-uuid': 'c92e965f-2d18-4b78-8b78-7d391039f382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:16 compute-0 NetworkManager[49805]: <info>  [1772272936.6899] manager: (tap1c6e98f3-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.694 243456 INFO os_vif [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.756 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.757 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.757 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] No VIF found with MAC fa:16:3e:3d:6e:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.758 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Using config drive
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.791 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.816 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'ec2_ids' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.850 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'keypairs' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:02:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995584168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.946 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.966 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:16 compute-0 nova_compute[243452]: 2026-02-28 10:02:16.970 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:16 compute-0 ceph-mon[76304]: pgmap v985: 305 pgs: 305 active+clean; 475 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 764 KiB/s rd, 5.9 MiB/s wr, 202 op/s
Feb 28 10:02:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1043529309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1241410563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2995584168' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:02:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643995517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.522 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.526 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <uuid>9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</uuid>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <name>instance-00000014</name>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAdmin275Test-server-1098361722</nova:name>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:02:16</nova:creationTime>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:user uuid="c1d3b80b39ba4f3392d63b05c85009e2">tempest-ServersAdmin275Test-175914647-project-member</nova:user>
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <nova:project uuid="a1e90097927e484eb57dee6ac05b7b47">tempest-ServersAdmin275Test-175914647</nova:project>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <system>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="serial">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="uuid">9bf11a21-b6eb-432e-aa3b-4dd3413f91a5</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </system>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <os>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </os>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <features>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </features>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk">
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config">
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:02:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/console.log" append="off"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <video>
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </video>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:02:17 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:02:17 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:02:17 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:02:17 compute-0 nova_compute[243452]: </domain>
Feb 28 10:02:17 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:02:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.621 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.621 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.622 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Using config drive
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.649 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.731 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Creating config drive at /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.737 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_tdgc112 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.769 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 7.1 MiB/s wr, 217 op/s
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.842 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lazy-loading 'keypairs' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.875 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_tdgc112" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.918 243456 DEBUG nova.storage.rbd_utils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] rbd image c92e965f-2d18-4b78-8b78-7d391039f382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:17 compute-0 nova_compute[243452]: 2026-02-28 10:02:17.923 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 28 10:02:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 28 10:02:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 28 10:02:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3643995517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.067 243456 DEBUG oslo_concurrency.processutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config c92e965f-2d18-4b78-8b78-7d391039f382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.068 243456 INFO nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting local config drive /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382/disk.config because it was imported into RBD.
Feb 28 10:02:18 compute-0 kernel: tap1c6e98f3-e9: entered promiscuous mode
Feb 28 10:02:18 compute-0 ovn_controller[146846]: 2026-02-28T10:02:18Z|00098|binding|INFO|Claiming lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for this chassis.
Feb 28 10:02:18 compute-0 ovn_controller[146846]: 2026-02-28T10:02:18Z|00099|binding|INFO|1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7: Claiming fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:02:18 compute-0 NetworkManager[49805]: <info>  [1772272938.1211] manager: (tap1c6e98f3-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.116 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Creating config drive at /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.124 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wgz08g8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.128 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '9', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.130 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 bound to our chassis
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.131 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:02:18 compute-0 ovn_controller[146846]: 2026-02-28T10:02:18Z|00100|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 ovn-installed in OVS
Feb 28 10:02:18 compute-0 ovn_controller[146846]: 2026-02-28T10:02:18Z|00101|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 up in Southbound
Feb 28 10:02:18 compute-0 systemd-udevd[262607]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60fc78-84f2-4eaf-bd33-474a09fb60a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 NetworkManager[49805]: <info>  [1772272938.1665] device (tap1c6e98f3-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:02:18 compute-0 NetworkManager[49805]: <info>  [1772272938.1672] device (tap1c6e98f3-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:02:18 compute-0 systemd-machined[209480]: New machine qemu-24-instance-0000000d.
Feb 28 10:02:18 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-0000000d.
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[28beec0d-109e-46c7-8cf5-f6f6dadea6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.187 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[68c5f544-3b01-4030-911e-91bf95effe73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.213 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae140b0-a4f2-4d8f-9e4d-d47feb6fbb5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9baabd-8252-4001-91bd-4041b325c709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262624, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81ddbd36-7995-485b-becb-b948d7acc311]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262625, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262625, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.254 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.255 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.255 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:18.256 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.264 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wgz08g8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.292 243456 DEBUG nova.storage.rbd_utils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] rbd image 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.295 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.390 243456 DEBUG nova.compute.manager [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.390 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG oslo_concurrency.lockutils [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.391 243456 DEBUG nova.compute.manager [req-3b69d00f-14c8-4f07-ac02-af7c3844210c req-6cba49de-a061-4c29-95fa-824da15b21e5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Processing event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.417 243456 DEBUG oslo_concurrency.processutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.418 243456 INFO nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting local config drive /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5/disk.config because it was imported into RBD.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:18 compute-0 systemd-machined[209480]: New machine qemu-25-instance-00000014.
Feb 28 10:02:18 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000014.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.682 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for c92e965f-2d18-4b78-8b78-7d391039f382 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.683 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.6816688, c92e965f-2d18-4b78-8b78-7d391039f382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.684 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Started (Lifecycle Event)
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.688 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.694 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.699 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance spawned successfully.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.700 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.715 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.719 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.740 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.741 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.742 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.742 243456 DEBUG nova.virt.libvirt.driver [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.753 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.753 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.6817753, c92e965f-2d18-4b78-8b78-7d391039f382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.754 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Paused (Lifecycle Event)
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.783 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.788 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272938.692803, c92e965f-2d18-4b78-8b78-7d391039f382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Resumed (Lifecycle Event)
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.817 243456 DEBUG nova.compute.manager [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.819 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.859 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.897 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.898 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.898 243456 DEBUG nova.objects.instance [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:02:18 compute-0 nova_compute[243452]: 2026-02-28 10:02:18.961 243456 DEBUG oslo_concurrency.lockutils [None req-34d59c5d-adb9-41d3-b195-846b180ab427 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:19 compute-0 ceph-mon[76304]: pgmap v986: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 7.1 MiB/s wr, 217 op/s
Feb 28 10:02:19 compute-0 ceph-mon[76304]: osdmap e132: 3 total, 3 up, 3 in
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.072 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.073 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272939.0720258, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.073 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Resumed (Lifecycle Event)
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.075 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.076 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.079 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance spawned successfully.
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.080 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.103 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.109 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.109 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.110 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.111 243456 DEBUG nova.virt.libvirt.driver [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272939.0748594, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Started (Lifecycle Event)
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.158 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.161 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.167 243456 DEBUG nova.compute.manager [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.193 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.232 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.232 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.233 243456 DEBUG nova.objects.instance [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:02:19 compute-0 nova_compute[243452]: 2026-02-28 10:02:19.298 243456 DEBUG oslo_concurrency.lockutils [None req-efe3a1e8-b7e5-47e1-adac-3c450b808f18 29c668ed41724c5cba4cec3db22c4e4c 8a25ecc7a73c497a82af21d72e02c011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 4.2 MiB/s wr, 173 op/s
Feb 28 10:02:20 compute-0 podman[262763]: 2026-02-28 10:02:20.129040991 +0000 UTC m=+0.059289616 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:02:20 compute-0 podman[262762]: 2026-02-28 10:02:20.151583294 +0000 UTC m=+0.086083168 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.518 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 DEBUG oslo_concurrency.lockutils [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 DEBUG nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:20 compute-0 nova_compute[243452]: 2026-02-28 10:02:20.519 243456 WARNING nova.compute.manager [req-98c8636a-74b9-4e77-bd67-756ca19af65d req-f6c9767a-1d06-49b6-a910-2fea1d2b4e19 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state error and task_state None.
Feb 28 10:02:21 compute-0 ceph-mon[76304]: pgmap v988: 305 pgs: 305 active+clean; 469 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 4.2 MiB/s wr, 173 op/s
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.229 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.230 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.231 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.232 243456 INFO nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Terminating instance
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquired lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.233 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.442 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.673 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.674 243456 INFO nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Terminating instance
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.675 243456 DEBUG nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 368 op/s
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.903 243456 DEBUG nova.network.neutron [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:21 compute-0 kernel: tap42dc1876-90 (unregistering): left promiscuous mode
Feb 28 10:02:21 compute-0 NetworkManager[49805]: <info>  [1772272941.9161] device (tap42dc1876-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.920 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Releasing lock "refresh_cache-9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.920 243456 DEBUG nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:02:21 compute-0 ovn_controller[146846]: 2026-02-28T10:02:21Z|00102|binding|INFO|Releasing lport 42dc1876-90c4-4b52-b301-1c90b71ff297 from this chassis (sb_readonly=0)
Feb 28 10:02:21 compute-0 ovn_controller[146846]: 2026-02-28T10:02:21Z|00103|binding|INFO|Setting lport 42dc1876-90c4-4b52-b301-1c90b71ff297 down in Southbound
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:21 compute-0 ovn_controller[146846]: 2026-02-28T10:02:21Z|00104|binding|INFO|Removing iface tap42dc1876-90 ovn-installed in OVS
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.935 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:8a:2e 10.100.0.4'], port_security=['fa:16:3e:0e:8a:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd2d9bd29-453d-4abd-a3de-c1a9603cfc11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=42dc1876-90c4-4b52-b301-1c90b71ff297) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.936 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 42dc1876-90c4-4b52-b301-1c90b71ff297 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.938 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:02:21 compute-0 nova_compute[243452]: 2026-02-28 10:02:21.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[114d428e-c655-4390-b5fc-21df3af0a1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.974 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65b5b36b-971e-4b87-b36d-c6bdc79adbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:21.978 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[99b16035-57bf-42cf-a826-553e932e6db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:21 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb 28 10:02:21 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 13.354s CPU time.
Feb 28 10:02:21 compute-0 systemd-machined[209480]: Machine qemu-20-instance-00000012 terminated.
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.000 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3186bc-af9f-44cc-b3bd-48dc1e376c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50e9025f-1de4-4af3-814f-cd281d2ed49d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262819, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[469213cb-7b5f-4bd5-8cdd-beb8af0221d4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262820, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262820, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.030 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.035 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.035 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:22.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.109 243456 INFO nova.virt.libvirt.driver [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Instance destroyed successfully.
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.110 243456 DEBUG nova.objects.instance [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid d2d9bd29-453d-4abd-a3de-c1a9603cfc11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.131 243456 DEBUG nova.virt.libvirt.vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1694487247',display_name='tempest-ServersAdminTestJSON-server-1694487247',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1694487247',id=18,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-vmrf35mh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:35Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=d2d9bd29-453d-4abd-a3de-c1a9603cfc11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.131 243456 DEBUG nova.network.os_vif_util [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "42dc1876-90c4-4b52-b301-1c90b71ff297", "address": "fa:16:3e:0e:8a:2e", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42dc1876-90", "ovs_interfaceid": "42dc1876-90c4-4b52-b301-1c90b71ff297", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.133 243456 DEBUG nova.network.os_vif_util [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.133 243456 DEBUG os_vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.137 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42dc1876-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.144 243456 INFO os_vif [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:8a:2e,bridge_name='br-int',has_traffic_filtering=True,id=42dc1876-90c4-4b52-b301-1c90b71ff297,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42dc1876-90')
Feb 28 10:02:22 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 28 10:02:22 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000014.scope: Consumed 3.430s CPU time.
Feb 28 10:02:22 compute-0 systemd-machined[209480]: Machine qemu-25-instance-00000014 terminated.
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.339 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance destroyed successfully.
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.340 243456 DEBUG nova.objects.instance [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lazy-loading 'resources' on Instance uuid 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:22 compute-0 ceph-mon[76304]: pgmap v989: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 368 op/s
Feb 28 10:02:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.804 243456 INFO nova.virt.libvirt.driver [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deleting instance files /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_del
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.805 243456 INFO nova.virt.libvirt.driver [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deletion of /var/lib/nova/instances/d2d9bd29-453d-4abd-a3de-c1a9603cfc11_del complete
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.814 243456 INFO nova.virt.libvirt.driver [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deleting instance files /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.814 243456 INFO nova.virt.libvirt.driver [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deletion of /var/lib/nova/instances/9bf11a21-b6eb-432e-aa3b-4dd3413f91a5_del complete
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.836 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.836 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG oslo_concurrency.lockutils [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.837 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.838 243456 DEBUG nova.compute.manager [req-adae786b-dc5e-47c9-b919-c79631f80959 req-495c3227-dec7-4ccc-beb8-0ca581836d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-unplugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.909 243456 INFO nova.compute.manager [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 1.23 seconds to destroy the instance on the hypervisor.
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.909 243456 DEBUG oslo.service.loopingcall [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.910 243456 DEBUG nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.910 243456 DEBUG nova.network.neutron [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.915 243456 INFO nova.compute.manager [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 0.99 seconds to destroy the instance on the hypervisor.
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.915 243456 DEBUG oslo.service.loopingcall [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.916 243456 DEBUG nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:02:22 compute-0 nova_compute[243452]: 2026-02-28 10:02:22.916 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.084 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.106 243456 DEBUG nova.network.neutron [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.126 243456 INFO nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Took 0.21 seconds to deallocate network for instance.
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.185 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.186 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.340 243456 DEBUG oslo_concurrency.processutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.9 MiB/s wr, 382 op/s
Feb 28 10:02:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162428933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.845 243456 DEBUG oslo_concurrency.processutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.853 243456 DEBUG nova.compute.provider_tree [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2162428933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.892 243456 DEBUG nova.scheduler.client.report [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.925 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:23 compute-0 nova_compute[243452]: 2026-02-28 10:02:23.971 243456 INFO nova.scheduler.client.report [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Deleted allocations for instance 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5
Feb 28 10:02:23 compute-0 sshd-session[262873]: Invalid user ubuntu from 45.148.10.240 port 45316
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.079 243456 DEBUG oslo_concurrency.lockutils [None req-9efa1a70-797d-4f7b-b5e2-c81e389f3686 c1d3b80b39ba4f3392d63b05c85009e2 a1e90097927e484eb57dee6ac05b7b47 - - default default] Lock "9bf11a21-b6eb-432e-aa3b-4dd3413f91a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.120 243456 DEBUG nova.network.neutron [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.150 243456 INFO nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Took 1.24 seconds to deallocate network for instance.
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.196 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.197 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:24 compute-0 sshd-session[262873]: Connection closed by invalid user ubuntu 45.148.10.240 port 45316 [preauth]
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.246 243456 DEBUG nova.compute.manager [req-d1e94399-a069-427b-bea9-6acaca919792 req-955b58d3-1dce-4396-a7b1-78fb114716be 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-deleted-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.291 243456 DEBUG oslo_concurrency.processutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170047623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.851 243456 DEBUG oslo_concurrency.processutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.857 243456 DEBUG nova.compute.provider_tree [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.881 243456 DEBUG nova.scheduler.client.report [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.907 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:24 compute-0 ceph-mon[76304]: pgmap v990: 305 pgs: 305 active+clean; 484 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.9 MiB/s wr, 382 op/s
Feb 28 10:02:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1170047623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.946 243456 INFO nova.scheduler.client.report [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance d2d9bd29-453d-4abd-a3de-c1a9603cfc11
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.976 243456 DEBUG nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.977 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.978 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.978 243456 DEBUG oslo_concurrency.lockutils [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.979 243456 DEBUG nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] No waiting events found dispatching network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:24 compute-0 nova_compute[243452]: 2026-02-28 10:02:24.979 243456 WARNING nova.compute.manager [req-1ba0412d-1971-436a-9a03-ff12abca4a05 req-3c9219a0-21de-442e-a1a6-03033c552a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Received unexpected event network-vif-plugged-42dc1876-90c4-4b52-b301-1c90b71ff297 for instance with vm_state deleted and task_state None.
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.029 243456 DEBUG oslo_concurrency.lockutils [None req-d0e42727-3056-4a31-86d8-16eb3988e836 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "d2d9bd29-453d-4abd-a3de-c1a9603cfc11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.627 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.628 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.629 243456 INFO nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Terminating instance
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.630 243456 DEBUG nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:02:25 compute-0 kernel: tap1b6bf464-31 (unregistering): left promiscuous mode
Feb 28 10:02:25 compute-0 NetworkManager[49805]: <info>  [1772272945.6754] device (tap1b6bf464-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 ovn_controller[146846]: 2026-02-28T10:02:25Z|00105|binding|INFO|Releasing lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 from this chassis (sb_readonly=0)
Feb 28 10:02:25 compute-0 ovn_controller[146846]: 2026-02-28T10:02:25Z|00106|binding|INFO|Setting lport 1b6bf464-31de-4504-9af4-59a95d6d9c05 down in Southbound
Feb 28 10:02:25 compute-0 ovn_controller[146846]: 2026-02-28T10:02:25Z|00107|binding|INFO|Removing iface tap1b6bf464-31 ovn-installed in OVS
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.690 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:02:52 10.100.0.10'], port_security=['fa:16:3e:12:02:52 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '08147934-b9df-4154-8d1f-3fd318973eb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1b6bf464-31de-4504-9af4-59a95d6d9c05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.692 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1b6bf464-31de-4504-9af4-59a95d6d9c05 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.694 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.708 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe438c-5b10-443e-ae71-e595eea7e8e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.738 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f077583e-9f74-476e-bcea-caa0155da9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.742 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7510b7e8-1978-48ae-a119-2047023e8e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 28 10:02:25 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.426s CPU time.
Feb 28 10:02:25 compute-0 systemd-machined[209480]: Machine qemu-17-instance-00000011 terminated.
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.766 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2af6416-73fa-4429-bd56-7b7abaa5d7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.784 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[144c5cfa-bdb1-4bc5-af61-08966b1259b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 26, 'rx_bytes': 868, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 26, 'rx_bytes': 868, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262930, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 401 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 368 op/s
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.798 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae09cea8-1d43-42c7-8f18-a49d22c4954b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262931, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262931, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.800 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:25.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.861 243456 INFO nova.virt.libvirt.driver [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Instance destroyed successfully.
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.861 243456 DEBUG nova.objects.instance [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid 08147934-b9df-4154-8d1f-3fd318973eb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.881 243456 DEBUG nova.virt.libvirt.vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:01:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2143913214',display_name='tempest-ServersAdminTestJSON-server-2143913214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2143913214',id=17,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-zs0r3m3g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:20Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=08147934-b9df-4154-8d1f-3fd318973eb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.881 243456 DEBUG nova.network.os_vif_util [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "address": "fa:16:3e:12:02:52", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b6bf464-31", "ovs_interfaceid": "1b6bf464-31de-4504-9af4-59a95d6d9c05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.882 243456 DEBUG nova.network.os_vif_util [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.882 243456 DEBUG os_vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.884 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6bf464-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:25 compute-0 nova_compute[243452]: 2026-02-28 10:02:25.890 243456 INFO os_vif [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:02:52,bridge_name='br-int',has_traffic_filtering=True,id=1b6bf464-31de-4504-9af4-59a95d6d9c05,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b6bf464-31')
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.182 243456 INFO nova.virt.libvirt.driver [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deleting instance files /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6_del
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.184 243456 INFO nova.virt.libvirt.driver [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deletion of /var/lib/nova/instances/08147934-b9df-4154-8d1f-3fd318973eb6_del complete
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.271 243456 INFO nova.compute.manager [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.272 243456 DEBUG oslo.service.loopingcall [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.273 243456 DEBUG nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:02:26 compute-0 nova_compute[243452]: 2026-02-28 10:02:26.273 243456 DEBUG nova.network.neutron [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:02:26 compute-0 ceph-mon[76304]: pgmap v991: 305 pgs: 305 active+clean; 401 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 368 op/s
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.128 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.129 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.130 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.131 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-unplugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.131 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.132 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.132 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.133 243456 DEBUG oslo_concurrency.lockutils [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.133 243456 DEBUG nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] No waiting events found dispatching network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.134 243456 WARNING nova.compute.manager [req-29cadeab-27e4-4841-9dca-dc40639ae7e1 req-742b48c7-e3ab-4bb2-9f6e-76eca70083e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received unexpected event network-vif-plugged-1b6bf464-31de-4504-9af4-59a95d6d9c05 for instance with vm_state active and task_state deleting.
Feb 28 10:02:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 28 10:02:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 28 10:02:27 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.783 243456 DEBUG nova.network.neutron [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 366 op/s
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.851 243456 INFO nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Took 1.58 seconds to deallocate network for instance.
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.873 243456 DEBUG nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Received event network-vif-deleted-1b6bf464-31de-4504-9af4-59a95d6d9c05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.874 243456 INFO nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Neutron deleted interface 1b6bf464-31de-4504-9af4-59a95d6d9c05; detaching it from the instance and deleting it from the info cache
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.874 243456 DEBUG nova.network.neutron [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.946 243456 DEBUG nova.compute.manager [req-63212b9a-989c-4b9a-ab56-a39752441c77 req-b214e926-e184-4997-ae25-096d1e37fdfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Detach interface failed, port_id=1b6bf464-31de-4504-9af4-59a95d6d9c05, reason: Instance 08147934-b9df-4154-8d1f-3fd318973eb6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.958 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:27 compute-0 nova_compute[243452]: 2026-02-28 10:02:27.959 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.065 243456 DEBUG oslo_concurrency.processutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4074060817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:28 compute-0 ceph-mon[76304]: osdmap e133: 3 total, 3 up, 3 in
Feb 28 10:02:28 compute-0 ceph-mon[76304]: pgmap v993: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 366 op/s
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.623 243456 DEBUG oslo_concurrency.processutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.631 243456 DEBUG nova.compute.provider_tree [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.657 243456 DEBUG nova.scheduler.client.report [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.729 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.809 243456 INFO nova.scheduler.client.report [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance 08147934-b9df-4154-8d1f-3fd318973eb6
Feb 28 10:02:28 compute-0 nova_compute[243452]: 2026-02-28 10:02:28.947 243456 DEBUG oslo_concurrency.lockutils [None req-bca923e4-1616-4c10-aad1-606490c6104a 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "08147934-b9df-4154-8d1f-3fd318973eb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:02:29
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data']
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:02:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4074060817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 358 op/s
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.462 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.463 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.464 243456 INFO nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Terminating instance
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.465 243456 DEBUG nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:02:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:02:30 compute-0 kernel: tapeaa5f652-63 (unregistering): left promiscuous mode
Feb 28 10:02:30 compute-0 NetworkManager[49805]: <info>  [1772272950.5618] device (tapeaa5f652-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 ovn_controller[146846]: 2026-02-28T10:02:30Z|00108|binding|INFO|Releasing lport eaa5f652-63c2-4a9b-aae0-eec299565322 from this chassis (sb_readonly=0)
Feb 28 10:02:30 compute-0 ovn_controller[146846]: 2026-02-28T10:02:30Z|00109|binding|INFO|Setting lport eaa5f652-63c2-4a9b-aae0-eec299565322 down in Southbound
Feb 28 10:02:30 compute-0 ovn_controller[146846]: 2026-02-28T10:02:30Z|00110|binding|INFO|Removing iface tapeaa5f652-63 ovn-installed in OVS
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.580 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c8:7d 10.100.0.13'], port_security=['fa:16:3e:f2:c8:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89ced16e-cc50-41d5-bfcb-fa5af85c14c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=eaa5f652-63c2-4a9b-aae0-eec299565322) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.583 156681 INFO neutron.agent.ovn.metadata.agent [-] Port eaa5f652-63c2-4a9b-aae0-eec299565322 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.585 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acdf87f2-ab9b-40d7-88a3-d3183a0fcba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.615 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfbf9eb-8ec5-4f46-8458-dfa3149f6704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.618 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb687f0-fa90-4676-8912-64696200060d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 28 10:02:30 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 14.401s CPU time.
Feb 28 10:02:30 compute-0 systemd-machined[209480]: Machine qemu-14-instance-0000000e terminated.
Feb 28 10:02:30 compute-0 ceph-mon[76304]: pgmap v994: 305 pgs: 305 active+clean; 316 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 358 op/s
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.645 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f01c86a-5c6f-4e99-9bd8-da1f4f6bc1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.661 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74e17975-2d5e-46fe-bc71-debc9567a3c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce5045ea-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:35:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 28, 'rx_bytes': 868, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 28, 'rx_bytes': 868, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437027, 'reachable_time': 21253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262996, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf67e39-62ee-43a8-a984-3efbdcbab3f8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437039, 'tstamp': 437039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262997, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce5045ea-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437043, 'tstamp': 437043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262997, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.686 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce5045ea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce5045ea-10, col_values=(('external_ids', {'iface-id': '863d7ac1-9b1e-4788-aadf-440a36d66b39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:30.689 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.695 243456 INFO nova.virt.libvirt.driver [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Instance destroyed successfully.
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.696 243456 DEBUG nova.objects.instance [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.711 243456 DEBUG nova.virt.libvirt.vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1953624900',display_name='tempest-ServersAdminTestJSON-server-1953624900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1953624900',id=14,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:01:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-y1ejzv4y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:01:06Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=89ced16e-cc50-41d5-bfcb-fa5af85c14c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.711 243456 DEBUG nova.network.os_vif_util [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "eaa5f652-63c2-4a9b-aae0-eec299565322", "address": "fa:16:3e:f2:c8:7d", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeaa5f652-63", "ovs_interfaceid": "eaa5f652-63c2-4a9b-aae0-eec299565322", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.712 243456 DEBUG nova.network.os_vif_util [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.712 243456 DEBUG os_vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.714 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa5f652-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.719 243456 INFO os_vif [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c8:7d,bridge_name='br-int',has_traffic_filtering=True,id=eaa5f652-63c2-4a9b-aae0-eec299565322,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeaa5f652-63')
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.834 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.834 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.835 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.835 243456 DEBUG oslo_concurrency.lockutils [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.836 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.836 243456 DEBUG nova.compute.manager [req-d9f26057-1994-42aa-8b55-9d39f2625dad req-28b85781-712d-4303-8f5d-c176654c3e13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-unplugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.970 243456 INFO nova.virt.libvirt.driver [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deleting instance files /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_del
Feb 28 10:02:30 compute-0 nova_compute[243452]: 2026-02-28 10:02:30.972 243456 INFO nova.virt.libvirt.driver [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deletion of /var/lib/nova/instances/89ced16e-cc50-41d5-bfcb-fa5af85c14c8_del complete
Feb 28 10:02:31 compute-0 ovn_controller[146846]: 2026-02-28T10:02:31Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:02:31 compute-0 ovn_controller[146846]: 2026-02-28T10:02:31Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:6e:bc 10.100.0.5
Feb 28 10:02:31 compute-0 nova_compute[243452]: 2026-02-28 10:02:31.215 243456 INFO nova.compute.manager [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 28 10:02:31 compute-0 nova_compute[243452]: 2026-02-28 10:02:31.215 243456 DEBUG oslo.service.loopingcall [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:02:31 compute-0 nova_compute[243452]: 2026-02-28 10:02:31.216 243456 DEBUG nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:02:31 compute-0 nova_compute[243452]: 2026-02-28 10:02:31.216 243456 DEBUG nova.network.neutron [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:02:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 293 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.218 243456 DEBUG nova.network.neutron [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.237 243456 INFO nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Took 1.02 seconds to deallocate network for instance.
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.293 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.294 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.356 243456 DEBUG oslo_concurrency.processutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:32 compute-0 ceph-mon[76304]: pgmap v995: 305 pgs: 305 active+clean; 293 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Feb 28 10:02:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126105542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.917 243456 DEBUG oslo_concurrency.processutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.923 243456 DEBUG nova.compute.provider_tree [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.943 243456 DEBUG nova.scheduler.client.report [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:32 compute-0 nova_compute[243452]: 2026-02-28 10:02:32.973 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.004 243456 INFO nova.scheduler.client.report [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance 89ced16e-cc50-41d5-bfcb-fa5af85c14c8
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.075 243456 DEBUG oslo_concurrency.lockutils [None req-97d4f397-a16a-4420-8f28-7db9b27340b8 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.392 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.393 243456 DEBUG oslo_concurrency.lockutils [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "89ced16e-cc50-41d5-bfcb-fa5af85c14c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.394 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] No waiting events found dispatching network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.394 243456 WARNING nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received unexpected event network-vif-plugged-eaa5f652-63c2-4a9b-aae0-eec299565322 for instance with vm_state deleted and task_state None.
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.395 243456 DEBUG nova.compute.manager [req-590d0c30-e749-42db-9648-86c8a7887fae req-2e45d788-c1b1-4e8c-a04e-02877848c3a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Received event network-vif-deleted-eaa5f652-63c2-4a9b-aae0-eec299565322 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:33 compute-0 nova_compute[243452]: 2026-02-28 10:02:33.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 270 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 188 op/s
Feb 28 10:02:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3126105542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.476 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.478 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.478 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.479 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.479 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.481 243456 INFO nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Terminating instance
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.483 243456 DEBUG nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:02:34 compute-0 kernel: tap1c6e98f3-e9 (unregistering): left promiscuous mode
Feb 28 10:02:34 compute-0 NetworkManager[49805]: <info>  [1772272954.5471] device (tap1c6e98f3-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 ovn_controller[146846]: 2026-02-28T10:02:34Z|00111|binding|INFO|Releasing lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 from this chassis (sb_readonly=0)
Feb 28 10:02:34 compute-0 ovn_controller[146846]: 2026-02-28T10:02:34Z|00112|binding|INFO|Setting lport 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 down in Southbound
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 ovn_controller[146846]: 2026-02-28T10:02:34Z|00113|binding|INFO|Removing iface tap1c6e98f3-e9 ovn-installed in OVS
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.563 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:6e:bc 10.100.0.5'], port_security=['fa:16:3e:3d:6e:bc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c92e965f-2d18-4b78-8b78-7d391039f382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '339b7f5b41a54615b051fb9d036072dd', 'neutron:revision_number': '10', 'neutron:security_group_ids': '31cd020b-8dfe-4110-958d-e56458213ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4820828d-6ca4-485e-8a8b-c67ac7795fef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.564 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 in datapath ce5045ea-1437-4fd1-bdb3-3fe83470fb24 unbound from our chassis
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.565 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce5045ea-1437-4fd1-bdb3-3fe83470fb24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.566 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5157edad-4be3-4836-b3f3-26e3cb6bd496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.566 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 namespace which is not needed anymore
Feb 28 10:02:34 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 28 10:02:34 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000000d.scope: Consumed 11.799s CPU time.
Feb 28 10:02:34 compute-0 systemd-machined[209480]: Machine qemu-24-instance-0000000d terminated.
Feb 28 10:02:34 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : haproxy version is 2.8.14-c23fe91
Feb 28 10:02:34 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [NOTICE]   (257372) : path to executable is /usr/sbin/haproxy
Feb 28 10:02:34 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [WARNING]  (257372) : Exiting Master process...
Feb 28 10:02:34 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [ALERT]    (257372) : Current worker (257374) exited with code 143 (Terminated)
Feb 28 10:02:34 compute-0 neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24[257366]: [WARNING]  (257372) : All workers exited. Exiting... (0)
Feb 28 10:02:34 compute-0 systemd[1]: libpod-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope: Deactivated successfully.
Feb 28 10:02:34 compute-0 podman[263074]: 2026-02-28 10:02:34.66796979 +0000 UTC m=+0.041309161 container died f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a-userdata-shm.mount: Deactivated successfully.
Feb 28 10:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e22472ecc95bc20b98dcc0462d4cd4b77133e1b85e4f473f2fb73cbcea480a12-merged.mount: Deactivated successfully.
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.719 243456 INFO nova.virt.libvirt.driver [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Instance destroyed successfully.
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.720 243456 DEBUG nova.objects.instance [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lazy-loading 'resources' on Instance uuid c92e965f-2d18-4b78-8b78-7d391039f382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:34 compute-0 podman[263074]: 2026-02-28 10:02:34.723593482 +0000 UTC m=+0.096932823 container cleanup f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:02:34 compute-0 systemd[1]: libpod-conmon-f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a.scope: Deactivated successfully.
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.740 243456 DEBUG nova.virt.libvirt.vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1293627042',display_name='tempest-ServersAdminTestJSON-server-1293627042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1293627042',id=13,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:02:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='339b7f5b41a54615b051fb9d036072dd',ramdisk_id='',reservation_id='r-7hctsnmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1494420313',owner_user_name='tempest-ServersAdminTestJSON-1494420313-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:02:20Z,user_data=None,user_id='4fb1e2bbed9c4e2395c13dba974f8603',uuid=c92e965f-2d18-4b78-8b78-7d391039f382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.741 243456 DEBUG nova.network.os_vif_util [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converting VIF {"id": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "address": "fa:16:3e:3d:6e:bc", "network": {"id": "ce5045ea-1437-4fd1-bdb3-3fe83470fb24", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-628649690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "339b7f5b41a54615b051fb9d036072dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c6e98f3-e9", "ovs_interfaceid": "1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.742 243456 DEBUG nova.network.os_vif_util [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.742 243456 DEBUG os_vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.744 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c6e98f3-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.750 243456 INFO os_vif [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:6e:bc,bridge_name='br-int',has_traffic_filtering=True,id=1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7,network=Network(ce5045ea-1437-4fd1-bdb3-3fe83470fb24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c6e98f3-e9')
Feb 28 10:02:34 compute-0 podman[263116]: 2026-02-28 10:02:34.806355716 +0000 UTC m=+0.053433172 container remove f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.811 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86899cfd-c84d-444c-b6ff-eeb1b2a9217b]: (4, ('Sat Feb 28 10:02:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 (f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a)\nf60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a\nSat Feb 28 10:02:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 (f60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a)\nf60b6be0ea765d66c4f246001c779db6898ae364fb773006e4afd6cb5d47e28a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cff66e8e-14b9-4e04-b1c1-8e1c2b2d1be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.813 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce5045ea-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 kernel: tapce5045ea-10: left promiscuous mode
Feb 28 10:02:34 compute-0 nova_compute[243452]: 2026-02-28 10:02:34.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.828 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7287aca4-8ea4-44fa-9ce6-91bfdfed5373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51ae0c0b-a5ce-4f9e-84e3-45668d3b7bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa28296c-4acb-4f5e-a5ae-702448dbef02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb18736-5188-49ca-9191-2281950e2e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437020, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263149, 'error': None, 'target': 'ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.860 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce5045ea-1437-4fd1-bdb3-3fe83470fb24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:02:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:34.860 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[12b19a97-de0d-4938-b06a-fc8c52f5f81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:02:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dce5045ea\x2d1437\x2d4fd1\x2dbdb3\x2d3fe83470fb24.mount: Deactivated successfully.
Feb 28 10:02:34 compute-0 ceph-mon[76304]: pgmap v996: 305 pgs: 305 active+clean; 270 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 188 op/s
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.071 243456 INFO nova.virt.libvirt.driver [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deleting instance files /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.071 243456 INFO nova.virt.libvirt.driver [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deletion of /var/lib/nova/instances/c92e965f-2d18-4b78-8b78-7d391039f382_del complete
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.112 243456 INFO nova.compute.manager [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG oslo.service.loopingcall [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.113 243456 DEBUG nova.network.neutron [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.495 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.496 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.497 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.498 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-unplugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.499 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.499 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.500 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.500 243456 DEBUG oslo_concurrency.lockutils [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.501 243456 DEBUG nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] No waiting events found dispatching network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.501 243456 WARNING nova.compute.manager [req-ef94d76c-cc3a-4c7c-bd24-47c721ec62fe req-2acdacbc-d94c-4969-bed6-6f8277cbca2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received unexpected event network-vif-plugged-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 for instance with vm_state active and task_state deleting.
Feb 28 10:02:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 210 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 158 op/s
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.791 243456 DEBUG nova.network.neutron [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.815 243456 INFO nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Took 0.70 seconds to deallocate network for instance.
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.877 243456 DEBUG nova.compute.manager [req-a8017b11-95c5-4dd1-a5bb-9bc97c8f1bd3 req-10c43368-35c3-4401-82ff-c1fef46694a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Received event network-vif-deleted-1c6e98f3-e99a-43a4-9c83-7b5a8cac01b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.881 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.882 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:35 compute-0 nova_compute[243452]: 2026-02-28 10:02:35.949 243456 DEBUG oslo_concurrency.processutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288166464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.503 243456 DEBUG oslo_concurrency.processutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.509 243456 DEBUG nova.compute.provider_tree [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.525 243456 DEBUG nova.scheduler.client.report [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.552 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.579 243456 INFO nova.scheduler.client.report [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Deleted allocations for instance c92e965f-2d18-4b78-8b78-7d391039f382
Feb 28 10:02:36 compute-0 nova_compute[243452]: 2026-02-28 10:02:36.643 243456 DEBUG oslo_concurrency.lockutils [None req-3fd0b853-3681-4ceb-ad03-1e1cdf82c19b 4fb1e2bbed9c4e2395c13dba974f8603 339b7f5b41a54615b051fb9d036072dd - - default default] Lock "c92e965f-2d18-4b78-8b78-7d391039f382" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:36 compute-0 ceph-mon[76304]: pgmap v997: 305 pgs: 305 active+clean; 210 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 446 KiB/s rd, 2.6 MiB/s wr, 158 op/s
Feb 28 10:02:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4288166464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.107 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272942.1053064, d2d9bd29-453d-4abd-a3de-c1a9603cfc11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.107 243456 INFO nova.compute.manager [-] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] VM Stopped (Lifecycle Event)
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.136 243456 DEBUG nova.compute.manager [None req-aec7cbeb-01a9-4f0b-b152-bda289859736 - - - - - -] [instance: d2d9bd29-453d-4abd-a3de-c1a9603cfc11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.339 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272942.3376737, 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.339 243456 INFO nova.compute.manager [-] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] VM Stopped (Lifecycle Event)
Feb 28 10:02:37 compute-0 nova_compute[243452]: 2026-02-28 10:02:37.520 243456 DEBUG nova.compute.manager [None req-cf1556eb-6ba8-481f-8a4f-517b97695f69 - - - - - -] [instance: 9bf11a21-b6eb-432e-aa3b-4dd3413f91a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 435 KiB/s rd, 2.5 MiB/s wr, 148 op/s
Feb 28 10:02:37 compute-0 sudo[263173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:02:37 compute-0 sudo[263173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:37 compute-0 sudo[263173]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:37 compute-0 sudo[263198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:02:37 compute-0 sudo[263198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:38 compute-0 nova_compute[243452]: 2026-02-28 10:02:38.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:38 compute-0 sudo[263198]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:02:38 compute-0 sudo[263254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:02:38 compute-0 sudo[263254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:38 compute-0 sudo[263254]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:38 compute-0 sudo[263279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:02:38 compute-0 sudo[263279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:38 compute-0 ceph-mon[76304]: pgmap v998: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 435 KiB/s rd, 2.5 MiB/s wr, 148 op/s
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:02:38 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:02:38 compute-0 podman[263316]: 2026-02-28 10:02:38.944062044 +0000 UTC m=+0.059541313 container create f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:02:38 compute-0 systemd[1]: Started libpod-conmon-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope.
Feb 28 10:02:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:39.017466495 +0000 UTC m=+0.132945774 container init f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:38.925298307 +0000 UTC m=+0.040777616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:39.022093595 +0000 UTC m=+0.137572874 container start f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:39.025659285 +0000 UTC m=+0.141138544 container attach f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:02:39 compute-0 adoring_panini[263332]: 167 167
Feb 28 10:02:39 compute-0 systemd[1]: libpod-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope: Deactivated successfully.
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:39.026776147 +0000 UTC m=+0.142255406 container died f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2b2f5ec45ff294f5bb9a1bd756d8d62490082c0c0467165e361c5359be4bc9c-merged.mount: Deactivated successfully.
Feb 28 10:02:39 compute-0 podman[263316]: 2026-02-28 10:02:39.065166395 +0000 UTC m=+0.180645664 container remove f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_panini, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:02:39 compute-0 systemd[1]: libpod-conmon-f94076da59edee8ef7e3b166497e251b9fadbe4d26bf6de1e4057ee149c49594.scope: Deactivated successfully.
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.23379448 +0000 UTC m=+0.067397604 container create 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:02:39 compute-0 systemd[1]: Started libpod-conmon-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope.
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.205359541 +0000 UTC m=+0.038962675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.337949805 +0000 UTC m=+0.171552929 container init 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.345194138 +0000 UTC m=+0.178797262 container start 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.349508749 +0000 UTC m=+0.183111873 container attach 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:02:39 compute-0 magical_herschel[263372]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:02:39 compute-0 magical_herschel[263372]: --> All data devices are unavailable
Feb 28 10:02:39 compute-0 nova_compute[243452]: 2026-02-28 10:02:39.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:39 compute-0 systemd[1]: libpod-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope: Deactivated successfully.
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.775742798 +0000 UTC m=+0.609345922 container died 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:02:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Feb 28 10:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-adfcfa03f3d8f24faef9877f5ffd31c96d799f86183d87ef2ad35f8953e2e454-merged.mount: Deactivated successfully.
Feb 28 10:02:39 compute-0 podman[263356]: 2026-02-28 10:02:39.833624764 +0000 UTC m=+0.667227848 container remove 8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:02:39 compute-0 systemd[1]: libpod-conmon-8eed6fe56ea477dbea14ab668f16245e7c83ad05b03e7712496b38cbe1ab3641.scope: Deactivated successfully.
Feb 28 10:02:39 compute-0 sudo[263279]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:39 compute-0 sudo[263403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:02:39 compute-0 sudo[263403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:39 compute-0 sudo[263403]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:40 compute-0 sudo[263428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:02:40 compute-0 sudo[263428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.261607102 +0000 UTC m=+0.054324107 container create a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:02:40 compute-0 systemd[1]: Started libpod-conmon-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope.
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.237638919 +0000 UTC m=+0.030355944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.361574449 +0000 UTC m=+0.154291494 container init a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.369597404 +0000 UTC m=+0.162314409 container start a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.374366638 +0000 UTC m=+0.167083663 container attach a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:02:40 compute-0 nervous_elgamal[263483]: 167 167
Feb 28 10:02:40 compute-0 systemd[1]: libpod-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope: Deactivated successfully.
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.377376752 +0000 UTC m=+0.170093757 container died a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b863e144c0f8a9a7e27d576c18c967301c226f48804f6340a5709c3b0ea5690-merged.mount: Deactivated successfully.
Feb 28 10:02:40 compute-0 podman[263466]: 2026-02-28 10:02:40.422133819 +0000 UTC m=+0.214850804 container remove a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:02:40 compute-0 systemd[1]: libpod-conmon-a3370b59cfd51cf1e117786e10634e243d8aa87697befceb1d2f28626da8e2af.scope: Deactivated successfully.
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.581300249 +0000 UTC m=+0.044962244 container create 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:02:40 compute-0 systemd[1]: Started libpod-conmon-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope.
Feb 28 10:02:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002967955940844235 of space, bias 1.0, pg target 0.08903867822532706 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024910402878611446 of space, bias 1.0, pg target 0.7473120863583433 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.707674241480498e-07 of space, bias 4.0, pg target 0.0011649209089776597 quantized to 16 (current 16)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:02:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.639680018 +0000 UTC m=+0.103342013 container init 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.653018523 +0000 UTC m=+0.116680518 container start 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.657520989 +0000 UTC m=+0.121182984 container attach 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.566549305 +0000 UTC m=+0.030211320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:40 compute-0 nova_compute[243452]: 2026-02-28 10:02:40.861 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272945.8599164, 08147934-b9df-4154-8d1f-3fd318973eb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:40 compute-0 nova_compute[243452]: 2026-02-28 10:02:40.862 243456 INFO nova.compute.manager [-] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] VM Stopped (Lifecycle Event)
Feb 28 10:02:40 compute-0 nova_compute[243452]: 2026-02-28 10:02:40.882 243456 DEBUG nova.compute.manager [None req-eeafceef-2488-4e49-8dec-95253ebfb150 - - - - - -] [instance: 08147934-b9df-4154-8d1f-3fd318973eb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:40 compute-0 ceph-mon[76304]: pgmap v999: 305 pgs: 305 active+clean; 182 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]: {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     "0": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "devices": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "/dev/loop3"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             ],
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_name": "ceph_lv0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_size": "21470642176",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "name": "ceph_lv0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "tags": {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_name": "ceph",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.crush_device_class": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.encrypted": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.objectstore": "bluestore",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_id": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.vdo": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.with_tpm": "0"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             },
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "vg_name": "ceph_vg0"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         }
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     ],
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     "1": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "devices": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "/dev/loop4"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             ],
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_name": "ceph_lv1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_size": "21470642176",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "name": "ceph_lv1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "tags": {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_name": "ceph",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.crush_device_class": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.encrypted": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.objectstore": "bluestore",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_id": "1",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.vdo": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.with_tpm": "0"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             },
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "vg_name": "ceph_vg1"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         }
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     ],
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     "2": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "devices": [
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "/dev/loop5"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             ],
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_name": "ceph_lv2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_size": "21470642176",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "name": "ceph_lv2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "tags": {
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.cluster_name": "ceph",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.crush_device_class": "",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.encrypted": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.objectstore": "bluestore",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osd_id": "2",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.vdo": "0",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:                 "ceph.with_tpm": "0"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             },
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "type": "block",
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:             "vg_name": "ceph_vg2"
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:         }
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]:     ]
Feb 28 10:02:40 compute-0 laughing_hofstadter[263523]: }
Feb 28 10:02:40 compute-0 systemd[1]: libpod-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope: Deactivated successfully.
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.932927282 +0000 UTC m=+0.396589317 container died 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c3174da55ee8b3d2e58419209506650088651e9e0ef833d5f41827470e096d4-merged.mount: Deactivated successfully.
Feb 28 10:02:40 compute-0 podman[263507]: 2026-02-28 10:02:40.982318339 +0000 UTC m=+0.445980384 container remove 013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:02:40 compute-0 systemd[1]: libpod-conmon-013754a8ce9bdc1cee854947be9bcfadaaab52868ac4c19ba2a72a884abbb390.scope: Deactivated successfully.
Feb 28 10:02:41 compute-0 sudo[263428]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:41 compute-0 sudo[263544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:02:41 compute-0 sudo[263544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:41 compute-0 sudo[263544]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:41 compute-0 sudo[263569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:02:41 compute-0 sudo[263569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.434745713 +0000 UTC m=+0.046542848 container create efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:02:41 compute-0 systemd[1]: Started libpod-conmon-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope.
Feb 28 10:02:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.420164954 +0000 UTC m=+0.031962109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.519958886 +0000 UTC m=+0.131756081 container init efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.527746745 +0000 UTC m=+0.139543920 container start efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.532839918 +0000 UTC m=+0.144637083 container attach efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 28 10:02:41 compute-0 laughing_tharp[263623]: 167 167
Feb 28 10:02:41 compute-0 systemd[1]: libpod-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope: Deactivated successfully.
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.535892203 +0000 UTC m=+0.147689408 container died efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-14e9277274e80614e84bb088bc5927bbf0ff48f172497f856814f31a281bbddd-merged.mount: Deactivated successfully.
Feb 28 10:02:41 compute-0 podman[263606]: 2026-02-28 10:02:41.582971535 +0000 UTC m=+0.194768670 container remove efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:02:41 compute-0 systemd[1]: libpod-conmon-efcb91fdd3b5b25baabc6fd467d99a8a29d56fd81c8ffbdf246c4a6c457cc8cb.scope: Deactivated successfully.
Feb 28 10:02:41 compute-0 podman[263647]: 2026-02-28 10:02:41.743311268 +0000 UTC m=+0.048205695 container create 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:02:41 compute-0 systemd[1]: Started libpod-conmon-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope.
Feb 28 10:02:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:02:41 compute-0 podman[263647]: 2026-02-28 10:02:41.719560881 +0000 UTC m=+0.024455278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:02:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:02:41 compute-0 podman[263647]: 2026-02-28 10:02:41.857914746 +0000 UTC m=+0.162809233 container init 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:02:41 compute-0 podman[263647]: 2026-02-28 10:02:41.867267489 +0000 UTC m=+0.172161886 container start 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:02:41 compute-0 podman[263647]: 2026-02-28 10:02:41.870775737 +0000 UTC m=+0.175670214 container attach 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:02:41 compute-0 nova_compute[243452]: 2026-02-28 10:02:41.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:42 compute-0 lvm[263742]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:02:42 compute-0 lvm[263743]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:02:42 compute-0 lvm[263742]: VG ceph_vg0 finished
Feb 28 10:02:42 compute-0 lvm[263743]: VG ceph_vg1 finished
Feb 28 10:02:42 compute-0 lvm[263745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:02:42 compute-0 lvm[263745]: VG ceph_vg2 finished
Feb 28 10:02:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:42 compute-0 goofy_dubinsky[263664]: {}
Feb 28 10:02:42 compute-0 systemd[1]: libpod-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Deactivated successfully.
Feb 28 10:02:42 compute-0 podman[263647]: 2026-02-28 10:02:42.754336538 +0000 UTC m=+1.059230955 container died 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:02:42 compute-0 systemd[1]: libpod-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Consumed 1.270s CPU time.
Feb 28 10:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8341ffd6fcbd14166acff11e2cc6347f1d587add91016f571060a269ec721ba6-merged.mount: Deactivated successfully.
Feb 28 10:02:42 compute-0 podman[263647]: 2026-02-28 10:02:42.808042326 +0000 UTC m=+1.112936753 container remove 3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:02:42 compute-0 systemd[1]: libpod-conmon-3ac9ce535a52a34c2ee94dbc194ae1cb3051278dfd8c37b973609f8decf02b05.scope: Deactivated successfully.
Feb 28 10:02:42 compute-0 sudo[263569]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:02:42 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:02:42 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:42 compute-0 ceph-mon[76304]: pgmap v1000: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:02:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:42 compute-0 sudo[263759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:02:42 compute-0 sudo[263759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:02:42 compute-0 sudo[263759]: pam_unix(sudo:session): session closed for user root
Feb 28 10:02:43 compute-0 nova_compute[243452]: 2026-02-28 10:02:43.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.1 MiB/s wr, 81 op/s
Feb 28 10:02:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:02:44 compute-0 nova_compute[243452]: 2026-02-28 10:02:44.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:44 compute-0 nova_compute[243452]: 2026-02-28 10:02:44.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:44.809 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:02:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:44.811 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:02:44 compute-0 ceph-mon[76304]: pgmap v1001: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 1.1 MiB/s wr, 81 op/s
Feb 28 10:02:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:02:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:02:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:02:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:02:45 compute-0 nova_compute[243452]: 2026-02-28 10:02:45.691 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272950.690185, 89ced16e-cc50-41d5-bfcb-fa5af85c14c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:45 compute-0 nova_compute[243452]: 2026-02-28 10:02:45.691 243456 INFO nova.compute.manager [-] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] VM Stopped (Lifecycle Event)
Feb 28 10:02:45 compute-0 nova_compute[243452]: 2026-02-28 10:02:45.715 243456 DEBUG nova.compute.manager [None req-3b475f93-03d6-44a9-8f3a-a3f81a314046 - - - - - -] [instance: 89ced16e-cc50-41d5-bfcb-fa5af85c14c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 45 KiB/s wr, 44 op/s
Feb 28 10:02:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:02:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3451699820' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:02:46 compute-0 ceph-mon[76304]: pgmap v1002: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 45 KiB/s wr, 44 op/s
Feb 28 10:02:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 10:02:48 compute-0 nova_compute[243452]: 2026-02-28 10:02:48.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:48 compute-0 ceph-mon[76304]: pgmap v1003: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 10:02:49 compute-0 nova_compute[243452]: 2026-02-28 10:02:49.717 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772272954.7157624, c92e965f-2d18-4b78-8b78-7d391039f382 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:02:49 compute-0 nova_compute[243452]: 2026-02-28 10:02:49.718 243456 INFO nova.compute.manager [-] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] VM Stopped (Lifecycle Event)
Feb 28 10:02:49 compute-0 nova_compute[243452]: 2026-02-28 10:02:49.740 243456 DEBUG nova.compute.manager [None req-9459368a-1629-42fd-b264-ee953fcbb798 - - - - - -] [instance: c92e965f-2d18-4b78-8b78-7d391039f382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:02:49 compute-0 nova_compute[243452]: 2026-02-28 10:02:49.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 10:02:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:50.813 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:02:50 compute-0 ceph-mon[76304]: pgmap v1004: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 10:02:51 compute-0 podman[263785]: 2026-02-28 10:02:51.128049266 +0000 UTC m=+0.062615490 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:02:51 compute-0 podman[263784]: 2026-02-28 10:02:51.165353053 +0000 UTC m=+0.098996841 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:02:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.495 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.496 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.513 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.584 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.585 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.595 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.596 243456 INFO nova.compute.claims [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:02:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:52 compute-0 nova_compute[243452]: 2026-02-28 10:02:52.715 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:52 compute-0 ceph-mon[76304]: pgmap v1005: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 852 B/s wr, 12 op/s
Feb 28 10:02:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:02:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121478681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.308 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.316 243456 DEBUG nova.compute.provider_tree [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.334 243456 DEBUG nova.scheduler.client.report [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.359 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.360 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.408 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.409 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.442 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.461 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.547 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.549 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.550 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating image(s)
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.584 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.620 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.656 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.661 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.735 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.737 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.738 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.738 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.778 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:53 compute-0 nova_compute[243452]: 2026-02-28 10:02:53.783 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 540aa538-279f-4645-a7a1-03fa5c859440_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail
Feb 28 10:02:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3121478681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.004 243456 DEBUG nova.policy [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '678df28a33b147768bd6e1e5d3b17ccf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ee8273ac001494c973c44a7dd357180', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.070 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 540aa538-279f-4645-a7a1-03fa5c859440_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.138 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] resizing rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.224 243456 DEBUG nova.objects.instance [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'migration_context' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.237 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.237 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Ensure instance console log exists: /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:54 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.238 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:55 compute-0 nova_compute[243452]: 2026-02-28 10:02:54.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:55 compute-0 nova_compute[243452]: 2026-02-28 10:02:55.202 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Successfully created port: f4b2f0ef-feda-437c-96c7-92a0645bceb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:02:55 compute-0 ceph-mon[76304]: pgmap v1006: 305 pgs: 305 active+clean; 153 MiB data, 399 MiB used, 60 GiB / 60 GiB avail
Feb 28 10:02:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 171 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 457 KiB/s wr, 24 op/s
Feb 28 10:02:56 compute-0 ceph-mon[76304]: pgmap v1007: 305 pgs: 305 active+clean; 171 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 457 KiB/s wr, 24 op/s
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.086 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Successfully updated port: f4b2f0ef-feda-437c-96c7-92a0645bceb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.107 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.108 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.108 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.250 243456 DEBUG nova.compute.manager [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.251 243456 DEBUG nova.compute.manager [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.251 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:02:57 compute-0 nova_compute[243452]: 2026-02-28 10:02:57.605 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:02:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:02:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:02:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.823 243456 DEBUG nova.network.neutron [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.845 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.846 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance network_info: |[{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.848 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.848 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.855 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start _get_guest_xml network_info=[{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:02:58 compute-0 ceph-mon[76304]: pgmap v1008: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.864 243456 WARNING nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.872 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.873 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.876 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.libvirt.host [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.877 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.878 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.878 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.879 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.880 243456 DEBUG nova.virt.hardware [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:02:58 compute-0 nova_compute[243452]: 2026-02-28 10:02:58.883 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:02:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568348272' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:02:59 compute-0 nova_compute[243452]: 2026-02-28 10:02:59.416 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:02:59 compute-0 nova_compute[243452]: 2026-02-28 10:02:59.446 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:02:59 compute-0 nova_compute[243452]: 2026-02-28 10:02:59.450 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:02:59 compute-0 nova_compute[243452]: 2026-02-28 10:02:59.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:02:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:02:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3568348272' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:03:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231140852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.020 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.023 243456 DEBUG nova.virt.libvirt.vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:53Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.024 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.026 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.028 243456 DEBUG nova.objects.instance [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'pci_devices' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.049 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <uuid>540aa538-279f-4645-a7a1-03fa5c859440</uuid>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <name>instance-00000016</name>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905</nova:name>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:02:58</nova:creationTime>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:user uuid="678df28a33b147768bd6e1e5d3b17ccf">tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member</nova:user>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:project uuid="2ee8273ac001494c973c44a7dd357180">tempest-FloatingIPsAssociationNegativeTestJSON-1398433179</nova:project>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <nova:port uuid="f4b2f0ef-feda-437c-96c7-92a0645bceb9">
Feb 28 10:03:00 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <system>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="serial">540aa538-279f-4645-a7a1-03fa5c859440</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="uuid">540aa538-279f-4645-a7a1-03fa5c859440</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </system>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <os>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </os>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <features>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </features>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/540aa538-279f-4645-a7a1-03fa5c859440_disk">
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </source>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/540aa538-279f-4645-a7a1-03fa5c859440_disk.config">
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </source>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:03:00 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bd:c8:f4"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <target dev="tapf4b2f0ef-fe"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/console.log" append="off"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <video>
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </video>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:03:00 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:03:00 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:03:00 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:03:00 compute-0 nova_compute[243452]: </domain>
Feb 28 10:03:00 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.051 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Preparing to wait for external event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.051 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.052 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.052 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.053 243456 DEBUG nova.virt.libvirt.vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:02:53Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.054 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.055 243456 DEBUG nova.network.os_vif_util [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.056 243456 DEBUG os_vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.058 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.059 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.065 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4b2f0ef-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.066 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4b2f0ef-fe, col_values=(('external_ids', {'iface-id': 'f4b2f0ef-feda-437c-96c7-92a0645bceb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:c8:f4', 'vm-uuid': '540aa538-279f-4645-a7a1-03fa5c859440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:00 compute-0 NetworkManager[49805]: <info>  [1772272980.0696] manager: (tapf4b2f0ef-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.082 243456 INFO os_vif [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe')
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.135 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.137 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.137 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] No VIF found with MAC fa:16:3e:bd:c8:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.138 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Using config drive
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.173 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.817 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Creating config drive at /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.824 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwooamopn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:00 compute-0 ceph-mon[76304]: pgmap v1009: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:03:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4231140852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.952 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwooamopn" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.974 243456 DEBUG nova.storage.rbd_utils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] rbd image 540aa538-279f-4645-a7a1-03fa5c859440_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:00 compute-0 nova_compute[243452]: 2026-02-28 10:03:00.978 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config 540aa538-279f-4645-a7a1-03fa5c859440_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.103 243456 DEBUG oslo_concurrency.processutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config 540aa538-279f-4645-a7a1-03fa5c859440_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.104 243456 INFO nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deleting local config drive /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440/disk.config because it was imported into RBD.
Feb 28 10:03:01 compute-0 kernel: tapf4b2f0ef-fe: entered promiscuous mode
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.1652] manager: (tapf4b2f0ef-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Feb 28 10:03:01 compute-0 ovn_controller[146846]: 2026-02-28T10:03:01Z|00114|binding|INFO|Claiming lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 for this chassis.
Feb 28 10:03:01 compute-0 ovn_controller[146846]: 2026-02-28T10:03:01Z|00115|binding|INFO|f4b2f0ef-feda-437c-96c7-92a0645bceb9: Claiming fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.188 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c8:f4 10.100.0.9'], port_security=['fa:16:3e:bd:c8:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '540aa538-279f-4645-a7a1-03fa5c859440', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ee8273ac001494c973c44a7dd357180', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26c27b27-9608-4c57-8427-9c45b7a72eae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c643f549-0af1-46f6-9870-09f9117f13fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f4b2f0ef-feda-437c-96c7-92a0645bceb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.190 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f4b2f0ef-feda-437c-96c7-92a0645bceb9 in datapath a11c3342-4f74-40c1-a9f3-ae18f9be9d19 bound to our chassis
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.191 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a11c3342-4f74-40c1-a9f3-ae18f9be9d19
Feb 28 10:03:01 compute-0 systemd-machined[209480]: New machine qemu-26-instance-00000016.
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.200 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5f26b837-6aa8-4cfe-8dda-89c1b3a2557c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.202 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa11c3342-41 in ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.204 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa11c3342-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe5630d-20a8-4a73-b889-ad09e11a5969]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b882321-29b8-4d95-9690-0e19b06774d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000016.
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.220 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6659914a-65af-40a0-8503-20be4145eee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_controller[146846]: 2026-02-28T10:03:01Z|00116|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 ovn-installed in OVS
Feb 28 10:03:01 compute-0 ovn_controller[146846]: 2026-02-28T10:03:01Z|00117|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 up in Southbound
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 systemd-udevd[264154]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f15a9024-aae9-4e3c-ad91-79efc7adda3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.241 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.242 243456 DEBUG nova.network.neutron [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.2536] device (tapf4b2f0ef-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.2547] device (tapf4b2f0ef-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.260 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a095e8f2-1848-4edf-853b-2c913243db63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.2662] manager: (tapa11c3342-40): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.265 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf4d1da-6eea-4d7a-856f-33ca5b286a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.271 243456 DEBUG oslo_concurrency.lockutils [req-e7c3eff8-dcd1-4c59-b811-225f40cf187a req-926971e1-e9e0-4877-991a-83d98b746377 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.293 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af209c46-d80a-40d9-becf-6c9c14605161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.296 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7131946c-4fbc-4575-abbe-4b1717c00061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.3194] device (tapa11c3342-40): carrier: link connected
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.325 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ac325c59-fc35-48e3-ae78-b1241511b6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e0023c-e637-47a8-9afc-2310583faca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa11c3342-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:d3:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448860, 'reachable_time': 17153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264184, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.356 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[980b4fae-4170-4510-9df4-70a494cb8454]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:d346'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448860, 'tstamp': 448860}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264185, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6951edd-48a1-42fd-8052-ccdfbd044b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa11c3342-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:d3:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448860, 'reachable_time': 17153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264186, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae02b448-740b-4590-a10f-e34783b9ac4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9314078c-4016-4ecf-9389-701a9220a1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.444 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa11c3342-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.444 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.445 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa11c3342-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:01 compute-0 NetworkManager[49805]: <info>  [1772272981.4476] manager: (tapa11c3342-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 28 10:03:01 compute-0 kernel: tapa11c3342-40: entered promiscuous mode
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.451 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa11c3342-40, col_values=(('external_ids', {'iface-id': '8314f3c1-f6f7-4153-871f-5211928b3006'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 ovn_controller[146846]: 2026-02-28T10:03:01Z|00118|binding|INFO|Releasing lport 8314f3c1-f6f7-4153-871f-5211928b3006 from this chassis (sb_readonly=0)
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.466 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa159a-bd46-4ce6-adcf-329ce636d083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.467 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-a11c3342-4f74-40c1-a9f3-ae18f9be9d19
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.pid.haproxy
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID a11c3342-4f74-40c1-a9f3-ae18f9be9d19
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:03:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:01.469 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'env', 'PROCESS_TAG=haproxy-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a11c3342-4f74-40c1-a9f3-ae18f9be9d19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG nova.compute.manager [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.480 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.481 243456 DEBUG oslo_concurrency.lockutils [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:01 compute-0 nova_compute[243452]: 2026-02-28 10:03:01.481 243456 DEBUG nova.compute.manager [req-4a7cb7f1-b85a-425c-9a83-20abd2ef4490 req-b8275c26-646a-4d0a-aa16-1f40756fa1e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Processing event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:03:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:03:01 compute-0 podman[264216]: 2026-02-28 10:03:01.855796595 +0000 UTC m=+0.073054872 container create 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:03:01 compute-0 podman[264216]: 2026-02-28 10:03:01.819366732 +0000 UTC m=+0.036625039 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:03:01 compute-0 systemd[1]: Started libpod-conmon-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope.
Feb 28 10:03:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3537f5f8d55cebfe7fda6f58cc62d454caeb57ea64eac96f07c6f8abf46f3522/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:01 compute-0 podman[264216]: 2026-02-28 10:03:01.950616888 +0000 UTC m=+0.167875225 container init 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:03:01 compute-0 podman[264216]: 2026-02-28 10:03:01.958297173 +0000 UTC m=+0.175555450 container start 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:03:01 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : New worker (264237) forked
Feb 28 10:03:01 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : Loading success.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.399 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.3982081, 540aa538-279f-4645-a7a1-03fa5c859440 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.399 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Started (Lifecycle Event)
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.402 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.407 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.411 243456 INFO nova.virt.libvirt.driver [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance spawned successfully.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.413 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.526 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.534 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.539 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.540 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.540 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.541 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.542 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.543 243456 DEBUG nova.virt.libvirt.driver [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.3986197, 540aa538-279f-4645-a7a1-03fa5c859440 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Paused (Lifecycle Event)
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.600 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.606 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272982.405823, 540aa538-279f-4645-a7a1-03fa5c859440 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.607 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Resumed (Lifecycle Event)
Feb 28 10:03:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.612 243456 INFO nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 9.06 seconds to spawn the instance on the hypervisor.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.613 243456 DEBUG nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.624 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.678 243456 INFO nova.compute.manager [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 10.12 seconds to build instance.
Feb 28 10:03:02 compute-0 nova_compute[243452]: 2026-02-28 10:03:02.710 243456 DEBUG oslo_concurrency.lockutils [None req-01c78e40-90ab-462c-9f98-9b12783df169 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:02 compute-0 ceph-mon[76304]: pgmap v1010: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.377 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.378 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.394 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.493 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.494 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.503 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.503 243456 INFO nova.compute.claims [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.569 243456 DEBUG nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.570 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.570 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.571 243456 DEBUG oslo_concurrency.lockutils [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.571 243456 DEBUG nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.572 243456 WARNING nova.compute.manager [req-f42b31ee-42cb-4874-ace3-0dfd22de58e1 req-be82b42e-143a-463e-b21d-55b69617db87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received unexpected event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with vm_state active and task_state None.
Feb 28 10:03:03 compute-0 nova_compute[243452]: 2026-02-28 10:03:03.663 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 10:03:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532414934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.210 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.215 243456 DEBUG nova.compute.provider_tree [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.231 243456 DEBUG nova.scheduler.client.report [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.260 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.261 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.313 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.314 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.334 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.350 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.425 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.427 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.427 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating image(s)
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.443 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.463 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.483 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.486 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.528 243456 DEBUG nova.policy [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2973b4a88c3d4417a732901954bb6c6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.550 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.551 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.551 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.552 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.570 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.574 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff148d2a-2dba-45c2-b726-78423f3ccedc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.778 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff148d2a-2dba-45c2-b726-78423f3ccedc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.842 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] resizing rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:03:04 compute-0 ceph-mon[76304]: pgmap v1011: 305 pgs: 305 active+clean; 200 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 10:03:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2532414934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.928 243456 DEBUG nova.objects.instance [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'migration_context' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.949 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.949 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Ensure instance console log exists: /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:04 compute-0 nova_compute[243452]: 2026-02-28 10:03:04.950 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:05 compute-0 nova_compute[243452]: 2026-02-28 10:03:05.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 220 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Feb 28 10:03:05 compute-0 nova_compute[243452]: 2026-02-28 10:03:05.918 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Successfully created port: 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.727 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Successfully updated port: 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.747 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.748 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.749 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.806 243456 DEBUG nova.compute.manager [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.807 243456 DEBUG nova.compute.manager [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing instance network info cache due to event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:06 compute-0 nova_compute[243452]: 2026-02-28 10:03:06.807 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:06 compute-0 ceph-mon[76304]: pgmap v1012: 305 pgs: 305 active+clean; 220 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Feb 28 10:03:07 compute-0 nova_compute[243452]: 2026-02-28 10:03:07.025 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:07 compute-0 nova_compute[243452]: 2026-02-28 10:03:07.027 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:07 compute-0 nova_compute[243452]: 2026-02-28 10:03:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:07 compute-0 nova_compute[243452]: 2026-02-28 10:03:07.765 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:03:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.344 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:03:08 compute-0 NetworkManager[49805]: <info>  [1772272988.4142] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Feb 28 10:03:08 compute-0 NetworkManager[49805]: <info>  [1772272988.4148] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:08 compute-0 ovn_controller[146846]: 2026-02-28T10:03:08Z|00119|binding|INFO|Releasing lport 8314f3c1-f6f7-4153-871f-5211928b3006 from this chassis (sb_readonly=0)
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.470 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.574 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.574 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.739 243456 DEBUG nova.network.neutron [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.765 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.765 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance network_info: |[{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.766 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.767 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.772 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start _get_guest_xml network_info=[{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.778 243456 WARNING nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.782 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.783 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.794 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.795 243456 DEBUG nova.virt.libvirt.host [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.796 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.796 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.797 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.798 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.799 243456 DEBUG nova.virt.hardware [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:03:08 compute-0 nova_compute[243452]: 2026-02-28 10:03:08.802 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:08 compute-0 ceph-mon[76304]: pgmap v1013: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.149 243456 DEBUG nova.compute.manager [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.149 243456 DEBUG nova.compute.manager [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.150 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:03:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446439942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.359 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.388 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.392 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 10:03:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1446439942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:03:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877928499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.950 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.952 243456 DEBUG nova.virt.libvirt.vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.952 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.953 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.954 243456 DEBUG nova.objects.instance [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'pci_devices' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.970 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <uuid>ff148d2a-2dba-45c2-b726-78423f3ccedc</uuid>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <name>instance-00000017</name>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-1118743636</nova:name>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:03:08</nova:creationTime>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:user uuid="2973b4a88c3d4417a732901954bb6c6a">tempest-ServersTestJSON-595608049-project-member</nova:user>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:project uuid="dd177a65e6274abe9b6091a2f34c319c">tempest-ServersTestJSON-595608049</nova:project>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <nova:port uuid="86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0">
Feb 28 10:03:09 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <system>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="serial">ff148d2a-2dba-45c2-b726-78423f3ccedc</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="uuid">ff148d2a-2dba-45c2-b726-78423f3ccedc</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </system>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <os>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </os>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <features>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </features>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ff148d2a-2dba-45c2-b726-78423f3ccedc_disk">
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config">
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:03:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e8:ef:ce"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <target dev="tap86a5b3dc-16"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/console.log" append="off"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <video>
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </video>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:03:09 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:03:09 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:03:09 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:03:09 compute-0 nova_compute[243452]: </domain>
Feb 28 10:03:09 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.971 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Preparing to wait for external event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.971 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.972 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.972 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.973 243456 DEBUG nova.virt.libvirt.vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.973 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.974 243456 DEBUG nova.network.os_vif_util [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.974 243456 DEBUG os_vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.975 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86a5b3dc-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86a5b3dc-16, col_values=(('external_ids', {'iface-id': '86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:ef:ce', 'vm-uuid': 'ff148d2a-2dba-45c2-b726-78423f3ccedc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:09 compute-0 NetworkManager[49805]: <info>  [1772272989.9816] manager: (tap86a5b3dc-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:09 compute-0 nova_compute[243452]: 2026-02-28 10:03:09.990 243456 INFO os_vif [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16')
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.042 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] No VIF found with MAC fa:16:3e:e8:ef:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.043 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Using config drive
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.061 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:03:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4834 writes, 21K keys, 4834 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4834 writes, 4834 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1449 writes, 6513 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 9.07 MB, 0.02 MB/s
                                           Interval WAL: 1449 writes, 1449 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     53.7      0.46              0.06        12    0.038       0      0       0.0       0.0
                                             L6      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    141.5    116.3      0.69              0.19        11    0.062     48K   5772       0.0       0.0
                                            Sum      1/0    7.14 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     85.1     91.3      1.14              0.25        23    0.050     48K   5772       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     93.6     93.8      0.49              0.10        10    0.049     23K   2572       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    141.5    116.3      0.69              0.19        11    0.062     48K   5772       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.2      0.45              0.06        11    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.1 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 9.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000156 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(559,8.68 MB,2.85454%) FilterBlock(24,141.67 KB,0.0455103%) IndexBlock(24,269.58 KB,0.0865986%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:03:10 compute-0 ceph-mon[76304]: pgmap v1014: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 10:03:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1877928499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:03:10 compute-0 nova_compute[243452]: 2026-02-28 10:03:10.997 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Creating config drive at /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.002 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgnbrt6by execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.095 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.111 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updated VIF entry in instance network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.112 243456 DEBUG nova.network.neutron [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.117 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.117 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.118 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.118 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.120 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.121 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.122 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.127 243456 DEBUG oslo_concurrency.lockutils [req-067d5bb9-e0e5-4329-a142-754bfca8797a req-19da5fdc-fc75-400e-8515-abb4310bedb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.129 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgnbrt6by" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.158 243456 DEBUG nova.storage.rbd_utils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] rbd image ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.161 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.201 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.202 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.302 243456 DEBUG oslo_concurrency.processutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config ff148d2a-2dba-45c2-b726-78423f3ccedc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.303 243456 INFO nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deleting local config drive /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc/disk.config because it was imported into RBD.
Feb 28 10:03:11 compute-0 kernel: tap86a5b3dc-16: entered promiscuous mode
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.3464] manager: (tap86a5b3dc-16): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Feb 28 10:03:11 compute-0 ovn_controller[146846]: 2026-02-28T10:03:11Z|00120|binding|INFO|Claiming lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for this chassis.
Feb 28 10:03:11 compute-0 ovn_controller[146846]: 2026-02-28T10:03:11Z|00121|binding|INFO|86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0: Claiming fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.351 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.360 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:ef:ce 10.100.0.5'], port_security=['fa:16:3e:e8:ef:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff148d2a-2dba-45c2-b726-78423f3ccedc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c3895c-56bd-4273-9e68-83c2fcec4532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '177d1676-9b22-4402-a0f3-24e0d0ec2ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab32c8ab-3085-42dc-845f-ce2535981325, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:03:11 compute-0 ovn_controller[146846]: 2026-02-28T10:03:11Z|00122|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 ovn-installed in OVS
Feb 28 10:03:11 compute-0 ovn_controller[146846]: 2026-02-28T10:03:11Z|00123|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 up in Southbound
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.364 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 in datapath 82c3895c-56bd-4273-9e68-83c2fcec4532 bound to our chassis
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.365 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82c3895c-56bd-4273-9e68-83c2fcec4532
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 systemd-udevd[264632]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:03:11 compute-0 systemd-machined[209480]: New machine qemu-27-instance-00000017.
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e184d5-3711-4446-8231-a0b77d4c895e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.381 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82c3895c-51 in ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.383 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82c3895c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc76f9c-209a-41cc-acec-7eb7cbe0990c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.384 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c57e61b-f73f-4f7f-9245-bfe2fb10e390]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.3859] device (tap86a5b3dc-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.3864] device (tap86a5b3dc-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:03:11 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000017.
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.399 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ff942f31-32cb-40fa-86b5-12fd398f01bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.421 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[269df904-949d-48c3-bc97-1d2b75a0babf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.463 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb2c2fa-e323-4185-ac5c-ebcfa0e15cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcd9611-a8a9-4cce-80cb-62a49a9d24f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.4705] manager: (tap82c3895c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.500 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[18cd12d6-6622-47f0-abda-0402153debee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.504 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b3546fc7-4a87-4e35-bfe9-7d3b3afcae43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.5214] device (tap82c3895c-50): carrier: link connected
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.526 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f916b6f5-2d58-44a6-9e4d-7b7f25c4d573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d69e38ba-9911-466a-84a0-b7c281060302]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c3895c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:5a:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449880, 'reachable_time': 26373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264666, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18aafe91-c619-4f96-9f63-07b1dee4f81c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:5a6d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449880, 'tstamp': 449880}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264667, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.575 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e34cdbc1-ce94-4c32-98c1-1dcb2073c2d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c3895c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:5a:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449880, 'reachable_time': 26373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264668, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.602 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa3c5a4-fc8f-4df6-a982-f55be1ea3fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.610 243456 DEBUG nova.compute.manager [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.610 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.611 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.611 243456 DEBUG oslo_concurrency.lockutils [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.612 243456 DEBUG nova.compute.manager [req-9c4ce7f3-9681-421f-bc8d-8b00dd9da0e1 req-85f8aca7-5cbc-48cb-a840-3ff8169c220d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Processing event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.658 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4b242d-30d8-49cb-aeae-d58773fb788c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c3895c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.660 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c3895c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 kernel: tap82c3895c-50: entered promiscuous mode
Feb 28 10:03:11 compute-0 NetworkManager[49805]: <info>  [1772272991.6625] manager: (tap82c3895c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.665 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82c3895c-50, col_values=(('external_ids', {'iface-id': '7c02020d-625b-46f3-9b28-7bd463d7a7e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.667 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.668 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08e73de8-2d55-4ead-8c45-2d532299b38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.669 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-82c3895c-56bd-4273-9e68-83c2fcec4532
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/82c3895c-56bd-4273-9e68-83c2fcec4532.pid.haproxy
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 82c3895c-56bd-4273-9e68-83c2fcec4532
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:03:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:11.669 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'env', 'PROCESS_TAG=haproxy-82c3895c-56bd-4273-9e68-83c2fcec4532', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82c3895c-56bd-4273-9e68-83c2fcec4532.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:03:11 compute-0 ovn_controller[146846]: 2026-02-28T10:03:11Z|00124|binding|INFO|Releasing lport 7c02020d-625b-46f3-9b28-7bd463d7a7e3 from this chassis (sb_readonly=0)
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767398388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.766 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.847 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.848 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.855 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:03:11 compute-0 nova_compute[243452]: 2026-02-28 10:03:11.855 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:03:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/767398388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.017 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.016694, ff148d2a-2dba-45c2-b726-78423f3ccedc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.017 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Started (Lifecycle Event)
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.021 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.025 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.028 243456 INFO nova.virt.libvirt.driver [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance spawned successfully.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.029 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.045 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:12 compute-0 podman[264744]: 2026-02-28 10:03:12.049032185 +0000 UTC m=+0.050522860 container create 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.057 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.061 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.061 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.062 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.062 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.063 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.063 243456 DEBUG nova.virt.libvirt.driver [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:03:12 compute-0 systemd[1]: Started libpod-conmon-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.097 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.098 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.0210967, ff148d2a-2dba-45c2-b726-78423f3ccedc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Paused (Lifecycle Event)
Feb 28 10:03:12 compute-0 podman[264744]: 2026-02-28 10:03:12.020678029 +0000 UTC m=+0.022168734 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.121 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.123 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4279MB free_disk=59.946378882043064GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.127 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772272992.0241928, ff148d2a-2dba-45c2-b726-78423f3ccedc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.127 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Resumed (Lifecycle Event)
Feb 28 10:03:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.132 243456 INFO nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 7.71 seconds to spawn the instance on the hypervisor.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.132 243456 DEBUG nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78619c8bdd71b488143c8f78ad8f7f85fee1a03814cf56ea253497a71198123b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:12 compute-0 podman[264744]: 2026-02-28 10:03:12.145201465 +0000 UTC m=+0.146692140 container init 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:03:12 compute-0 podman[264744]: 2026-02-28 10:03:12.151898553 +0000 UTC m=+0.153389228 container start 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.173 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.179 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:03:12 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : New worker (264765) forked
Feb 28 10:03:12 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : Loading success.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.213 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.227 243456 INFO nova.compute.manager [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 8.77 seconds to build instance.
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 540aa538-279f-4645-a7a1-03fa5c859440 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ff148d2a-2dba-45c2-b726-78423f3ccedc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.243 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.244 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.247 243456 DEBUG oslo_concurrency.lockutils [None req-276d7585-fc5c-4c7c-92f1-9d060c9715e0 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.295 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 10:03:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351380988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.900 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.905 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.917 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.935 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:03:12 compute-0 nova_compute[243452]: 2026-02-28 10:03:12.936 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:12 compute-0 ceph-mon[76304]: pgmap v1015: 305 pgs: 305 active+clean; 246 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:03:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/351380988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.468 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.469 243456 DEBUG nova.network.neutron [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.488 243456 DEBUG oslo_concurrency.lockutils [req-639bc7af-0173-4c0d-83ae-a6f0751f5f79 req-695502d2-27b7-4709-9322-e4d3d3d3eea6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.705 243456 DEBUG nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.706 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.707 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.707 243456 DEBUG oslo_concurrency.lockutils [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.708 243456 DEBUG nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.708 243456 WARNING nova.compute.manager [req-5b72a02c-7747-4a2e-b62b-d80d70864fce req-6392e1be-d9ac-4300-8c27-011f3f821cc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received unexpected event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with vm_state active and task_state None.
Feb 28 10:03:13 compute-0 nova_compute[243452]: 2026-02-28 10:03:13.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 255 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 28 10:03:14 compute-0 ovn_controller[146846]: 2026-02-28T10:03:14Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 10:03:14 compute-0 ovn_controller[146846]: 2026-02-28T10:03:14Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:c8:f4 10.100.0.9
Feb 28 10:03:14 compute-0 ceph-mon[76304]: pgmap v1016: 305 pgs: 305 active+clean; 255 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 28 10:03:14 compute-0 nova_compute[243452]: 2026-02-28 10:03:14.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 274 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.897 243456 DEBUG nova.compute.manager [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.898 243456 DEBUG nova.compute.manager [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing instance network info cache due to event network-changed-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.899 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.899 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.900 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Refreshing network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:03:15 compute-0 nova_compute[243452]: 2026-02-28 10:03:15.932 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:03:16 compute-0 ceph-mon[76304]: pgmap v1017: 305 pgs: 305 active+clean; 274 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Feb 28 10:03:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 184 op/s
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.518 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updated VIF entry in instance network info cache for port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.519 243456 DEBUG nova.network.neutron [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [{"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.543 243456 DEBUG nova.compute.manager [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.543 243456 DEBUG nova.compute.manager [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing instance network info cache due to event network-changed-f4b2f0ef-feda-437c-96c7-92a0645bceb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.544 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.544 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.545 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Refreshing network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.549 243456 DEBUG oslo_concurrency.lockutils [req-6a10630c-241d-4a8d-8316-6629f950df5d req-7a08af56-ef6e-41b7-8589-05ea121667d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff148d2a-2dba-45c2-b726-78423f3ccedc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:18 compute-0 nova_compute[243452]: 2026-02-28 10:03:18.648 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:18 compute-0 ceph-mon[76304]: pgmap v1018: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 184 op/s
Feb 28 10:03:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Feb 28 10:03:19 compute-0 nova_compute[243452]: 2026-02-28 10:03:19.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:19 compute-0 nova_compute[243452]: 2026-02-28 10:03:19.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:20 compute-0 nova_compute[243452]: 2026-02-28 10:03:20.777 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updated VIF entry in instance network info cache for port f4b2f0ef-feda-437c-96c7-92a0645bceb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:03:20 compute-0 nova_compute[243452]: 2026-02-28 10:03:20.778 243456 DEBUG nova.network.neutron [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [{"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:20 compute-0 nova_compute[243452]: 2026-02-28 10:03:20.795 243456 DEBUG oslo_concurrency.lockutils [req-fed232fb-7436-40bc-b43c-f0349b1e374c req-7cfb4b74-f315-4105-8caa-f6a70c07459a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-540aa538-279f-4645-a7a1-03fa5c859440" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:03:20 compute-0 ceph-mon[76304]: pgmap v1019: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Feb 28 10:03:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Feb 28 10:03:22 compute-0 podman[264798]: 2026-02-28 10:03:22.12386214 +0000 UTC m=+0.058011840 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:03:22 compute-0 podman[264797]: 2026-02-28 10:03:22.156415434 +0000 UTC m=+0.101652175 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:03:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:22 compute-0 ceph-mon[76304]: pgmap v1020: 305 pgs: 305 active+clean; 279 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Feb 28 10:03:23 compute-0 ovn_controller[146846]: 2026-02-28T10:03:23Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 10:03:23 compute-0 ovn_controller[146846]: 2026-02-28T10:03:23Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:ef:ce 10.100.0.5
Feb 28 10:03:23 compute-0 nova_compute[243452]: 2026-02-28 10:03:23.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 289 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.970 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.971 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.972 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.972 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.973 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.974 243456 INFO nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Terminating instance
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.977 243456 DEBUG nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:03:24 compute-0 nova_compute[243452]: 2026-02-28 10:03:24.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 ceph-mon[76304]: pgmap v1021: 305 pgs: 305 active+clean; 289 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Feb 28 10:03:25 compute-0 kernel: tapf4b2f0ef-fe (unregistering): left promiscuous mode
Feb 28 10:03:25 compute-0 NetworkManager[49805]: <info>  [1772273005.0249] device (tapf4b2f0ef-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:03:25 compute-0 ovn_controller[146846]: 2026-02-28T10:03:25Z|00125|binding|INFO|Releasing lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 from this chassis (sb_readonly=0)
Feb 28 10:03:25 compute-0 ovn_controller[146846]: 2026-02-28T10:03:25Z|00126|binding|INFO|Setting lport f4b2f0ef-feda-437c-96c7-92a0645bceb9 down in Southbound
Feb 28 10:03:25 compute-0 ovn_controller[146846]: 2026-02-28T10:03:25Z|00127|binding|INFO|Removing iface tapf4b2f0ef-fe ovn-installed in OVS
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.042 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c8:f4 10.100.0.9'], port_security=['fa:16:3e:bd:c8:f4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '540aa538-279f-4645-a7a1-03fa5c859440', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ee8273ac001494c973c44a7dd357180', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26c27b27-9608-4c57-8427-9c45b7a72eae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c643f549-0af1-46f6-9870-09f9117f13fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f4b2f0ef-feda-437c-96c7-92a0645bceb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.045 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f4b2f0ef-feda-437c-96c7-92a0645bceb9 in datapath a11c3342-4f74-40c1-a9f3-ae18f9be9d19 unbound from our chassis
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.047 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a11c3342-4f74-40c1-a9f3-ae18f9be9d19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f739466-3199-4aec-9bc4-a802a1e070c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 namespace which is not needed anymore
Feb 28 10:03:25 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb 28 10:03:25 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Consumed 12.900s CPU time.
Feb 28 10:03:25 compute-0 systemd-machined[209480]: Machine qemu-26-instance-00000016 terminated.
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : haproxy version is 2.8.14-c23fe91
Feb 28 10:03:25 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [NOTICE]   (264235) : path to executable is /usr/sbin/haproxy
Feb 28 10:03:25 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [WARNING]  (264235) : Exiting Master process...
Feb 28 10:03:25 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [ALERT]    (264235) : Current worker (264237) exited with code 143 (Terminated)
Feb 28 10:03:25 compute-0 neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19[264231]: [WARNING]  (264235) : All workers exited. Exiting... (0)
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 systemd[1]: libpod-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope: Deactivated successfully.
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Instance destroyed successfully.
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.219 243456 DEBUG nova.objects.instance [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lazy-loading 'resources' on Instance uuid 540aa538-279f-4645-a7a1-03fa5c859440 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:25 compute-0 podman[264863]: 2026-02-28 10:03:25.21852309 +0000 UTC m=+0.064184174 container died 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.235 243456 DEBUG nova.virt.libvirt.vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:02:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1652183905',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-165218390',id=22,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:03:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ee8273ac001494c973c44a7dd357180',ramdisk_id='',reservation_id='r-4t9kpqze',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1398433179-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:03:02Z,user_data=None,user_id='678df28a33b147768bd6e1e5d3b17ccf',uuid=540aa538-279f-4645-a7a1-03fa5c859440,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.236 243456 DEBUG nova.network.os_vif_util [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converting VIF {"id": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "address": "fa:16:3e:bd:c8:f4", "network": {"id": "a11c3342-4f74-40c1-a9f3-ae18f9be9d19", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1076482414-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ee8273ac001494c973c44a7dd357180", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4b2f0ef-fe", "ovs_interfaceid": "f4b2f0ef-feda-437c-96c7-92a0645bceb9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.237 243456 DEBUG nova.network.os_vif_util [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.238 243456 DEBUG os_vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.240 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4b2f0ef-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.246 243456 INFO os_vif [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:c8:f4,bridge_name='br-int',has_traffic_filtering=True,id=f4b2f0ef-feda-437c-96c7-92a0645bceb9,network=Network(a11c3342-4f74-40c1-a9f3-ae18f9be9d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4b2f0ef-fe')
Feb 28 10:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7-userdata-shm.mount: Deactivated successfully.
Feb 28 10:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3537f5f8d55cebfe7fda6f58cc62d454caeb57ea64eac96f07c6f8abf46f3522-merged.mount: Deactivated successfully.
Feb 28 10:03:25 compute-0 podman[264863]: 2026-02-28 10:03:25.282580938 +0000 UTC m=+0.128242002 container cleanup 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:03:25 compute-0 systemd[1]: libpod-conmon-15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7.scope: Deactivated successfully.
Feb 28 10:03:25 compute-0 podman[264920]: 2026-02-28 10:03:25.351845413 +0000 UTC m=+0.051100066 container remove 15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.355 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5747315-5dcd-47e4-b95d-6a9de3d89318]: (4, ('Sat Feb 28 10:03:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 (15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7)\n15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7\nSat Feb 28 10:03:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 (15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7)\n15831f98989c7934c2c7f4d840c913b23ae53e92130eb01781eb7fe0015512b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.357 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eec7ce6f-5b92-49b8-9ad6-dd001d479f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.357 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa11c3342-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 kernel: tapa11c3342-40: left promiscuous mode
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a26e793-7cd8-466c-83c2-7147d7b5776a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 nova_compute[243452]: 2026-02-28 10:03:25.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.378 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f570462a-0bc9-473f-99ad-d36906f78085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[779f8302-008d-4d9a-a7b7-879d2d36a74b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f893f6d-6904-4287-8380-bba15863381d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448854, 'reachable_time': 23949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264936, 'error': None, 'target': 'ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.393 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a11c3342-4f74-40c1-a9f3-ae18f9be9d19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:03:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:25.393 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4893fe-fbf1-42e7-8507-e550304b36ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:25 compute-0 systemd[1]: run-netns-ovnmeta\x2da11c3342\x2d4f74\x2d40c1\x2da9f3\x2dae18f9be9d19.mount: Deactivated successfully.
Feb 28 10:03:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 294 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 191 op/s
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.194 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.194 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.195 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.195 243456 DEBUG oslo_concurrency.lockutils [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.196 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.196 243456 DEBUG nova.compute.manager [req-32d7a4b0-c853-4c8e-98d8-e141554d0a3f req-f8e02c85-f63a-47d6-a368-c91127a3d178 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-unplugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:03:26 compute-0 ceph-mon[76304]: pgmap v1022: 305 pgs: 305 active+clean; 294 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 191 op/s
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.206 243456 INFO nova.virt.libvirt.driver [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deleting instance files /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440_del
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.207 243456 INFO nova.virt.libvirt.driver [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deletion of /var/lib/nova/instances/540aa538-279f-4645-a7a1-03fa5c859440_del complete
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.272 243456 INFO nova.compute.manager [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 1.29 seconds to destroy the instance on the hypervisor.
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG oslo.service.loopingcall [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.273 243456 DEBUG nova.network.neutron [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:26 compute-0 nova_compute[243452]: 2026-02-28 10:03:26.995 243456 DEBUG nova.network.neutron [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.019 243456 INFO nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Took 0.75 seconds to deallocate network for instance.
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.066 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.094 243456 DEBUG nova.compute.manager [req-4fc6dfbb-91b3-47f8-b1b4-000cc56359c7 req-c489a0e9-7262-4d24-965e-60895216523d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-deleted-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.129 243456 DEBUG oslo_concurrency.processutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082020615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.693 243456 DEBUG oslo_concurrency.processutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.699 243456 DEBUG nova.compute.provider_tree [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2082020615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.722 243456 DEBUG nova.scheduler.client.report [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.745 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.778 243456 INFO nova.scheduler.client.report [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Deleted allocations for instance 540aa538-279f-4645-a7a1-03fa5c859440
Feb 28 10:03:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 28 10:03:27 compute-0 nova_compute[243452]: 2026-02-28 10:03:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-60ed31b5-c271-462d-b16b-f64cc5319b18 678df28a33b147768bd6e1e5d3b17ccf 2ee8273ac001494c973c44a7dd357180 - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.481 243456 DEBUG nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.481 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "540aa538-279f-4645-a7a1-03fa5c859440-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG oslo_concurrency.lockutils [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "540aa538-279f-4645-a7a1-03fa5c859440-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.482 243456 DEBUG nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] No waiting events found dispatching network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:28 compute-0 nova_compute[243452]: 2026-02-28 10:03:28.483 243456 WARNING nova.compute.manager [req-be5a0d48-ce02-481e-9546-f10293ea9e76 req-7faf5989-7f44-42c6-a1a5-8c784686f604 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Received unexpected event network-vif-plugged-f4b2f0ef-feda-437c-96c7-92a0645bceb9 for instance with vm_state deleted and task_state None.
Feb 28 10:03:28 compute-0 ceph-mon[76304]: pgmap v1023: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:03:29
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:03:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Feb 28 10:03:30 compute-0 nova_compute[243452]: 2026-02-28 10:03:30.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:03:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:03:30 compute-0 ceph-mon[76304]: pgmap v1024: 305 pgs: 305 active+clean; 255 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Feb 28 10:03:31 compute-0 ovn_controller[146846]: 2026-02-28T10:03:31Z|00128|binding|INFO|Releasing lport 7c02020d-625b-46f3-9b28-7bd463d7a7e3 from this chassis (sb_readonly=0)
Feb 28 10:03:31 compute-0 nova_compute[243452]: 2026-02-28 10:03:31.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:03:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:32 compute-0 ceph-mon[76304]: pgmap v1025: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.133 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.133 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.134 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.134 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.135 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.137 243456 INFO nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Terminating instance
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.139 243456 DEBUG nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:03:33 compute-0 kernel: tap86a5b3dc-16 (unregistering): left promiscuous mode
Feb 28 10:03:33 compute-0 NetworkManager[49805]: <info>  [1772273013.1872] device (tap86a5b3dc-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:03:33 compute-0 ovn_controller[146846]: 2026-02-28T10:03:33Z|00129|binding|INFO|Releasing lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 from this chassis (sb_readonly=0)
Feb 28 10:03:33 compute-0 ovn_controller[146846]: 2026-02-28T10:03:33Z|00130|binding|INFO|Setting lport 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 down in Southbound
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ovn_controller[146846]: 2026-02-28T10:03:33Z|00131|binding|INFO|Removing iface tap86a5b3dc-16 ovn-installed in OVS
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.205 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:ef:ce 10.100.0.5'], port_security=['fa:16:3e:e8:ef:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff148d2a-2dba-45c2-b726-78423f3ccedc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c3895c-56bd-4273-9e68-83c2fcec4532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd177a65e6274abe9b6091a2f34c319c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '177d1676-9b22-4402-a0f3-24e0d0ec2ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab32c8ab-3085-42dc-845f-ce2535981325, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.209 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 in datapath 82c3895c-56bd-4273-9e68-83c2fcec4532 unbound from our chassis
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.212 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c3895c-56bd-4273-9e68-83c2fcec4532, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.213 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c7e9fa-e8d5-4452-b9aa-1f31abfd1f5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 namespace which is not needed anymore
Feb 28 10:03:33 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb 28 10:03:33 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Consumed 11.799s CPU time.
Feb 28 10:03:33 compute-0 systemd-machined[209480]: Machine qemu-27-instance-00000017 terminated.
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.355 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : haproxy version is 2.8.14-c23fe91
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [NOTICE]   (264763) : path to executable is /usr/sbin/haproxy
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : Exiting Master process...
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : Exiting Master process...
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [ALERT]    (264763) : Current worker (264765) exited with code 143 (Terminated)
Feb 28 10:03:33 compute-0 neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532[264759]: [WARNING]  (264763) : All workers exited. Exiting... (0)
Feb 28 10:03:33 compute-0 systemd[1]: libpod-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope: Deactivated successfully.
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.377 243456 INFO nova.virt.libvirt.driver [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Instance destroyed successfully.
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.377 243456 DEBUG nova.objects.instance [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lazy-loading 'resources' on Instance uuid ff148d2a-2dba-45c2-b726-78423f3ccedc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:33 compute-0 podman[264985]: 2026-02-28 10:03:33.382264622 +0000 UTC m=+0.063655038 container died 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.392 243456 DEBUG nova.virt.libvirt.vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1118743636',display_name='tempest-ServersTestJSON-server-1118743636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1118743636',id=23,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJCbaARgz2MiKHmjHHxTu8VMbZv/za78QmoiFrm1b2dtjLbi0sfz0k8qm0DAA2vOdrkhv2c4afOR1apvGQL9ZXBiG6t7d6JyxwbG3gSOY/aDpdVVM9RXPjLMeE9AYf3zg==',key_name='tempest-keypair-178399307',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd177a65e6274abe9b6091a2f34c319c',ramdisk_id='',reservation_id='r-1mssv15c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-595608049',owner_user_name='tempest-ServersTestJSON-595608049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:03:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2973b4a88c3d4417a732901954bb6c6a',uuid=ff148d2a-2dba-45c2-b726-78423f3ccedc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.393 243456 DEBUG nova.network.os_vif_util [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converting VIF {"id": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "address": "fa:16:3e:e8:ef:ce", "network": {"id": "82c3895c-56bd-4273-9e68-83c2fcec4532", "bridge": "br-int", "label": "tempest-ServersTestJSON-179965434-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd177a65e6274abe9b6091a2f34c319c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a5b3dc-16", "ovs_interfaceid": "86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.394 243456 DEBUG nova.network.os_vif_util [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.394 243456 DEBUG os_vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.396 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86a5b3dc-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.398 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.403 243456 INFO os_vif [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:ef:ce,bridge_name='br-int',has_traffic_filtering=True,id=86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0,network=Network(82c3895c-56bd-4273-9e68-83c2fcec4532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a5b3dc-16')
Feb 28 10:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738-userdata-shm.mount: Deactivated successfully.
Feb 28 10:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-78619c8bdd71b488143c8f78ad8f7f85fee1a03814cf56ea253497a71198123b-merged.mount: Deactivated successfully.
Feb 28 10:03:33 compute-0 podman[264985]: 2026-02-28 10:03:33.418194771 +0000 UTC m=+0.099585147 container cleanup 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:03:33 compute-0 systemd[1]: libpod-conmon-177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738.scope: Deactivated successfully.
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 podman[265039]: 2026-02-28 10:03:33.4751242 +0000 UTC m=+0.038792011 container remove 177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.478 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fab63a-a275-4e28-b713-ab42163377aa]: (4, ('Sat Feb 28 10:03:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 (177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738)\n177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738\nSat Feb 28 10:03:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 (177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738)\n177fb7c6ebf347437ae453cefbd29e37d658785fdf932f9a8c4f67ee77388738\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edc292f6-0a2f-4240-8529-77923eeca17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c3895c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 kernel: tap82c3895c-50: left promiscuous mode
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.485 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a276b11-9391-483a-835b-e6865d28fc6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.502 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b249c29-b774-448d-8f12-1fb6b8f7061d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.503 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0352f30f-e1ac-4118-a313-e19758236b71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.518 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1731248-bd35-419a-be04-84a83f51de3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449874, 'reachable_time': 16932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265057, 'error': None, 'target': 'ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d82c3895c\x2d56bd\x2d4273\x2d9e68\x2d83c2fcec4532.mount: Deactivated successfully.
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.521 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82c3895c-56bd-4273-9e68-83c2fcec4532 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:03:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:33.522 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b93491-b07a-40a7-9f65-67c1d0f49f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.638 243456 INFO nova.virt.libvirt.driver [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deleting instance files /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc_del
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.639 243456 INFO nova.virt.libvirt.driver [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deletion of /var/lib/nova/instances/ff148d2a-2dba-45c2-b726-78423f3ccedc_del complete
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.649 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.650 243456 DEBUG oslo_concurrency.lockutils [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.651 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.651 243456 DEBUG nova.compute.manager [req-d7736c44-93c9-44ca-816a-bb919df1d833 req-6f9ec5cb-783e-49b5-8ebe-37ea7cf8578a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-unplugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.702 243456 INFO nova.compute.manager [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 0.56 seconds to destroy the instance on the hypervisor.
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.703 243456 DEBUG oslo.service.loopingcall [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.704 243456 DEBUG nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.704 243456 DEBUG nova.network.neutron [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 nova_compute[243452]: 2026-02-28 10:03:33.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:03:34 compute-0 ceph-mon[76304]: pgmap v1026: 305 pgs: 305 active+clean; 233 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.590 243456 DEBUG nova.network.neutron [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.612 243456 INFO nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Took 1.91 seconds to deallocate network for instance.
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.666 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.667 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.725 243456 DEBUG oslo_concurrency.processutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 192 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 86 op/s
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.847 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.848 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.848 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.849 243456 DEBUG oslo_concurrency.lockutils [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.849 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] No waiting events found dispatching network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.850 243456 WARNING nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received unexpected event network-vif-plugged-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 for instance with vm_state deleted and task_state None.
Feb 28 10:03:35 compute-0 nova_compute[243452]: 2026-02-28 10:03:35.850 243456 DEBUG nova.compute.manager [req-2943ea0a-60ce-42f5-b39c-4510e3b8f0d1 req-42582482-bf23-4487-afd6-2c01ccd0cecc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Received event network-vif-deleted-86a5b3dc-169b-4cc4-8017-d4bfb7ad58a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727393592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.239 243456 DEBUG oslo_concurrency.processutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.246 243456 DEBUG nova.compute.provider_tree [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.267 243456 DEBUG nova.scheduler.client.report [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.286 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.313 243456 INFO nova.scheduler.client.report [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Deleted allocations for instance ff148d2a-2dba-45c2-b726-78423f3ccedc
Feb 28 10:03:36 compute-0 nova_compute[243452]: 2026-02-28 10:03:36.372 243456 DEBUG oslo_concurrency.lockutils [None req-ca6202ab-89c4-4b3d-b2c4-4ca7c180b4ac 2973b4a88c3d4417a732901954bb6c6a dd177a65e6274abe9b6091a2f34c319c - - default default] Lock "ff148d2a-2dba-45c2-b726-78423f3ccedc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:36 compute-0 ceph-mon[76304]: pgmap v1027: 305 pgs: 305 active+clean; 192 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 86 op/s
Feb 28 10:03:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2727393592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 65 KiB/s wr, 48 op/s
Feb 28 10:03:38 compute-0 nova_compute[243452]: 2026-02-28 10:03:38.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:38 compute-0 nova_compute[243452]: 2026-02-28 10:03:38.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 28 10:03:38 compute-0 ceph-mon[76304]: pgmap v1028: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 65 KiB/s wr, 48 op/s
Feb 28 10:03:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 28 10:03:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 28 10:03:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Feb 28 10:03:39 compute-0 ceph-mon[76304]: osdmap e134: 3 total, 3 up, 3 in
Feb 28 10:03:40 compute-0 nova_compute[243452]: 2026-02-28 10:03:40.217 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273005.2148383, 540aa538-279f-4645-a7a1-03fa5c859440 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:40 compute-0 nova_compute[243452]: 2026-02-28 10:03:40.217 243456 INFO nova.compute.manager [-] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] VM Stopped (Lifecycle Event)
Feb 28 10:03:40 compute-0 nova_compute[243452]: 2026-02-28 10:03:40.235 243456 DEBUG nova.compute.manager [None req-a4bea8de-c5ca-4cfa-a4e4-4c1351cc6e6b - - - - - -] [instance: 540aa538-279f-4645-a7a1-03fa5c859440] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.2897490401857024e-06 of space, bias 1.0, pg target 0.0012869247120557107 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00249102496461197 of space, bias 1.0, pg target 0.7473074893835909 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.63424063601391e-07 of space, bias 4.0, pg target 0.0011561088763216692 quantized to 16 (current 16)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:03:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:03:40 compute-0 ceph-mon[76304]: pgmap v1030: 305 pgs: 305 active+clean; 153 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Feb 28 10:03:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 48 op/s
Feb 28 10:03:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:42 compute-0 ceph-mon[76304]: pgmap v1031: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 48 op/s
Feb 28 10:03:43 compute-0 sudo[265081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:03:43 compute-0 sudo[265081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:43 compute-0 sudo[265081]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:43 compute-0 sudo[265106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:03:43 compute-0 sudo[265106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:43 compute-0 nova_compute[243452]: 2026-02-28 10:03:43.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:43 compute-0 nova_compute[243452]: 2026-02-28 10:03:43.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:43 compute-0 sudo[265106]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:03:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:03:43 compute-0 sudo[265161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:03:43 compute-0 sudo[265161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:43 compute-0 sudo[265161]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:43 compute-0 sudo[265186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:03:43 compute-0 sudo[265186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:03:43 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.050549443 +0000 UTC m=+0.025958420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.165588173 +0000 UTC m=+0.140997090 container create f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:03:44 compute-0 systemd[1]: Started libpod-conmon-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope.
Feb 28 10:03:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.433127676 +0000 UTC m=+0.408536663 container init f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.443060745 +0000 UTC m=+0.418469672 container start f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:03:44 compute-0 systemd[1]: libpod-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope: Deactivated successfully.
Feb 28 10:03:44 compute-0 magical_hamilton[265239]: 167 167
Feb 28 10:03:44 compute-0 conmon[265239]: conmon f57f7e15a370c063a0fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope/container/memory.events
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.561682206 +0000 UTC m=+0.537091103 container attach f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.562739715 +0000 UTC m=+0.538148602 container died f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:03:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b3bd29e9b417b4efa20e3dfed07f3096db3f68c2cf0f7c7452627c969d875c7-merged.mount: Deactivated successfully.
Feb 28 10:03:44 compute-0 podman[265223]: 2026-02-28 10:03:44.863998355 +0000 UTC m=+0.839407282 container remove f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:03:44 compute-0 systemd[1]: libpod-conmon-f57f7e15a370c063a0fde7d4c5f091a73acf9af10082a97f8086f55feb81c66e.scope: Deactivated successfully.
Feb 28 10:03:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:44.954 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:03:44 compute-0 nova_compute[243452]: 2026-02-28 10:03:44.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:44.957 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:03:44 compute-0 ceph-mon[76304]: pgmap v1032: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.054609287 +0000 UTC m=+0.047945397 container create 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:03:45 compute-0 systemd[1]: Started libpod-conmon-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope.
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.029276346 +0000 UTC m=+0.022612486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.166215261 +0000 UTC m=+0.159551331 container init 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.173926488 +0000 UTC m=+0.167262558 container start 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.177040115 +0000 UTC m=+0.170376185 container attach 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:03:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:03:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:03:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:03:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:03:45 compute-0 bold_cannon[265281]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:03:45 compute-0 bold_cannon[265281]: --> All data devices are unavailable
Feb 28 10:03:45 compute-0 systemd[1]: libpod-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope: Deactivated successfully.
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.701736548 +0000 UTC m=+0.695072618 container died 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec3fd47c2afe4e3b6c3e03c55c1a02ec6aea55f6a63e6c5948b5f7c4755d6a67-merged.mount: Deactivated successfully.
Feb 28 10:03:45 compute-0 podman[265264]: 2026-02-28 10:03:45.757199246 +0000 UTC m=+0.750535326 container remove 06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:03:45 compute-0 systemd[1]: libpod-conmon-06310dba18ae71fb45231eecc2942685c2a2b80603c9c14f7752fc89681eb8d5.scope: Deactivated successfully.
Feb 28 10:03:45 compute-0 sudo[265186]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Feb 28 10:03:45 compute-0 sudo[265315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:03:45 compute-0 sudo[265315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:45 compute-0 sudo[265315]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:45 compute-0 sudo[265340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:03:45 compute-0 sudo[265340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:03:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2495899353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.255787286 +0000 UTC m=+0.052212459 container create 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:03:46 compute-0 systemd[1]: Started libpod-conmon-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope.
Feb 28 10:03:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.237728539 +0000 UTC m=+0.034153712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.345581881 +0000 UTC m=+0.142007104 container init 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.355744656 +0000 UTC m=+0.152169829 container start 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.35977264 +0000 UTC m=+0.156197813 container attach 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:03:46 compute-0 nifty_fermi[265394]: 167 167
Feb 28 10:03:46 compute-0 systemd[1]: libpod-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope: Deactivated successfully.
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.362556638 +0000 UTC m=+0.158981821 container died 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ced52aa93bf15c864890e2121acc50bec37f38c11cf08c64666a310e61d09dd-merged.mount: Deactivated successfully.
Feb 28 10:03:46 compute-0 podman[265378]: 2026-02-28 10:03:46.412032129 +0000 UTC m=+0.208457292 container remove 19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:03:46 compute-0 systemd[1]: libpod-conmon-19374abdb0ece56ecd7d0372f844b1e116e2b1c9b9c86c93ce0094010f453d50.scope: Deactivated successfully.
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.575778892 +0000 UTC m=+0.051203460 container create c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:03:46 compute-0 systemd[1]: Started libpod-conmon-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope.
Feb 28 10:03:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.551236932 +0000 UTC m=+0.026661560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.64682626 +0000 UTC m=+0.122250828 container init c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.651510921 +0000 UTC m=+0.126935479 container start c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.65573282 +0000 UTC m=+0.131157368 container attach c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:03:46 compute-0 exciting_cannon[265435]: {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     "0": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "devices": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "/dev/loop3"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             ],
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_name": "ceph_lv0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_size": "21470642176",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "name": "ceph_lv0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "tags": {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_name": "ceph",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.crush_device_class": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.encrypted": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.objectstore": "bluestore",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_id": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.vdo": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.with_tpm": "0"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             },
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "vg_name": "ceph_vg0"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         }
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     ],
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     "1": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "devices": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "/dev/loop4"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             ],
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_name": "ceph_lv1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_size": "21470642176",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "name": "ceph_lv1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "tags": {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_name": "ceph",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.crush_device_class": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.encrypted": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.objectstore": "bluestore",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_id": "1",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.vdo": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.with_tpm": "0"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             },
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "vg_name": "ceph_vg1"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         }
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     ],
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     "2": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "devices": [
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "/dev/loop5"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             ],
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_name": "ceph_lv2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_size": "21470642176",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "name": "ceph_lv2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "tags": {
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.cluster_name": "ceph",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.crush_device_class": "",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.encrypted": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.objectstore": "bluestore",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osd_id": "2",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.vdo": "0",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:                 "ceph.with_tpm": "0"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             },
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "type": "block",
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:             "vg_name": "ceph_vg2"
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:         }
Feb 28 10:03:46 compute-0 exciting_cannon[265435]:     ]
Feb 28 10:03:46 compute-0 exciting_cannon[265435]: }
Feb 28 10:03:46 compute-0 systemd[1]: libpod-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope: Deactivated successfully.
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.906699706 +0000 UTC m=+0.382124264 container died c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1921176faa70651d2d0bbcde4922988f381bf3bb7673891f1adca3458ab367c4-merged.mount: Deactivated successfully.
Feb 28 10:03:46 compute-0 podman[265419]: 2026-02-28 10:03:46.945457605 +0000 UTC m=+0.420882163 container remove c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:03:46 compute-0 systemd[1]: libpod-conmon-c77f1f7323b8191cab09b144b7976854b20e132e6b3c81ea5fbc2e2f81de9682.scope: Deactivated successfully.
Feb 28 10:03:46 compute-0 sudo[265340]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:47 compute-0 ceph-mon[76304]: pgmap v1033: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Feb 28 10:03:47 compute-0 sudo[265456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:03:47 compute-0 sudo[265456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:47 compute-0 sudo[265456]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:47 compute-0 sudo[265481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:03:47 compute-0 sudo[265481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.328180005 +0000 UTC m=+0.048491664 container create 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:03:47 compute-0 systemd[1]: Started libpod-conmon-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope.
Feb 28 10:03:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.303753458 +0000 UTC m=+0.024065207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.40627733 +0000 UTC m=+0.126589019 container init 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.411798446 +0000 UTC m=+0.132110105 container start 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.415659454 +0000 UTC m=+0.135971113 container attach 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:03:47 compute-0 systemd[1]: libpod-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope: Deactivated successfully.
Feb 28 10:03:47 compute-0 adoring_jepsen[265534]: 167 167
Feb 28 10:03:47 compute-0 conmon[265534]: conmon 9410ab851d8db2ead2bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope/container/memory.events
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.4176367 +0000 UTC m=+0.137948359 container died 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1787387d80658f186c5d8bee6d96b39d8742bd28a8f190abd8149e2a57d78d79-merged.mount: Deactivated successfully.
Feb 28 10:03:47 compute-0 podman[265518]: 2026-02-28 10:03:47.462639405 +0000 UTC m=+0.182951094 container remove 9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:03:47 compute-0 systemd[1]: libpod-conmon-9410ab851d8db2ead2bbba1a29c865d7c081b28dce6f0c937f46d8c2902e3ca0.scope: Deactivated successfully.
Feb 28 10:03:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:47 compute-0 podman[265560]: 2026-02-28 10:03:47.637089829 +0000 UTC m=+0.036665492 container create b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:03:47 compute-0 systemd[1]: Started libpod-conmon-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope.
Feb 28 10:03:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:03:47 compute-0 podman[265560]: 2026-02-28 10:03:47.622290183 +0000 UTC m=+0.021865886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:03:47 compute-0 podman[265560]: 2026-02-28 10:03:47.735697751 +0000 UTC m=+0.135273504 container init b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:03:47 compute-0 podman[265560]: 2026-02-28 10:03:47.747226836 +0000 UTC m=+0.146802539 container start b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 28 10:03:47 compute-0 podman[265560]: 2026-02-28 10:03:47.751227978 +0000 UTC m=+0.150803671 container attach b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 28 10:03:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Feb 28 10:03:48 compute-0 lvm[265653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:03:48 compute-0 lvm[265653]: VG ceph_vg0 finished
Feb 28 10:03:48 compute-0 lvm[265656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:03:48 compute-0 lvm[265656]: VG ceph_vg1 finished
Feb 28 10:03:48 compute-0 lvm[265658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:03:48 compute-0 lvm[265658]: VG ceph_vg2 finished
Feb 28 10:03:48 compute-0 nova_compute[243452]: 2026-02-28 10:03:48.372 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273013.371014, ff148d2a-2dba-45c2-b726-78423f3ccedc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:03:48 compute-0 nova_compute[243452]: 2026-02-28 10:03:48.373 243456 INFO nova.compute.manager [-] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] VM Stopped (Lifecycle Event)
Feb 28 10:03:48 compute-0 nova_compute[243452]: 2026-02-28 10:03:48.391 243456 DEBUG nova.compute.manager [None req-d1f4f9de-3538-4026-91f7-72b1be633110 - - - - - -] [instance: ff148d2a-2dba-45c2-b726-78423f3ccedc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:03:48 compute-0 nova_compute[243452]: 2026-02-28 10:03:48.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:48 compute-0 exciting_bassi[265576]: {}
Feb 28 10:03:48 compute-0 nova_compute[243452]: 2026-02-28 10:03:48.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:48 compute-0 systemd[1]: libpod-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Deactivated successfully.
Feb 28 10:03:48 compute-0 systemd[1]: libpod-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Consumed 1.089s CPU time.
Feb 28 10:03:48 compute-0 podman[265560]: 2026-02-28 10:03:48.501770648 +0000 UTC m=+0.901346331 container died b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-009dd558833d1dbc48695efc60d1266ead07a861a6ae9781c656f12ea9eaa613-merged.mount: Deactivated successfully.
Feb 28 10:03:48 compute-0 podman[265560]: 2026-02-28 10:03:48.543995375 +0000 UTC m=+0.943571048 container remove b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bassi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:03:48 compute-0 systemd[1]: libpod-conmon-b56185423b3cf130b1097bac11e114dd7d926202ede8edcdf6f8b911d89d99b2.scope: Deactivated successfully.
Feb 28 10:03:48 compute-0 sudo[265481]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:03:48 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:03:48 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:48 compute-0 sudo[265672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:03:48 compute-0 sudo[265672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:03:48 compute-0 sudo[265672]: pam_unix(sudo:session): session closed for user root
Feb 28 10:03:49 compute-0 ceph-mon[76304]: pgmap v1034: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Feb 28 10:03:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:03:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Feb 28 10:03:51 compute-0 ceph-mon[76304]: pgmap v1035: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Feb 28 10:03:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Feb 28 10:03:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:52.959 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:03:53 compute-0 ceph-mon[76304]: pgmap v1036: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Feb 28 10:03:53 compute-0 podman[265698]: 2026-02-28 10:03:53.166954942 +0000 UTC m=+0.101686260 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:03:53 compute-0 podman[265697]: 2026-02-28 10:03:53.175756199 +0000 UTC m=+0.110385724 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:03:53 compute-0 nova_compute[243452]: 2026-02-28 10:03:53.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:53 compute-0 nova_compute[243452]: 2026-02-28 10:03:53.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb 28 10:03:55 compute-0 ceph-mon[76304]: pgmap v1037: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.427 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.428 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.428 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.429 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.429 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.430 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.461 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.465 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.467 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.580 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.581 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.583 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.586 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.591 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.591 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:03:55 compute-0 nova_compute[243452]: 2026-02-28 10:03:55.705 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail
Feb 28 10:03:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668771355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.337 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.345 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.362 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.383 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.384 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.387 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.393 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.394 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:03:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 28 10:03:56 compute-0 ceph-mon[76304]: pgmap v1038: 305 pgs: 305 active+clean; 153 MiB data, 413 MiB used, 60 GiB / 60 GiB avail
Feb 28 10:03:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2668771355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 28 10:03:56 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.463 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.463 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.480 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.490 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.501 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.511 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.512 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.564 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.609 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.611 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.612 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating image(s)
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.643 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.678 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.710 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.715 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.735 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.788 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.789 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.790 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.791 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.822 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.827 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.888 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:56 compute-0 nova_compute[243452]: 2026-02-28 10:03:56.976 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.066 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.139 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.230 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.249 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.250 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Ensure instance console log exists: /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.250 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.251 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.251 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:57 compute-0 ceph-mon[76304]: osdmap e135: 3 total, 3 up, 3 in
Feb 28 10:03:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013276614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.466 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.472 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.487 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.525 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.526 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.531 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.540 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.540 243456 INFO nova.compute.claims [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.591 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.592 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.618 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:03:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.645 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.660 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Successfully created port: b9c5316d-0f6b-4a56-8273-692ee1492259 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.704 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.740 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.742 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.743 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating image(s)
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.772 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.802 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.835 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.839 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.842 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:03:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.855 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.886 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.886 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.887 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.887 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.908 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:57 compute-0 nova_compute[243452]: 2026-02-28 10:03:57.911 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 95805a4e-8bc0-47e3-981a-dfe27127a270_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.194 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 95805a4e-8bc0-47e3-981a-dfe27127a270_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.246 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:03:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:03:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115759404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.279 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.283 243456 DEBUG nova.compute.provider_tree [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.315 243456 DEBUG nova.scheduler.client.report [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.323 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.336 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.337 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Ensure instance console log exists: /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.337 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.338 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.339 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.340 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.391 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Successfully created port: dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.393 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.394 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.416 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.437 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:03:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2013276614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:58 compute-0 ceph-mon[76304]: pgmap v1040: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 10:03:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4115759404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.533 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.534 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.535 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating image(s)
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.556 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.581 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.607 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.612 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.642 243456 DEBUG nova.policy [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a44c3f76b144221b23743a554a4a839', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a4f47b449434c708378388c3e76610e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.683 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.684 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.684 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.685 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.708 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.713 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.782 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Successfully updated port: b9c5316d-0f6b-4a56-8273-692ee1492259 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.806 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.807 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.807 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.950 243456 DEBUG nova.compute.manager [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-changed-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.950 243456 DEBUG nova.compute.manager [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Refreshing instance network info cache due to event network-changed-b9c5316d-0f6b-4a56-8273-692ee1492259. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.951 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:58 compute-0 nova_compute[243452]: 2026-02-28 10:03:58.994 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.057 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] resizing rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.167 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.175 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'migration_context' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.195 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.195 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Ensure instance console log exists: /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.196 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.388 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Successfully created port: 5051af6b-3383-4342-be85-7f3b44b527a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.494 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Successfully updated port: dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.510 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.511 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.511 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.648 243456 DEBUG nova.compute.manager [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-changed-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.649 243456 DEBUG nova.compute.manager [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Refreshing instance network info cache due to event network-changed-dca369b1-9f27-4c57-9e18-4a62a8bd95e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.649 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:03:59 compute-0 nova_compute[243452]: 2026-02-28 10:03:59.731 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:03:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.489 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.529 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.529 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance network_info: |[{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.531 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.531 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Refreshing network info cache for port b9c5316d-0f6b-4a56-8273-692ee1492259 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.536 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start _get_guest_xml network_info=[{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.542 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.550 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.551 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.559 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.560 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.560 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.561 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.562 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.563 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.564 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.564 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.565 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.566 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.570 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.592 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Successfully updated port: 5051af6b-3383-4342-be85-7f3b44b527a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquired lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.619 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.755 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.782 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.783 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance network_info: |[{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.784 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.785 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Refreshing network info cache for port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.793 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start _get_guest_xml network_info=[{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.801 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.806 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.807 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.815 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.824 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.824 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.825 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.825 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.826 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.827 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.828 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.828 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.829 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.830 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:00 compute-0 nova_compute[243452]: 2026-02-28 10:04:00.835 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:00 compute-0 ceph-mon[76304]: pgmap v1041: 305 pgs: 305 active+clean; 170 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Feb 28 10:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167639053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.115 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.140 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.145 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287769195' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.409 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.450 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.456 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196627638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.696 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.699 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tem
pest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:56Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.699 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.701 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.703 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.724 243456 DEBUG nova.compute.manager [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-changed-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.725 243456 DEBUG nova.compute.manager [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Refreshing instance network info cache due to event network-changed-5051af6b-3383-4342-be85-7f3b44b527a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.725 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.729 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <uuid>c38e4584-8a86-41a3-bc10-2a35205cf7c7</uuid>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <name>instance-0000001a</name>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-3</nova:name>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:00</nova:creationTime>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <nova:port uuid="b9c5316d-0f6b-4a56-8273-692ee1492259">
Feb 28 10:04:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="serial">c38e4584-8a86-41a3-bc10-2a35205cf7c7</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="uuid">c38e4584-8a86-41a3-bc10-2a35205cf7c7</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk">
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config">
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b1:5c:cb"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <target dev="tapb9c5316d-0f"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/console.log" append="off"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:01 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:01 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.731 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Preparing to wait for external event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.731 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.732 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.732 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.733 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:56Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.734 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.734 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.735 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.737 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.737 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.747 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9c5316d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.748 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9c5316d-0f, col_values=(('external_ids', {'iface-id': 'b9c5316d-0f6b-4a56-8273-692ee1492259', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:5c:cb', 'vm-uuid': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:01 compute-0 NetworkManager[49805]: <info>  [1772273041.7517] manager: (tapb9c5316d-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.750 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.759 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f')
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.821 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:b1:5c:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.822 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Using config drive
Feb 28 10:04:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 259 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.4 MiB/s wr, 90 op/s
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.846 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1167639053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/287769195' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2196627638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:01 compute-0 nova_compute[243452]: 2026-02-28 10:04:01.987 243456 DEBUG nova.network.neutron [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.012 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Releasing lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.012 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance network_info: |[{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.013 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.013 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Refreshing network info cache for port 5051af6b-3383-4342-be85-7f3b44b527a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887011915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.017 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start _get_guest_xml network_info=[{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.023 243456 WARNING nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.029 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.031 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.045 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.047 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updated VIF entry in instance network info cache for port b9c5316d-0f6b-4a56-8273-692ee1492259. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.047 243456 DEBUG nova.network.neutron [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [{"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.051 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:57Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.052 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.053 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.054 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.057 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.058 243456 DEBUG nova.virt.libvirt.host [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.058 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.059 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.060 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.061 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.061 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.062 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.062 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.063 243456 DEBUG nova.virt.hardware [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.068 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.101 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <uuid>95805a4e-8bc0-47e3-981a-dfe27127a270</uuid>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <name>instance-00000019</name>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-2</nova:name>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:00</nova:creationTime>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <nova:port uuid="dca369b1-9f27-4c57-9e18-4a62a8bd95e9">
Feb 28 10:04:02 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="serial">95805a4e-8bc0-47e3-981a-dfe27127a270</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="uuid">95805a4e-8bc0-47e3-981a-dfe27127a270</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/95805a4e-8bc0-47e3-981a-dfe27127a270_disk">
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config">
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:02 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:63:2f:18"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <target dev="tapdca369b1-9f"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/console.log" append="off"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:02 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:02 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:02 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:02 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:02 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Preparing to wait for external event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.102 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.103 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.104 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:57Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.104 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.105 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.105 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.107 243456 DEBUG oslo_concurrency.lockutils [req-f80fd425-863e-47b4-a25f-0ad773bcef0c req-5ef41959-8e3b-4f40-8d7f-10e480e0e020 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c38e4584-8a86-41a3-bc10-2a35205cf7c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.108 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.109 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.112 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdca369b1-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdca369b1-9f, col_values=(('external_ids', {'iface-id': 'dca369b1-9f27-4c57-9e18-4a62a8bd95e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:2f:18', 'vm-uuid': '95805a4e-8bc0-47e3-981a-dfe27127a270'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:02 compute-0 NetworkManager[49805]: <info>  [1772273042.1161] manager: (tapdca369b1-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.125 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f')
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.166 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.167 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.167 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:63:2f:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.168 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Using config drive
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.191 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738159381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 28 10:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 28 10:04:02 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.633 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.663 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.668 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:02 compute-0 ceph-mon[76304]: pgmap v1042: 305 pgs: 305 active+clean; 259 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.4 MiB/s wr, 90 op/s
Feb 28 10:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1887011915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2738159381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:02 compute-0 ceph-mon[76304]: osdmap e136: 3 total, 3 up, 3 in
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.931 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Creating config drive at /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config
Feb 28 10:04:02 compute-0 nova_compute[243452]: 2026-02-28 10:04:02.935 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfncl_0wi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.060 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Creating config drive at /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.063 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd281ri9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.083 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updated VIF entry in instance network info cache for port dca369b1-9f27-4c57-9e18-4a62a8bd95e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.085 243456 DEBUG nova.network.neutron [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.088 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfncl_0wi" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.126 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.130 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.157 243456 DEBUG oslo_concurrency.lockutils [req-d8a33801-8dba-40a2-b201-084d2adf4b1a req-4bf72f07-4367-4732-9fc6-812748ee578b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.186 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd281ri9v" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649378117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.216 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.222 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.249 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.251 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:58Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.252 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.253 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.254 243456 DEBUG nova.objects.instance [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'pci_devices' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.257 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config 95805a4e-8bc0-47e3-981a-dfe27127a270_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.257 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deleting local config drive /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270/disk.config because it was imported into RBD.
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.282 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <uuid>400869d5-7369-466b-970e-ac7e3f4e2e4c</uuid>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <name>instance-00000018</name>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1851116039-1</nova:name>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:02</nova:creationTime>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:user uuid="7a44c3f76b144221b23743a554a4a839">tempest-ListServersNegativeTestJSON-784056711-project-member</nova:user>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:project uuid="1a4f47b449434c708378388c3e76610e">tempest-ListServersNegativeTestJSON-784056711</nova:project>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <nova:port uuid="5051af6b-3383-4342-be85-7f3b44b527a2">
Feb 28 10:04:03 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="serial">400869d5-7369-466b-970e-ac7e3f4e2e4c</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="uuid">400869d5-7369-466b-970e-ac7e3f4e2e4c</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/400869d5-7369-466b-970e-ac7e3f4e2e4c_disk">
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config">
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:04:cc:6f"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <target dev="tap5051af6b-33"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/console.log" append="off"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:03 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:03 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:03 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:03 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:03 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Preparing to wait for external event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.284 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.285 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.285 243456 DEBUG nova.virt.libvirt.vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user
_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:03:58Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.286 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.286 243456 DEBUG nova.network.os_vif_util [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.287 243456 DEBUG os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5051af6b-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5051af6b-33, col_values=(('external_ids', {'iface-id': '5051af6b-3383-4342-be85-7f3b44b527a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:cc:6f', 'vm-uuid': '400869d5-7369-466b-970e-ac7e3f4e2e4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.2953] manager: (tap5051af6b-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.298 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.304 243456 INFO os_vif [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33')
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.3127] manager: (tapdca369b1-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Feb 28 10:04:03 compute-0 kernel: tapdca369b1-9f: entered promiscuous mode
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00132|binding|INFO|Claiming lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for this chassis.
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00133|binding|INFO|dca369b1-9f27-4c57-9e18-4a62a8bd95e9: Claiming fa:16:3e:63:2f:18 10.100.0.5
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.332 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:2f:18 10.100.0.5'], port_security=['fa:16:3e:63:2f:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95805a4e-8bc0-47e3-981a-dfe27127a270', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=dca369b1-9f27-4c57-9e18-4a62a8bd95e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.333 156681 INFO neutron.agent.ovn.metadata.agent [-] Port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 bound to our chassis
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.334 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:03 compute-0 systemd-udevd[266638]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:03 compute-0 systemd-machined[209480]: New machine qemu-28-instance-00000019.
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.347 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf0ce44-1a74-4e54-b4ff-91cf71f7afbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.348 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc4a214b7-a1 in ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.350 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc4a214b7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f9a301-b9b3-4a01-a972-d4ea334956e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aac7d99b-f5f4-4bc9-8749-50ce5882ed7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.3597] device (tapdca369b1-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.3608] device (tapdca369b1-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.367 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[590e22cf-079f-4dd6-9346-9399ac17b2d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00134|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 ovn-installed in OVS
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00135|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 up in Southbound
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.377 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] No VIF found with MAC fa:16:3e:04:cc:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.378 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Using config drive
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.385 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f43d887d-7ade-4034-87fb-fccb46b4949e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.413 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.416 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[157aabb1-760e-4832-95d3-defeb4844201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.422 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config c38e4584-8a86-41a3-bc10-2a35205cf7c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.423 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deleting local config drive /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7/disk.config because it was imported into RBD.
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.4259] manager: (tapc4a214b7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1aac0b48-cd52-4a62-99cd-8eb952c4965d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.461 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9724a2e2-dd10-47aa-8598-7add94c007dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.4634] manager: (tapb9c5316d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Feb 28 10:04:03 compute-0 systemd-udevd[266689]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:03 compute-0 kernel: tapb9c5316d-0f: entered promiscuous mode
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00136|binding|INFO|Claiming lport b9c5316d-0f6b-4a56-8273-692ee1492259 for this chassis.
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00137|binding|INFO|b9c5316d-0f6b-4a56-8273-692ee1492259: Claiming fa:16:3e:b1:5c:cb 10.100.0.4
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.473 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[033c45bf-487e-478b-a539-ec19b0d54e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.475 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:5c:cb 10.100.0.4'], port_security=['fa:16:3e:b1:5c:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b9c5316d-0f6b-4a56-8273-692ee1492259) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.4767] device (tapb9c5316d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.4777] device (tapb9c5316d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00138|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 ovn-installed in OVS
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00139|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 up in Southbound
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 systemd-machined[209480]: New machine qemu-29-instance-0000001a.
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.5009] device (tapc4a214b7-a0): carrier: link connected
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.505 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[06e20277-a28a-436a-8633-7c39a8f92dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001a.
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de1ef5b4-b174-4fb0-8ea5-652d8bca2d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266709, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.555 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02b578d8-e27e-43f8-b100-761b1759f3ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:1336'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455078, 'tstamp': 455078}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266712, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8598ac85-dc94-41bb-bc28-1ab91e0ad5b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266717, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.611 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca815b2-ddd9-47c8-bfd6-b339a39457a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.680 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3d8fcd-9d23-4008-a148-8ffd486a6e3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.682 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 NetworkManager[49805]: <info>  [1772273043.6866] manager: (tapc4a214b7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Feb 28 10:04:03 compute-0 kernel: tapc4a214b7-a0: entered promiscuous mode
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.691 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:03 compute-0 ovn_controller[146846]: 2026-02-28T10:04:03Z|00140|binding|INFO|Releasing lport 6478ac98-6cab-4170-89f8-76b31581cc8c from this chassis (sb_readonly=0)
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.700 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.703 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01a04a0e-150d-4f90-99c0-6195ce0a2070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.704 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.pid.haproxy
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:03.705 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'env', 'PROCESS_TAG=haproxy-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c4a214b7-a9ab-4548-82a3-ef624bc8e6a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.7723367, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.773 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Started (Lifecycle Event)
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.810 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.815 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.7736096, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.815 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Paused (Lifecycle Event)
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.834 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 292 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG nova.compute.manager [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG oslo_concurrency.lockutils [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.844 243456 DEBUG nova.compute.manager [req-016ccefb-df35-4935-9c9f-38b52b60c8c2 req-cda5e572-041f-4c88-9d27-db184115cbd6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Processing event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.845 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.850 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.853 243456 INFO nova.virt.libvirt.driver [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance spawned successfully.
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.853 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273043.8511238, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.858 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Resumed (Lifecycle Event)
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.861 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updated VIF entry in instance network info cache for port 5051af6b-3383-4342-be85-7f3b44b527a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.861 243456 DEBUG nova.network.neutron [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [{"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.882 243456 DEBUG oslo_concurrency.lockutils [req-9b9a18d6-9ed2-4058-b258-c67fe7d1c4b4 req-12e582d8-5e53-4304-9f24-a746aaae678c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-400869d5-7369-466b-970e-ac7e3f4e2e4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.884 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.887 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.887 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.888 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.923 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1649378117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.953 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 6.21 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:03 compute-0 nova_compute[243452]: 2026-02-28 10:04:03.953 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.017 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 8.46 seconds to build instance.
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.041 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Creating config drive at /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.046 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm13phxm3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.077 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:04 compute-0 podman[266836]: 2026-02-28 10:04:04.096484514 +0000 UTC m=+0.049997907 container create f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.113 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.113597, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.114 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Started (Lifecycle Event)
Feb 28 10:04:04 compute-0 systemd[1]: Started libpod-conmon-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope.
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.146 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.150 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.1136684, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.151 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Paused (Lifecycle Event)
Feb 28 10:04:04 compute-0 podman[266836]: 2026-02-28 10:04:04.070048941 +0000 UTC m=+0.023562354 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.168 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.172 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0957a7b6eff2b2a82eae4443a3c42b23275f1b3848fb9d1ed3e7a8e33112a7b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.189 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm13phxm3" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:04 compute-0 podman[266836]: 2026-02-28 10:04:04.191726252 +0000 UTC m=+0.145239735 container init f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:04:04 compute-0 podman[266836]: 2026-02-28 10:04:04.198640666 +0000 UTC m=+0.152154059 container start f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.221 243456 DEBUG nova.storage.rbd_utils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] rbd image 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.225 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:04 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : New worker (266878) forked
Feb 28 10:04:04 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : Loading success.
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.253 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.257 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.257 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.271 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.289 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b9c5316d-0f6b-4a56-8273-692ee1492259 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.292 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9abb9d3a-2a2d-4cf4-a5c3-fa36ce40f696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.343 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f019c1a-fc73-4402-bed3-73b07b92befb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[09da118d-16b4-4a9c-b06a-9225b0991a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.350 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.351 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.362 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.362 243456 INFO nova.compute.claims [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.366 243456 DEBUG oslo_concurrency.processutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config 400869d5-7369-466b-970e-ac7e3f4e2e4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.367 243456 INFO nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deleting local config drive /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c/disk.config because it was imported into RBD.
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.380 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b778da09-c987-4b3b-8b67-7448e28d8b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7fea37-f86d-41f4-ac7a-007fb362821d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266914, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 kernel: tap5051af6b-33: entered promiscuous mode
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e90c19c2-6c94-4ab6-9788-4140a6213860]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266919, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266919, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 NetworkManager[49805]: <info>  [1772273044.4399] manager: (tap5051af6b-33): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.437 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 ovn_controller[146846]: 2026-02-28T10:04:04Z|00141|binding|INFO|Claiming lport 5051af6b-3383-4342-be85-7f3b44b527a2 for this chassis.
Feb 28 10:04:04 compute-0 ovn_controller[146846]: 2026-02-28T10:04:04Z|00142|binding|INFO|5051af6b-3383-4342-be85-7f3b44b527a2: Claiming fa:16:3e:04:cc:6f 10.100.0.9
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:04 compute-0 NetworkManager[49805]: <info>  [1772273044.4511] device (tap5051af6b-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:04 compute-0 NetworkManager[49805]: <info>  [1772273044.4520] device (tap5051af6b-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.455 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:cc:6f 10.100.0.9'], port_security=['fa:16:3e:04:cc:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '400869d5-7369-466b-970e-ac7e3f4e2e4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5051af6b-3383-4342-be85-7f3b44b527a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:04 compute-0 ovn_controller[146846]: 2026-02-28T10:04:04Z|00143|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 ovn-installed in OVS
Feb 28 10:04:04 compute-0 ovn_controller[146846]: 2026-02-28T10:04:04Z|00144|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 up in Southbound
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.464 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.464 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.466 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.467 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.468 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5051af6b-3383-4342-be85-7f3b44b527a2 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 bound to our chassis
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.469 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:04 compute-0 systemd-machined[209480]: New machine qemu-30-instance-00000018.
Feb 28 10:04:04 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000018.
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c45aac9-a510-4039-9db3-45d90f697094]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.518 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8ed719-923a-441b-a6dd-572e407e6028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.533 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e4ed5e-2a70-4d36-9b91-b01cc37ad000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.559 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.564 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[621b52cb-b216-4feb-8071-a29d4ae2fe5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be133535-30b3-4e58-b805-5f0832c62ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266938, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[313cdf29-5cd3-4ff7-8216-6ceff8951f8c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266940, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266940, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.610 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:04.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.929 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.9285486, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.929 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Started (Lifecycle Event)
Feb 28 10:04:04 compute-0 ceph-mon[76304]: pgmap v1044: 305 pgs: 305 active+clean; 292 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.955 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.962 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273044.9287317, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Paused (Lifecycle Event)
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.982 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:04 compute-0 nova_compute[243452]: 2026-02-28 10:04:04.985 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.014 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.026 243456 DEBUG nova.compute.manager [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.027 243456 DEBUG oslo_concurrency.lockutils [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.028 243456 DEBUG nova.compute.manager [req-2f2cc9fa-1bde-476f-9a05-87ccd353e2ae req-f797a9c4-0733-4eb5-8cdf-cc227ce2bac3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Processing event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.029 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.038 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273045.037757, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.039 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Resumed (Lifecycle Event)
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.041 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.044 243456 INFO nova.virt.libvirt.driver [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance spawned successfully.
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.045 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.087 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.093 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.098 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.098 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.099 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.099 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.100 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.100 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439360762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.145 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.155 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.161 243456 DEBUG nova.compute.provider_tree [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.168 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 6.63 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.169 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.175 243456 DEBUG nova.scheduler.client.report [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.199 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.200 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.231 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 9.68 seconds to build instance.
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.256 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.257 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.260 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.274 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.294 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.377 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.378 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.378 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating image(s)
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.407 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.429 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.452 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.457 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.528 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.529 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.530 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.530 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.554 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.559 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f1026535-7729-43d0-8027-dd71ef14dfbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.859 243456 DEBUG nova.policy [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/439360762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:05 compute-0 nova_compute[243452]: 2026-02-28 10:04:05.968 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f1026535-7729-43d0-8027-dd71ef14dfbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.007 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.007 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.008 243456 WARNING nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received unexpected event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with vm_state active and task_state None.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.009 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Processing event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG oslo_concurrency.lockutils [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.010 243456 DEBUG nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.011 243456 WARNING nova.compute.manager [req-19440f17-c5a2-458c-984d-ebc99cd5664e req-808a2ba5-9abd-4c11-93dc-cfa90efb45f9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received unexpected event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with vm_state building and task_state spawning.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.012 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.055 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273046.0170944, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.056 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Resumed (Lifecycle Event)
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.058 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.066 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.100 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.105 243456 INFO nova.virt.libvirt.driver [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance spawned successfully.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.106 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.109 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.158 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.166 243456 DEBUG nova.objects.instance [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.170 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.171 243456 DEBUG nova.virt.libvirt.driver [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.199 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.200 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Ensure instance console log exists: /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.200 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.201 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.201 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.270 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 9.66 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.270 243456 DEBUG nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.359 243456 INFO nova.compute.manager [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 10.82 seconds to build instance.
Feb 28 10:04:06 compute-0 nova_compute[243452]: 2026-02-28 10:04:06.379 243456 DEBUG oslo_concurrency.lockutils [None req-7efd93b3-5925-4558-b2bf-82dd33c697bc 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:06 compute-0 ceph-mon[76304]: pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.310 243456 DEBUG nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.310 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG oslo_concurrency.lockutils [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 DEBUG nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] No waiting events found dispatching network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.311 243456 WARNING nova.compute.manager [req-84c9ee35-8032-44af-b3f9-54cf79f902c3 req-e8048ec5-f9f0-472d-b0e1-92b285c8fb24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received unexpected event network-vif-plugged-5051af6b-3383-4342-be85-7f3b44b527a2 for instance with vm_state active and task_state None.
Feb 28 10:04:07 compute-0 nova_compute[243452]: 2026-02-28 10:04:07.316 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully created port: 76d5199d-5d1e-4198-8780-c2537175a2be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.571 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully updated port: 76d5199d-5d1e-4198-8780-c2537175a2be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.587 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.587 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.588 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.727 243456 DEBUG nova.compute.manager [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.728 243456 DEBUG nova.compute.manager [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.729 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:08 compute-0 nova_compute[243452]: 2026-02-28 10:04:08.826 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:08 compute-0 ceph-mon[76304]: pgmap v1046: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.346 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.476 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.479 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.480 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.480 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.481 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.483 243456 INFO nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Terminating instance
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.485 243456 DEBUG nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:09 compute-0 kernel: tap5051af6b-33 (unregistering): left promiscuous mode
Feb 28 10:04:09 compute-0 NetworkManager[49805]: <info>  [1772273049.5257] device (tap5051af6b-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 ovn_controller[146846]: 2026-02-28T10:04:09Z|00145|binding|INFO|Releasing lport 5051af6b-3383-4342-be85-7f3b44b527a2 from this chassis (sb_readonly=0)
Feb 28 10:04:09 compute-0 ovn_controller[146846]: 2026-02-28T10:04:09Z|00146|binding|INFO|Setting lport 5051af6b-3383-4342-be85-7f3b44b527a2 down in Southbound
Feb 28 10:04:09 compute-0 ovn_controller[146846]: 2026-02-28T10:04:09Z|00147|binding|INFO|Removing iface tap5051af6b-33 ovn-installed in OVS
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.540 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:cc:6f 10.100.0.9'], port_security=['fa:16:3e:04:cc:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '400869d5-7369-466b-970e-ac7e3f4e2e4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5051af6b-3383-4342-be85-7f3b44b527a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.542 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5051af6b-3383-4342-be85-7f3b44b527a2 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.544 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc5e764-cf1e-4681-875f-4681b943dd10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb 28 10:04:09 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000018.scope: Consumed 4.946s CPU time.
Feb 28 10:04:09 compute-0 systemd-machined[209480]: Machine qemu-30-instance-00000018 terminated.
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.601 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9404f4ca-5a55-497f-8e22-75794572af20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.606 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67599f2b-5342-40cd-a2e1-f4cd696ac314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.634 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5df59501-160c-4d64-bcc8-874f22d596c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6c1108-4a5c-479f-a42e-06c8b1c32d2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267201, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.674 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2259dd61-fe72-4b0d-8d8f-0e7d777a812d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267202, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267202, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.683 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:09.684 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.728 243456 INFO nova.virt.libvirt.driver [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Instance destroyed successfully.
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.730 243456 DEBUG nova.objects.instance [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid 400869d5-7369-466b-970e-ac7e3f4e2e4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.747 243456 DEBUG nova.virt.libvirt.vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-1',id=24,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:05Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=400869d5-7369-466b-970e-ac7e3f4e2e4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.748 243456 DEBUG nova.network.os_vif_util [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "5051af6b-3383-4342-be85-7f3b44b527a2", "address": "fa:16:3e:04:cc:6f", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5051af6b-33", "ovs_interfaceid": "5051af6b-3383-4342-be85-7f3b44b527a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.749 243456 DEBUG nova.network.os_vif_util [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.749 243456 DEBUG os_vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.755 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5051af6b-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.763 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.765 243456 INFO os_vif [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:cc:6f,bridge_name='br-int',has_traffic_filtering=True,id=5051af6b-3383-4342-be85-7f3b44b527a2,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5051af6b-33')
Feb 28 10:04:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.852 243456 DEBUG nova.network.neutron [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.882 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.883 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance network_info: |[{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.883 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.884 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595329839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.888 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start _get_guest_xml network_info=[{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.896 243456 WARNING nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.901 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.903 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.915 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.916 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.919 243456 DEBUG nova.virt.libvirt.host [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.920 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.920 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.922 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.922 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.923 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.924 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.925 243456 DEBUG nova.virt.hardware [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:09 compute-0 nova_compute[243452]: 2026-02-28 10:04:09.928 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3595329839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.080 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.097 243456 INFO nova.virt.libvirt.driver [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deleting instance files /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c_del
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.104 243456 INFO nova.virt.libvirt.driver [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deletion of /var/lib/nova/instances/400869d5-7369-466b-970e-ac7e3f4e2e4c_del complete
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.115 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.116 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.122 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.123 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.203 243456 INFO nova.compute.manager [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.204 243456 DEBUG oslo.service.loopingcall [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.204 243456 DEBUG nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.205 243456 DEBUG nova.network.neutron [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.322 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.324 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4132MB free_disk=59.91480299457908GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.324 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.325 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.405 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 400869d5-7369-466b-970e-ac7e3f4e2e4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 95805a4e-8bc0-47e3-981a-dfe27127a270 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c38e4584-8a86-41a3-bc10-2a35205cf7c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f1026535-7729-43d0-8027-dd71ef14dfbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332031427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.494 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.518 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.543 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:10 compute-0 nova_compute[243452]: 2026-02-28 10:04:10.548 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:10 compute-0 ceph-mon[76304]: pgmap v1047: 305 pgs: 305 active+clean; 316 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.3 MiB/s wr, 305 op/s
Feb 28 10:04:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1332031427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183075377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.051 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.058 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.088 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.119 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.120 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.120 243456 DEBUG nova.network.neutron [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096698919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.141 243456 INFO nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Took 0.94 seconds to deallocate network for instance.
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.153 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.155 243456 DEBUG nova.virt.libvirt.vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.156 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.157 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.159 243456 DEBUG nova.objects.instance [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.181 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <name>instance-0000001b</name>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:09</nova:creationTime>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:11 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="serial">f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="uuid">f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk">
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config">
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7a:bd:5b"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <target dev="tap76d5199d-5d"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log" append="off"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:11 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:11 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:11 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:11 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:11 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.189 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Preparing to wait for external event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.189 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.190 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.190 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.192 243456 DEBUG nova.virt.libvirt.vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.192 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.194 243456 DEBUG nova.network.os_vif_util [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.195 243456 DEBUG os_vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.201 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.202 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.205 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76d5199d-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.208 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76d5199d-5d, col_values=(('external_ids', {'iface-id': '76d5199d-5d1e-4198-8780-c2537175a2be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:bd:5b', 'vm-uuid': 'f1026535-7729-43d0-8027-dd71ef14dfbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:11 compute-0 NetworkManager[49805]: <info>  [1772273051.2116] manager: (tap76d5199d-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.222 243456 INFO os_vif [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d')
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.308 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.309 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.310 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:7a:bd:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.310 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Using config drive
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.337 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.345 243456 DEBUG oslo_concurrency.processutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.829 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Creating config drive at /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.840 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp631urk_j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 315 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.2 MiB/s wr, 340 op/s
Feb 28 10:04:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373037427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.874 243456 DEBUG oslo_concurrency.processutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.881 243456 DEBUG nova.compute.provider_tree [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.887 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.888 243456 DEBUG nova.network.neutron [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.915 243456 DEBUG nova.scheduler.client.report [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.923 243456 DEBUG oslo_concurrency.lockutils [req-3fa719ea-8985-4749-ad4b-f0d279f4e1b8 req-80aaa86a-4da5-479e-8740-e5464ea47fca 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.945 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:11 compute-0 nova_compute[243452]: 2026-02-28 10:04:11.973 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp631urk_j" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2183075377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2096698919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3373037427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.005 243456 DEBUG nova.storage.rbd_utils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.010 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.026 243456 INFO nova.scheduler.client.report [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance 400869d5-7369-466b-970e-ac7e3f4e2e4c
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.092 243456 DEBUG oslo_concurrency.lockutils [None req-bb52df04-7677-43b7-ae58-3b096acd596a 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "400869d5-7369-466b-970e-ac7e3f4e2e4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.122 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.123 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.123 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.134 243456 DEBUG nova.compute.manager [req-c621f763-9b68-407a-aa6a-383ef2385eda req-00659073-e0da-4d7b-b26a-11178ec0c500 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Received event network-vif-deleted-5051af6b-3383-4342-be85-7f3b44b527a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.135 243456 DEBUG oslo_concurrency.processutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.136 243456 INFO nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deleting local config drive /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/disk.config because it was imported into RBD.
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.145 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:04:12 compute-0 kernel: tap76d5199d-5d: entered promiscuous mode
Feb 28 10:04:12 compute-0 ovn_controller[146846]: 2026-02-28T10:04:12Z|00148|binding|INFO|Claiming lport 76d5199d-5d1e-4198-8780-c2537175a2be for this chassis.
Feb 28 10:04:12 compute-0 systemd-udevd[267192]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:12 compute-0 ovn_controller[146846]: 2026-02-28T10:04:12Z|00149|binding|INFO|76d5199d-5d1e-4198-8780-c2537175a2be: Claiming fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.1805] manager: (tap76d5199d-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.1938] device (tap76d5199d-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.1943] device (tap76d5199d-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.193 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.194 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.196 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.196 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.197 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13a72c46-6f21-475c-9e9b-51bca3a5e037]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.207 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60dcefc3-91 in ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.209 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60dcefc3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d275725b-6cfb-4a36-a967-7fdb8d8d0954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 systemd-machined[209480]: New machine qemu-31-instance-0000001b.
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.211 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32db3bc1-d678-4ff7-9b62-c2fe579d013a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.211 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.219 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c8605f-bc97-40c5-989a-6fc534f6d671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:12 compute-0 ovn_controller[146846]: 2026-02-28T10:04:12Z|00150|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be ovn-installed in OVS
Feb 28 10:04:12 compute-0 ovn_controller[146846]: 2026-02-28T10:04:12Z|00151|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be up in Southbound
Feb 28 10:04:12 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a656fea-b2b9-446d-97ce-6b3a8095e9b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.265 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c868daf0-ab9b-4b8f-a551-081fb75fbb0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.2720] manager: (tap60dcefc3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[499cc033-54ef-4e05-aca2-52514800dd01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.282 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.283 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.295 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a4da4a3b-75f5-471c-9ef4-58bc1dff0aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.297 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.298 243456 INFO nova.compute.claims [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9521226c-60d3-4e20-b3ab-642cdbddcd0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.3167] device (tap60dcefc3-90): carrier: link connected
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.320 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.321 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.321 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[705ee4c8-a51d-42d7-8da0-703cebc3efd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5075d66-3561-43db-bfcc-faca94d46d45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267446, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f023fc8-9854-4b88-8cdb-dd2096938868]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:227a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455960, 'tstamp': 455960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267447, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9afaa222-f95c-4158-8d2b-675b6c0efa51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267448, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e7a01f-0d87-488a-866d-9067bd1e407f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.438 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37e891bd-3bc6-4052-88ae-a4c3afea3490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:12 compute-0 NetworkManager[49805]: <info>  [1772273052.4426] manager: (tap60dcefc3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:12 compute-0 kernel: tap60dcefc3-90: entered promiscuous mode
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.445 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:12 compute-0 ovn_controller[146846]: 2026-02-28T10:04:12Z|00152|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.471 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[095ee223-9216-4b09-b77c-825e72a9d90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.473 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:12.473 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'env', 'PROCESS_TAG=haproxy-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60dcefc3-95e1-437e-9c00-e51656c39b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.612 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.612 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.627 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.680 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:12 compute-0 podman[267499]: 2026-02-28 10:04:12.8847539 +0000 UTC m=+0.078219710 container create 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:04:12 compute-0 systemd[1]: Started libpod-conmon-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope.
Feb 28 10:04:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138718886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:12 compute-0 podman[267499]: 2026-02-28 10:04:12.838193292 +0000 UTC m=+0.031659152 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e56ba081050cdf68ac2830d8fd1d550ce7d433e93c5d67b3c7be66a947248126/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.960 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:12 compute-0 podman[267499]: 2026-02-28 10:04:12.964855692 +0000 UTC m=+0.158321502 container init 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.966 243456 DEBUG nova.compute.provider_tree [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:12 compute-0 podman[267499]: 2026-02-28 10:04:12.970000447 +0000 UTC m=+0.163466257 container start 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:04:12 compute-0 nova_compute[243452]: 2026-02-28 10:04:12.983 243456 DEBUG nova.scheduler.client.report [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:12 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : New worker (267521) forked
Feb 28 10:04:12 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : Loading success.
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.004 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.004 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.007 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:13 compute-0 ceph-mon[76304]: pgmap v1048: 305 pgs: 305 active+clean; 315 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.2 MiB/s wr, 340 op/s
Feb 28 10:04:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3138718886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.012 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.013 243456 INFO nova.compute.claims [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.166 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.167 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.186 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.204 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.230 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273053.229568, f1026535-7729-43d0-8027-dd71ef14dfbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.230 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Started (Lifecycle Event)
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.262 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.267 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273053.2304146, f1026535-7729-43d0-8027-dd71ef14dfbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.267 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Paused (Lifecycle Event)
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.297 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.301 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.321 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.323 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.323 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating image(s)
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.348 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.375 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.402 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.405 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.425 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.438 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.471 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.472 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.473 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.474 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.497 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.500 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.516 243456 DEBUG nova.policy [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.522 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [{"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.546 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-95805a4e-8bc0-47e3-981a-dfe27127a270" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.547 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.548 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.549 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.736 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.794 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:04:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 299 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.9 MiB/s wr, 282 op/s
Feb 28 10:04:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222971367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.983 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:13 compute-0 nova_compute[243452]: 2026-02-28 10:04:13.990 243456 DEBUG nova.compute.provider_tree [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.016 243456 DEBUG nova.scheduler.client.report [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.055 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.056 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.104 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.105 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.120 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.135 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.157 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Successfully created port: 3c6e6f23-d681-47b4-a8e5-474ce94e984d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.215 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.217 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.218 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating image(s)
Feb 28 10:04:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/222971367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.433 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.454 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.477 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.485 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.501 243456 DEBUG nova.policy [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.545 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.546 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.546 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.547 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.566 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.572 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.882 243456 DEBUG nova.objects.instance [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Ensure instance console log exists: /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.902 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.902 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.903 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:14 compute-0 nova_compute[243452]: 2026-02-28 10:04:14.991 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.041 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.118 243456 DEBUG nova.objects.instance [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.156 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.157 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Ensure instance console log exists: /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.157 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.158 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.158 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:15 compute-0 ceph-mon[76304]: pgmap v1049: 305 pgs: 305 active+clean; 299 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.9 MiB/s wr, 282 op/s
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.280 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Successfully updated port: 3c6e6f23-d681-47b4-a8e5-474ce94e984d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.294 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.295 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.295 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.420 243456 DEBUG nova.compute.manager [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-changed-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.421 243456 DEBUG nova.compute.manager [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Refreshing instance network info cache due to event network-changed-3c6e6f23-d681-47b4-a8e5-474ce94e984d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.421 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.482 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Successfully created port: 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.507 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.661 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.662 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.662 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.663 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.664 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.666 243456 INFO nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Terminating instance
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.668 243456 DEBUG nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:15 compute-0 kernel: tapdca369b1-9f (unregistering): left promiscuous mode
Feb 28 10:04:15 compute-0 NetworkManager[49805]: <info>  [1772273055.7158] device (tapdca369b1-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:15 compute-0 ovn_controller[146846]: 2026-02-28T10:04:15Z|00153|binding|INFO|Releasing lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 from this chassis (sb_readonly=0)
Feb 28 10:04:15 compute-0 ovn_controller[146846]: 2026-02-28T10:04:15Z|00154|binding|INFO|Setting lport dca369b1-9f27-4c57-9e18-4a62a8bd95e9 down in Southbound
Feb 28 10:04:15 compute-0 ovn_controller[146846]: 2026-02-28T10:04:15Z|00155|binding|INFO|Removing iface tapdca369b1-9f ovn-installed in OVS
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.764 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:2f:18 10.100.0.5'], port_security=['fa:16:3e:63:2f:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95805a4e-8bc0-47e3-981a-dfe27127a270', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=dca369b1-9f27-4c57-9e18-4a62a8bd95e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.765 156681 INFO neutron.agent.ovn.metadata.agent [-] Port dca369b1-9f27-4c57-9e18-4a62a8bd95e9 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis
Feb 28 10:04:15 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb 28 10:04:15 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 11.289s CPU time.
Feb 28 10:04:15 compute-0 systemd-machined[209480]: Machine qemu-28-instance-00000019 terminated.
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5d9dc1-cc30-4edf-93c8-01e79d064443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.803 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[15267a39-803f-4efc-8080-feda80500290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.805 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c6eb26-e165-4505-bab8-f823933e9d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.829 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4ceb3943-d93f-44fd-b1d0-9ba44a60fe7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 351 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 302 op/s
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b463abcd-1711-4b79-96d9-801886eba87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc4a214b7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:13:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455078, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267938, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e6c6f-3a78-4e01-83e2-f8dd7b940908]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455092, 'tstamp': 455092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267941, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc4a214b7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455096, 'tstamp': 455096}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267941, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.866 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a214b7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc4a214b7-a0, col_values=(('external_ids', {'iface-id': '6478ac98-6cab-4170-89f8-76b31581cc8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:15.871 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:15 compute-0 NetworkManager[49805]: <info>  [1772273055.8826] manager: (tapdca369b1-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.897 243456 INFO nova.virt.libvirt.driver [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Instance destroyed successfully.
Feb 28 10:04:15 compute-0 nova_compute[243452]: 2026-02-28 10:04:15.898 243456 DEBUG nova.objects.instance [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid 95805a4e-8bc0-47e3-981a-dfe27127a270 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.009 243456 DEBUG nova.virt.libvirt.vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-2',id=25,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:04:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:03Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=95805a4e-8bc0-47e3-981a-dfe27127a270,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.011 243456 DEBUG nova.network.os_vif_util [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "address": "fa:16:3e:63:2f:18", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdca369b1-9f", "ovs_interfaceid": "dca369b1-9f27-4c57-9e18-4a62a8bd95e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.012 243456 DEBUG nova.network.os_vif_util [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.013 243456 DEBUG os_vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.016 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdca369b1-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.018 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.018 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.021 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.021 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.022 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.023 243456 INFO nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Terminating instance
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.025 243456 DEBUG nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.031 243456 INFO os_vif [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:2f:18,bridge_name='br-int',has_traffic_filtering=True,id=dca369b1-9f27-4c57-9e18-4a62a8bd95e9,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdca369b1-9f')
Feb 28 10:04:16 compute-0 kernel: tapb9c5316d-0f (unregistering): left promiscuous mode
Feb 28 10:04:16 compute-0 NetworkManager[49805]: <info>  [1772273056.0723] device (tapb9c5316d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 ovn_controller[146846]: 2026-02-28T10:04:16Z|00156|binding|INFO|Releasing lport b9c5316d-0f6b-4a56-8273-692ee1492259 from this chassis (sb_readonly=0)
Feb 28 10:04:16 compute-0 ovn_controller[146846]: 2026-02-28T10:04:16Z|00157|binding|INFO|Setting lport b9c5316d-0f6b-4a56-8273-692ee1492259 down in Southbound
Feb 28 10:04:16 compute-0 ovn_controller[146846]: 2026-02-28T10:04:16Z|00158|binding|INFO|Removing iface tapb9c5316d-0f ovn-installed in OVS
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.092 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:5c:cb 10.100.0.4'], port_security=['fa:16:3e:b1:5c:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c38e4584-8a86-41a3-bc10-2a35205cf7c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a4f47b449434c708378388c3e76610e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40d4cd58-fe6b-4388-8998-09342a19e6c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dbb9d37-cc7b-4cfd-a162-1ef19741223f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b9c5316d-0f6b-4a56-8273-692ee1492259) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.094 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b9c5316d-0f6b-4a56-8273-692ee1492259 in datapath c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 unbound from our chassis
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.096 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.097 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5c799d-073c-480f-892b-bcb47a46b39e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.098 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 namespace which is not needed anymore
Feb 28 10:04:16 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb 28 10:04:16 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Consumed 10.516s CPU time.
Feb 28 10:04:16 compute-0 systemd-machined[209480]: Machine qemu-29-instance-0000001a terminated.
Feb 28 10:04:16 compute-0 ceph-mon[76304]: pgmap v1050: 305 pgs: 305 active+clean; 351 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 302 op/s
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : haproxy version is 2.8.14-c23fe91
Feb 28 10:04:16 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [NOTICE]   (266865) : path to executable is /usr/sbin/haproxy
Feb 28 10:04:16 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [WARNING]  (266865) : Exiting Master process...
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [ALERT]    (266865) : Current worker (266878) exited with code 143 (Terminated)
Feb 28 10:04:16 compute-0 neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1[266854]: [WARNING]  (266865) : All workers exited. Exiting... (0)
Feb 28 10:04:16 compute-0 systemd[1]: libpod-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope: Deactivated successfully.
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.267 243456 INFO nova.virt.libvirt.driver [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Instance destroyed successfully.
Feb 28 10:04:16 compute-0 podman[267991]: 2026-02-28 10:04:16.268683833 +0000 UTC m=+0.062284142 container died f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.268 243456 DEBUG nova.objects.instance [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lazy-loading 'resources' on Instance uuid c38e4584-8a86-41a3-bc10-2a35205cf7c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.282 243456 DEBUG nova.virt.libvirt.vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:03:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1851116039',display_name='tempest-ListServersNegativeTestJSON-server-1851116039-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1851116039-3',id=26,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-02-28T10:04:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a4f47b449434c708378388c3e76610e',ramdisk_id='',reservation_id='r-my980dr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-784056711',owner_user_name='tempest-ListServersNegativeTestJSON-784056711-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:06Z,user_data=None,user_id='7a44c3f76b144221b23743a554a4a839',uuid=c38e4584-8a86-41a3-bc10-2a35205cf7c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.283 243456 DEBUG nova.network.os_vif_util [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converting VIF {"id": "b9c5316d-0f6b-4a56-8273-692ee1492259", "address": "fa:16:3e:b1:5c:cb", "network": {"id": "c4a214b7-a9ab-4548-82a3-ef624bc8e6a1", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-619181201-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a4f47b449434c708378388c3e76610e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9c5316d-0f", "ovs_interfaceid": "b9c5316d-0f6b-4a56-8273-692ee1492259", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.284 243456 DEBUG nova.network.os_vif_util [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.284 243456 DEBUG os_vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.287 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9c5316d-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.295 243456 INFO os_vif [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:5c:cb,bridge_name='br-int',has_traffic_filtering=True,id=b9c5316d-0f6b-4a56-8273-692ee1492259,network=Network(c4a214b7-a9ab-4548-82a3-ef624bc8e6a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9c5316d-0f')
Feb 28 10:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242-userdata-shm.mount: Deactivated successfully.
Feb 28 10:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0957a7b6eff2b2a82eae4443a3c42b23275f1b3848fb9d1ed3e7a8e33112a7b0-merged.mount: Deactivated successfully.
Feb 28 10:04:16 compute-0 podman[267991]: 2026-02-28 10:04:16.320218582 +0000 UTC m=+0.113818891 container cleanup f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:16 compute-0 systemd[1]: libpod-conmon-f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242.scope: Deactivated successfully.
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.347 243456 INFO nova.virt.libvirt.driver [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deleting instance files /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270_del
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.348 243456 INFO nova.virt.libvirt.driver [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deletion of /var/lib/nova/instances/95805a4e-8bc0-47e3-981a-dfe27127a270_del complete
Feb 28 10:04:16 compute-0 podman[268047]: 2026-02-28 10:04:16.382708439 +0000 UTC m=+0.045045387 container remove f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.388 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8019c9-ac93-4747-8d30-64010010c445]: (4, ('Sat Feb 28 10:04:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 (f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242)\nf03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242\nSat Feb 28 10:04:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 (f03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242)\nf03ea4b62c28bdcec44513089d170e5ef89ddf6bdf899a4fc98f80c1df62e242\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.389 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88caa938-acac-4765-af22-30f2f1a9129f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.390 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a214b7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 kernel: tapc4a214b7-a0: left promiscuous mode
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.397 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a212a6d-2960-4a7f-8278-20ee151a7c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.403 243456 INFO nova.compute.manager [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 0.73 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.403 243456 DEBUG oslo.service.loopingcall [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.404 243456 DEBUG nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.404 243456 DEBUG nova.network.neutron [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.415 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2231474-848b-4b09-b3c5-d6d05efc9a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff7f827-d1dd-4433-a8e1-446acb8e1cb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea07fef-61b0-4a98-8934-1feea67ffe7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455069, 'reachable_time': 28078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268065, 'error': None, 'target': 'ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dc4a214b7\x2da9ab\x2d4548\x2d82a3\x2def624bc8e6a1.mount: Deactivated successfully.
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.437 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c4a214b7-a9ab-4548-82a3-ef624bc8e6a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:04:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:16.438 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8eeb50cd-6961-48c4-bf82-8a73a0b3a519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.537 243456 INFO nova.virt.libvirt.driver [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deleting instance files /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7_del
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.538 243456 INFO nova.virt.libvirt.driver [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deletion of /var/lib/nova/instances/c38e4584-8a86-41a3-bc10-2a35205cf7c7_del complete
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.596 243456 INFO nova.compute.manager [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.597 243456 DEBUG oslo.service.loopingcall [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.597 243456 DEBUG nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.598 243456 DEBUG nova.network.neutron [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.656 243456 DEBUG nova.network.neutron [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:04:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 51K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 12K writes, 3648 syncs, 3.44 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6674 writes, 27K keys, 6674 commit groups, 1.0 writes per commit group, ingest: 29.85 MB, 0.05 MB/s
                                           Interval WAL: 6674 writes, 2630 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.677 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.678 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance network_info: |[{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.679 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.679 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Refreshing network info cache for port 3c6e6f23-d681-47b4-a8e5-474ce94e984d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.684 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start _get_guest_xml network_info=[{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.690 243456 WARNING nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.697 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.698 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.706 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.707 243456 DEBUG nova.virt.libvirt.host [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.707 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.708 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.709 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.709 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.710 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.711 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.711 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.712 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.712 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.713 243456 DEBUG nova.virt.hardware [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:16 compute-0 nova_compute[243452]: 2026-02-28 10:04:16.717 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719924605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.253 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3719924605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.289 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.295 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.458 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Successfully updated port: 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.489 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.489 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.490 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.512 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Processing event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.513 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 WARNING nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with vm_state building and task_state spawning.
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.514 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-unplugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.515 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] No waiting events found dispatching network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 WARNING nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received unexpected event network-vif-plugged-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 for instance with vm_state active and task_state deleting.
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.516 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG oslo_concurrency.lockutils [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.517 243456 DEBUG nova.compute.manager [req-a5d2cfa6-8696-42c6-bd16-096342bb1d8f req-0b7111f5-f7c5-4dc5-ba5a-f5159bd8ded3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-unplugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.518 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.523 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273057.52318, f1026535-7729-43d0-8027-dd71ef14dfbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.523 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Resumed (Lifecycle Event)
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.525 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.532 243456 INFO nova.virt.libvirt.driver [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance spawned successfully.
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.533 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.551 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.555 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.562 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.563 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.563 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.564 243456 DEBUG nova.virt.libvirt.driver [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.627 243456 INFO nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 12.25 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.628 243456 DEBUG nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.683 243456 INFO nova.compute.manager [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 13.37 seconds to build instance.
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.702 243456 DEBUG oslo_concurrency.lockutils [None req-4f74f99b-ae84-48c9-b4e8-80ca42878470 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.810 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705202442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.840 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.843 243456 DEBUG nova.virt.libvirt.vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:13Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.844 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.846 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.848 243456 DEBUG nova.objects.instance [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.7 MiB/s wr, 332 op/s
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.868 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <uuid>6d74f9b9-edf7-4d81-b139-cb664b2ab68c</uuid>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <name>instance-0000001c</name>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-279405115</nova:name>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:16</nova:creationTime>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <nova:port uuid="3c6e6f23-d681-47b4-a8e5-474ce94e984d">
Feb 28 10:04:17 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="serial">6d74f9b9-edf7-4d81-b139-cb664b2ab68c</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="uuid">6d74f9b9-edf7-4d81-b139-cb664b2ab68c</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk">
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config">
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:32:58:99"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <target dev="tap3c6e6f23-d6"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/console.log" append="off"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:17 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:17 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:17 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:17 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:17 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Preparing to wait for external event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.870 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.871 243456 DEBUG nova.virt.libvirt.vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:13Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.871 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.872 243456 DEBUG nova.network.os_vif_util [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.872 243456 DEBUG os_vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.873 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.873 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.874 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.876 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c6e6f23-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.877 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c6e6f23-d6, col_values=(('external_ids', {'iface-id': '3c6e6f23-d681-47b4-a8e5-474ce94e984d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:58:99', 'vm-uuid': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:17 compute-0 NetworkManager[49805]: <info>  [1772273057.8799] manager: (tap3c6e6f23-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.887 243456 INFO os_vif [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6')
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.945 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:32:58:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.946 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Using config drive
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.972 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:17 compute-0 nova_compute[243452]: 2026-02-28 10:04:17.980 243456 DEBUG nova.network.neutron [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.003 243456 DEBUG nova.network.neutron [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.013 243456 INFO nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Took 1.41 seconds to deallocate network for instance.
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.019 243456 INFO nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Took 1.61 seconds to deallocate network for instance.
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.076 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.077 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.079 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.194 243456 DEBUG oslo_concurrency.processutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2705202442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:18 compute-0 ceph-mon[76304]: pgmap v1051: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 6.7 MiB/s wr, 332 op/s
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.457 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Creating config drive at /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.463 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcy_ql8hm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.482 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updated VIF entry in instance network info cache for port 3c6e6f23-d681-47b4-a8e5-474ce94e984d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.483 243456 DEBUG nova.network.neutron [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [{"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.498 243456 DEBUG oslo_concurrency.lockutils [req-bcb686c6-beae-4f3f-a5ef-4eddac6b2ea5 req-db3edd9d-e7f5-4b9a-a1d3-64a2301a2246 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6d74f9b9-edf7-4d81-b139-cb664b2ab68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.589 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcy_ql8hm" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.627 243456 DEBUG nova.storage.rbd_utils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.633 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.663 243456 DEBUG nova.network.neutron [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.689 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.690 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance network_info: |[{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.694 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start _get_guest_xml network_info=[{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.700 243456 WARNING nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.705 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150524428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.706 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.710 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.711 243456 DEBUG nova.virt.libvirt.host [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.711 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.712 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.713 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.713 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.714 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.714 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.715 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.716 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.716 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.717 243456 DEBUG nova.virt.hardware [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.722 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.739 243456 DEBUG oslo_concurrency.processutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.745 243456 DEBUG nova.compute.provider_tree [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.781 243456 DEBUG oslo_concurrency.processutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config 6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.782 243456 INFO nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deleting local config drive /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c/disk.config because it was imported into RBD.
Feb 28 10:04:18 compute-0 kernel: tap3c6e6f23-d6: entered promiscuous mode
Feb 28 10:04:18 compute-0 NetworkManager[49805]: <info>  [1772273058.8146] manager: (tap3c6e6f23-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Feb 28 10:04:18 compute-0 ovn_controller[146846]: 2026-02-28T10:04:18Z|00159|binding|INFO|Claiming lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d for this chassis.
Feb 28 10:04:18 compute-0 ovn_controller[146846]: 2026-02-28T10:04:18Z|00160|binding|INFO|3c6e6f23-d681-47b4-a8e5-474ce94e984d: Claiming fa:16:3e:32:58:99 10.100.0.8
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:18 compute-0 systemd-udevd[268016]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:18 compute-0 NetworkManager[49805]: <info>  [1772273058.8320] device (tap3c6e6f23-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:18 compute-0 NetworkManager[49805]: <info>  [1772273058.8325] device (tap3c6e6f23-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:18 compute-0 ovn_controller[146846]: 2026-02-28T10:04:18Z|00161|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d ovn-installed in OVS
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.863 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:18 compute-0 systemd-machined[209480]: New machine qemu-32-instance-0000001c.
Feb 28 10:04:18 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Feb 28 10:04:18 compute-0 ovn_controller[146846]: 2026-02-28T10:04:18Z|00162|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d up in Southbound
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.914 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:58:99 10.100.0.8'], port_security=['fa:16:3e:32:58:99 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3c6e6f23-d681-47b4-a8e5-474ce94e984d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.916 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6e6f23-d681-47b4-a8e5-474ce94e984d in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.919 243456 DEBUG nova.scheduler.client.report [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.919 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d94869fd-98eb-4452-9038-076f674c9e72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.935 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.937 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.938 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.938 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0523a2-8276-406d-94db-983f58886d8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df5d8098-11c0-423f-99a4-5b8b16ff6dd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.940 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.955 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd3f93a-6a65-4f3b-af27-e6163cca8af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:18 compute-0 nova_compute[243452]: 2026-02-28 10:04:18.958 243456 INFO nova.scheduler.client.report [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance c38e4584-8a86-41a3-bc10-2a35205cf7c7
Feb 28 10:04:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:18.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d262d52-dd10-4e4e-ba32-fb7088097f9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e280dcf-1c86-41fa-8534-096aebdda992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 NetworkManager[49805]: <info>  [1772273059.0161] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[427e766f-5afd-412d-acad-2704c6024649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00bf3894-15e2-419c-992d-8ac0cd8c955b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.049 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1339a6af-111c-4371-9ef3-c8f534d6f33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.062 243456 DEBUG oslo_concurrency.processutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:19 compute-0 NetworkManager[49805]: <info>  [1772273059.0677] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.071 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9fc6c1-42d5-4009-85a8-2c98d1fa3a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-12488d9f-f172-4b6d-a341-c4e121a1f177 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6daa29-4bd1-41ec-93c7-1dc0d8f8e7ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456635, 'reachable_time': 25550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268276, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d287b949-450e-4e1e-bd4e-12efb3387424]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456635, 'tstamp': 456635}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268277, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea15570-3d47-4474-ad54-5bbed22d3f2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456635, 'reachable_time': 25550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268278, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.143 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6202e8-7042-4b74-adae-8415685f01e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.192 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a37b209-4ed7-46d4-9503-32b60c4bdd91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.195 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 NetworkManager[49805]: <info>  [1772273059.1976] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.205 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 ovn_controller[146846]: 2026-02-28T10:04:19Z|00163|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.216 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.218 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd29cc9a-60f6-4ffb-a9a5-353b2e905561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.219 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:19.220 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1150524428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297271897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.301 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.331 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.335 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214607614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.554 243456 DEBUG oslo_concurrency.processutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.566 243456 DEBUG nova.compute.provider_tree [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:19 compute-0 podman[268368]: 2026-02-28 10:04:19.579668125 +0000 UTC m=+0.068659821 container create daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.603 243456 DEBUG nova.scheduler.client.report [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.620 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-changed-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.620 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Refreshing instance network info cache due to event network-changed-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.621 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.621 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.622 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Refreshing network info cache for port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:19 compute-0 podman[268368]: 2026-02-28 10:04:19.5343208 +0000 UTC m=+0.023312476 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:19 compute-0 systemd[1]: Started libpod-conmon-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope.
Feb 28 10:04:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.646 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd5eb1ee2da379ebf68d6e75d71e29d19c55cfc0b19947902e32f6aaa6684a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:19 compute-0 podman[268368]: 2026-02-28 10:04:19.665777736 +0000 UTC m=+0.154769442 container init daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.667 243456 INFO nova.scheduler.client.report [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Deleted allocations for instance 95805a4e-8bc0-47e3-981a-dfe27127a270
Feb 28 10:04:19 compute-0 podman[268368]: 2026-02-28 10:04:19.672280719 +0000 UTC m=+0.161272395 container start daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:04:19 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : New worker (268391) forked
Feb 28 10:04:19 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : Loading success.
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.741 243456 DEBUG oslo_concurrency.lockutils [None req-b697c3e3-f4dd-497e-943e-be748267fe92 7a44c3f76b144221b23743a554a4a839 1a4f47b449434c708378388c3e76610e - - default default] Lock "95805a4e-8bc0-47e3-981a-dfe27127a270" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Feb 28 10:04:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.879 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.881 243456 DEBUG nova.virt.libvirt.vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:14Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.882 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.882 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.883 243456 DEBUG nova.objects.instance [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.903 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <uuid>9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</uuid>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <name>instance-0000001d</name>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-1083426510</nova:name>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:18</nova:creationTime>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <nova:port uuid="2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd">
Feb 28 10:04:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="serial">9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="uuid">9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk">
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config">
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3a:a0:83"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <target dev="tap2d6bdb4d-0d"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/console.log" append="off"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.907 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Preparing to wait for external event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.908 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.909 243456 DEBUG nova.virt.libvirt.vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:14Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.909 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.910 243456 DEBUG nova.network.os_vif_util [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.910 243456 DEBUG os_vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.911 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d6bdb4d-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d6bdb4d-0d, col_values=(('external_ids', {'iface-id': '2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:a0:83', 'vm-uuid': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 NetworkManager[49805]: <info>  [1772273059.9171] manager: (tap2d6bdb4d-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.921 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.922 243456 INFO os_vif [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d')
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.975 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.976 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.976 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:3a:a0:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.977 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Using config drive
Feb 28 10:04:19 compute-0 nova_compute[243452]: 2026-02-28 10:04:19.997 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.123 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273060.1229281, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.123 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Started (Lifecycle Event)
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.162 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.167 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273060.1249266, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.167 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Paused (Lifecycle Event)
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.192 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.195 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:20 compute-0 nova_compute[243452]: 2026-02-28 10:04:20.222 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/297271897' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1214607614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:20 compute-0 ceph-mon[76304]: pgmap v1052: 305 pgs: 305 active+clean; 355 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Feb 28 10:04:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1495951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.465 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Creating config drive at /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.473 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa8um2us5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.603 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa8um2us5" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.626 243456 DEBUG nova.storage.rbd_utils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.629 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.6462] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.6471] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00164|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00165|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.763 243456 DEBUG oslo_concurrency.processutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.764 243456 INFO nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deleting local config drive /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab/disk.config because it was imported into RBD.
Feb 28 10:04:21 compute-0 kernel: tap2d6bdb4d-0d: entered promiscuous mode
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.8079] manager: (tap2d6bdb4d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Feb 28 10:04:21 compute-0 systemd-udevd[268464]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00166|binding|INFO|Claiming lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for this chassis.
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00167|binding|INFO|2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd: Claiming fa:16:3e:3a:a0:83 10.100.0.10
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.8214] device (tap2d6bdb4d-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.8229] device (tap2d6bdb4d-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.821 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:a0:83 10.100.0.10'], port_security=['fa:16:3e:3a:a0:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.823 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00168|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd ovn-installed in OVS
Feb 28 10:04:21 compute-0 ovn_controller[146846]: 2026-02-28T10:04:21Z|00169|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd up in Southbound
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.835 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89ad5daa-984a-41f8-af88-e451602a85e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.835 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3833c839-952e-43f9-87ec-0d911af811f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdc0972-f51a-421e-9126-77b5cb7616d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.848 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b95e0303-301d-4725-a9f8-139735686b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.6 MiB/s wr, 314 op/s
Feb 28 10:04:21 compute-0 systemd-machined[209480]: New machine qemu-33-instance-0000001d.
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4935da9e-5595-4672-9f65-6cdc821cff9e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.897 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updated VIF entry in instance network info cache for port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.901 243456 DEBUG nova.network.neutron [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [{"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.901 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb33171-ba62-4d95-aae4-f20e9ffc3225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 NetworkManager[49805]: <info>  [1772273061.9118] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.914 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52f55688-a93d-4844-ac3c-8d34b8119207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 systemd-udevd[268519]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.926 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.927 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c38e4584-8a86-41a3-bc10-2a35205cf7c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] No waiting events found dispatching network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.928 243456 WARNING nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received unexpected event network-vif-plugged-b9c5316d-0f6b-4a56-8273-692ee1492259 for instance with vm_state deleted and task_state None.
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Received event network-vif-deleted-b9c5316d-0f6b-4a56-8273-692ee1492259 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Received event network-vif-deleted-dca369b1-9f27-4c57-9e18-4a62a8bd95e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.929 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.930 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Processing event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.931 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.932 243456 DEBUG oslo_concurrency.lockutils [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.933 243456 DEBUG nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] No waiting events found dispatching network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.933 243456 WARNING nova.compute.manager [req-4235c171-1ab7-4639-81f6-1fa45df0ca89 req-b45b0d91-94db-4802-83a0-0a2c2192efba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received unexpected event network-vif-plugged-3c6e6f23-d681-47b4-a8e5-474ce94e984d for instance with vm_state building and task_state spawning.
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.934 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.948 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273061.9481177, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Resumed (Lifecycle Event)
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.951 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.957 243456 INFO nova.virt.libvirt.driver [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance spawned successfully.
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.957 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[339f27ee-68fb-4f49-ab1a-b964d9c671f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:21.968 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90504454-a43c-482e-8203-51a38208d360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.979 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.987 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.993 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.994 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.995 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.996 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.997 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:21 compute-0 nova_compute[243452]: 2026-02-28 10:04:21.997 243456 DEBUG nova.virt.libvirt.driver [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:22 compute-0 NetworkManager[49805]: <info>  [1772273062.0026] device (tap3a8395bc-d0): carrier: link connected
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.008 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4b173214-822d-4789-bb40-fa33ca6e0861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb927f5-585e-44f3-b6b1-22728a73cf17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456928, 'reachable_time': 33733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268555, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bc4ab5-9496-4e46-a4cc-1a2eb6bfede4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456928, 'tstamp': 456928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268556, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.049 243456 INFO nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 8.73 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.051 243456 DEBUG nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e813fd3-64ba-46b9-a16b-e015559e130f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456928, 'reachable_time': 33733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268557, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[864a74f2-4a6c-4712-a9f2-381164eb001d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.115 243456 INFO nova.compute.manager [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 9.85 seconds to build instance.
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.131 243456 DEBUG oslo_concurrency.lockutils [None req-463d7544-88b9-4e8a-995a-d114b3689baf 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.135 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[117c70fe-8ddb-4fcc-a1f0-4a7f9a0e427c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.137 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.137 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.138 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:22 compute-0 NetworkManager[49805]: <info>  [1772273062.1403] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:22 compute-0 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.143 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:22 compute-0 ovn_controller[146846]: 2026-02-28T10:04:22Z|00170|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.145 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.155 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49116c3b-6dcd-4fff-b08f-2d9392bd1b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.156 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:22.157 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.270 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273062.2703502, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.271 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Started (Lifecycle Event)
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.293 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.296 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273062.2705956, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.296 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Paused (Lifecycle Event)
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.318 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.338 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:22 compute-0 podman[268631]: 2026-02-28 10:04:22.566845214 +0000 UTC m=+0.089054975 container create 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:04:22 compute-0 podman[268631]: 2026-02-28 10:04:22.517370063 +0000 UTC m=+0.039579854 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:22 compute-0 systemd[1]: Started libpod-conmon-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope.
Feb 28 10:04:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.631 243456 DEBUG nova.compute.manager [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.632 243456 DEBUG nova.compute.manager [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-76d5199d-5d1e-4198-8780-c2537175a2be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.632 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.633 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:22 compute-0 nova_compute[243452]: 2026-02-28 10:04:22.633 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f3bfc33757300e8d509d5547a7828aa30ca0c8b5e0dd5d94a856da1a162672/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:22 compute-0 podman[268631]: 2026-02-28 10:04:22.66451952 +0000 UTC m=+0.186729341 container init 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:04:22 compute-0 podman[268631]: 2026-02-28 10:04:22.669325255 +0000 UTC m=+0.191535036 container start 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 10:04:22 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : New worker (268652) forked
Feb 28 10:04:22 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : Loading success.
Feb 28 10:04:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:04:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.6 total, 600.0 interval
                                           Cumulative writes: 14K writes, 55K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 4217 syncs, 3.33 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6856 writes, 25K keys, 6856 commit groups, 1.0 writes per commit group, ingest: 26.31 MB, 0.04 MB/s
                                           Interval WAL: 6856 writes, 2771 syncs, 2.47 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:04:22 compute-0 ceph-mon[76304]: pgmap v1053: 305 pgs: 305 active+clean; 292 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.6 MiB/s wr, 314 op/s
Feb 28 10:04:23 compute-0 nova_compute[243452]: 2026-02-28 10:04:23.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 257 op/s
Feb 28 10:04:24 compute-0 podman[268662]: 2026-02-28 10:04:24.126187072 +0000 UTC m=+0.067972102 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:04:24 compute-0 podman[268661]: 2026-02-28 10:04:24.15991088 +0000 UTC m=+0.098803958 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.725 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273049.7249467, 400869d5-7369-466b-970e-ac7e3f4e2e4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.726 243456 INFO nova.compute.manager [-] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] VM Stopped (Lifecycle Event)
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.755 243456 DEBUG nova.compute.manager [None req-a19e70e5-232c-4399-80d0-abc469478475 - - - - - -] [instance: 400869d5-7369-466b-970e-ac7e3f4e2e4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.917 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:24 compute-0 ceph-mon[76304]: pgmap v1054: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 257 op/s
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.966 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Processing event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.967 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG oslo_concurrency.lockutils [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 DEBUG nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.968 243456 WARNING nova.compute.manager [req-ff96db3c-77e4-44af-9375-46b506e3c0b9 req-4ff36849-bcda-40a0-b247-007d58c2d73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received unexpected event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with vm_state building and task_state spawning.
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.969 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.974 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273064.9746413, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Resumed (Lifecycle Event)
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.977 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.983 243456 INFO nova.virt.libvirt.driver [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance spawned successfully.
Feb 28 10:04:24 compute-0 nova_compute[243452]: 2026-02-28 10:04:24.983 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.008 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.014 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.018 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.019 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.019 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.020 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.020 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.021 243456 DEBUG nova.virt.libvirt.driver [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.056 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.094 243456 INFO nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 10.88 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.094 243456 DEBUG nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.183 243456 INFO nova.compute.manager [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 12.52 seconds to build instance.
Feb 28 10:04:25 compute-0 nova_compute[243452]: 2026-02-28 10:04:25.200 243456 DEBUG oslo_concurrency.lockutils [None req-7ca88cc1-858a-4dd8-befd-e13eb764a96a 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 310 op/s
Feb 28 10:04:26 compute-0 nova_compute[243452]: 2026-02-28 10:04:26.077 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 76d5199d-5d1e-4198-8780-c2537175a2be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:26 compute-0 nova_compute[243452]: 2026-02-28 10:04:26.078 243456 DEBUG nova.network.neutron [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:26 compute-0 nova_compute[243452]: 2026-02-28 10:04:26.099 243456 DEBUG oslo_concurrency.lockutils [req-42c09cd6-5d4d-4d03-90e0-d4d5c6443c7a req-c84ad94b-1e1f-4a32-abd3-c02056ce2d91 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:26 compute-0 ceph-mon[76304]: pgmap v1055: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 310 op/s
Feb 28 10:04:27 compute-0 ovn_controller[146846]: 2026-02-28T10:04:27Z|00171|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:04:27 compute-0 ovn_controller[146846]: 2026-02-28T10:04:27Z|00172|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 10:04:27 compute-0 ovn_controller[146846]: 2026-02-28T10:04:27Z|00173|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.260 243456 INFO nova.compute.manager [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Pausing
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.261 243456 DEBUG nova.objects.instance [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'flavor' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.315 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273067.3156853, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.316 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Paused (Lifecycle Event)
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.318 243456 DEBUG nova.compute.manager [None req-20483f1a-0a6b-49b9-b66f-5564bed44b79 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.396 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:27 compute-0 nova_compute[243452]: 2026-02-28 10:04:27.400 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 330 op/s
Feb 28 10:04:28 compute-0 nova_compute[243452]: 2026-02-28 10:04:28.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:04:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 43K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2947 syncs, 3.61 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4966 writes, 19K keys, 4966 commit groups, 1.0 writes per commit group, ingest: 20.73 MB, 0.03 MB/s
                                           Interval WAL: 4966 writes, 2044 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:04:28 compute-0 ceph-mon[76304]: pgmap v1056: 305 pgs: 305 active+clean; 293 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 330 op/s
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:04:29
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log']
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:04:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 302 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Feb 28 10:04:29 compute-0 ovn_controller[146846]: 2026-02-28T10:04:29Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 10:04:29 compute-0 ovn_controller[146846]: 2026-02-28T10:04:29Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 10:04:29 compute-0 nova_compute[243452]: 2026-02-28 10:04:29.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.375 243456 DEBUG nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.423 243456 INFO nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] instance snapshotting
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.423 243456 WARNING nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] trying to snapshot a non-running instance: (state: 3 expected: 1)
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:04:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.829 243456 INFO nova.virt.libvirt.driver [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Beginning live snapshot process
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.897 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273055.895925, 95805a4e-8bc0-47e3-981a-dfe27127a270 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.898 243456 INFO nova.compute.manager [-] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] VM Stopped (Lifecycle Event)
Feb 28 10:04:30 compute-0 ceph-mon[76304]: pgmap v1057: 305 pgs: 305 active+clean; 302 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.975 243456 DEBUG nova.compute.manager [None req-eadb4ed0-4faf-4d96-8ae9-87afac8aff3a - - - - - -] [instance: 95805a4e-8bc0-47e3-981a-dfe27127a270] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:30 compute-0 nova_compute[243452]: 2026-02-28 10:04:30.981 243456 DEBUG nova.virt.libvirt.imagebackend [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:04:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.264 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273056.2627988, c38e4584-8a86-41a3-bc10-2a35205cf7c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.265 243456 INFO nova.compute.manager [-] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] VM Stopped (Lifecycle Event)
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.293 243456 DEBUG nova.compute.manager [None req-f4bc5179-8b06-4ca8-a484-e00586cc4c1b - - - - - -] [instance: c38e4584-8a86-41a3-bc10-2a35205cf7c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.388 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(c3430a45051b47d7a8bcacb9cd6239b0) on rbd image(9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.622 243456 DEBUG nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:31 compute-0 nova_compute[243452]: 2026-02-28 10:04:31.671 243456 INFO nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] instance snapshotting
Feb 28 10:04:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 307 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 304 op/s
Feb 28 10:04:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 28 10:04:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 28 10:04:31 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.013 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk@c3430a45051b47d7a8bcacb9cd6239b0 to images/15834e2d-0d53-4b8a-8f0b-345fe662dcbf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.058 243456 INFO nova.virt.libvirt.driver [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Beginning live snapshot process
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.137 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/15834e2d-0d53-4b8a-8f0b-345fe662dcbf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.265 243456 DEBUG nova.virt.libvirt.imagebackend [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.367 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(c3430a45051b47d7a8bcacb9cd6239b0) on rbd image(9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:04:32 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 28 10:04:32 compute-0 nova_compute[243452]: 2026-02-28 10:04:32.451 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] creating snapshot(719713618c2a4a64a9cf133ff544640b) on rbd image(6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:04:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 28 10:04:32 compute-0 ceph-mon[76304]: pgmap v1058: 305 pgs: 305 active+clean; 307 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 304 op/s
Feb 28 10:04:32 compute-0 ceph-mon[76304]: osdmap e137: 3 total, 3 up, 3 in
Feb 28 10:04:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 28 10:04:32 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 28 10:04:33 compute-0 nova_compute[243452]: 2026-02-28 10:04:33.022 243456 DEBUG nova.storage.rbd_utils [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(15834e2d-0d53-4b8a-8f0b-345fe662dcbf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:04:33 compute-0 nova_compute[243452]: 2026-02-28 10:04:33.095 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] cloning vms/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk@719713618c2a4a64a9cf133ff544640b to images/368a95dc-f2bf-43b2-80e0-562f69f20423 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:04:33 compute-0 nova_compute[243452]: 2026-02-28 10:04:33.220 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] flattening images/368a95dc-f2bf-43b2-80e0-562f69f20423 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:04:33 compute-0 nova_compute[243452]: 2026-02-28 10:04:33.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:33 compute-0 nova_compute[243452]: 2026-02-28 10:04:33.543 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] removing snapshot(719713618c2a4a64a9cf133ff544640b) on rbd image(6d74f9b9-edf7-4d81-b139-cb664b2ab68c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:04:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 339 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Feb 28 10:04:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 28 10:04:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 28 10:04:33 compute-0 ceph-mon[76304]: osdmap e138: 3 total, 3 up, 3 in
Feb 28 10:04:34 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 28 10:04:34 compute-0 nova_compute[243452]: 2026-02-28 10:04:34.022 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] creating snapshot(snap) on rbd image(368a95dc-f2bf-43b2-80e0-562f69f20423) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:04:34 compute-0 nova_compute[243452]: 2026-02-28 10:04:34.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 28 10:04:34 compute-0 ceph-mon[76304]: pgmap v1061: 305 pgs: 305 active+clean; 339 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Feb 28 10:04:34 compute-0 ceph-mon[76304]: osdmap e139: 3 total, 3 up, 3 in
Feb 28 10:04:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 28 10:04:35 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 28 10:04:35 compute-0 ovn_controller[146846]: 2026-02-28T10:04:35Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:58:99 10.100.0.8
Feb 28 10:04:35 compute-0 ovn_controller[146846]: 2026-02-28T10:04:35Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:58:99 10.100.0.8
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 368a95dc-f2bf-43b2-80e0-562f69f20423
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.190 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.258 243456 DEBUG nova.storage.rbd_utils [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] removing snapshot(snap) on rbd image(368a95dc-f2bf-43b2-80e0-562f69f20423) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.453 243456 INFO nova.virt.libvirt.driver [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Snapshot image upload complete
Feb 28 10:04:35 compute-0 nova_compute[243452]: 2026-02-28 10:04:35.453 243456 INFO nova.compute.manager [None req-214b31c5-4876-4114-9013-d19ecadb0e87 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 5.03 seconds to snapshot the instance on the hypervisor.
Feb 28 10:04:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 440 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 21 MiB/s wr, 441 op/s
Feb 28 10:04:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 28 10:04:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 28 10:04:36 compute-0 ceph-mon[76304]: osdmap e140: 3 total, 3 up, 3 in
Feb 28 10:04:36 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 28 10:04:36 compute-0 nova_compute[243452]: 2026-02-28 10:04:36.323 243456 WARNING nova.compute.manager [None req-fa474c0c-00a0-4102-90d5-8d23947b33dc 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Image not found during snapshot: nova.exception.ImageNotFound: Image 368a95dc-f2bf-43b2-80e0-562f69f20423 could not be found.
Feb 28 10:04:36 compute-0 nova_compute[243452]: 2026-02-28 10:04:36.542 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 28 10:04:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 28 10:04:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 28 10:04:37 compute-0 ceph-mon[76304]: pgmap v1064: 305 pgs: 305 active+clean; 440 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 21 MiB/s wr, 441 op/s
Feb 28 10:04:37 compute-0 ceph-mon[76304]: osdmap e141: 3 total, 3 up, 3 in
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.610 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.611 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.612 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.612 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.613 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.615 243456 INFO nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Terminating instance
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.617 243456 DEBUG nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:37 compute-0 kernel: tap2d6bdb4d-0d (unregistering): left promiscuous mode
Feb 28 10:04:37 compute-0 NetworkManager[49805]: <info>  [1772273077.6661] device (tap2d6bdb4d-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00174|binding|INFO|Releasing lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd from this chassis (sb_readonly=0)
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00175|binding|INFO|Setting lport 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd down in Southbound
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00176|binding|INFO|Removing iface tap2d6bdb4d-0d ovn-installed in OVS
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.683 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:a0:83 10.100.0.10'], port_security=['fa:16:3e:3a:a0:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.685 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.688 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.690 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[142597f6-a84c-4121-b6ba-6b67ad8d9d19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.690 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 10:04:37 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb 28 10:04:37 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 2.807s CPU time.
Feb 28 10:04:37 compute-0 systemd-machined[209480]: Machine qemu-33-instance-0000001d terminated.
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : haproxy version is 2.8.14-c23fe91
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [NOTICE]   (268650) : path to executable is /usr/sbin/haproxy
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : Exiting Master process...
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : Exiting Master process...
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [ALERT]    (268650) : Current worker (268652) exited with code 143 (Terminated)
Feb 28 10:04:37 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[268646]: [WARNING]  (268650) : All workers exited. Exiting... (0)
Feb 28 10:04:37 compute-0 systemd[1]: libpod-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope: Deactivated successfully.
Feb 28 10:04:37 compute-0 podman[269048]: 2026-02-28 10:04:37.838793706 +0000 UTC m=+0.047527867 container died 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.859 243456 INFO nova.virt.libvirt.driver [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Instance destroyed successfully.
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.859 243456 DEBUG nova.objects.instance [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 434 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 21 MiB/s wr, 573 op/s
Feb 28 10:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9-userdata-shm.mount: Deactivated successfully.
Feb 28 10:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2f3bfc33757300e8d509d5547a7828aa30ca0c8b5e0dd5d94a856da1a162672-merged.mount: Deactivated successfully.
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.874 243456 DEBUG nova.virt.libvirt.vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1083426510',display_name='tempest-ImagesTestJSON-server-1083426510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1083426510',id=29,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-dl71n395',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:35Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.875 243456 DEBUG nova.network.os_vif_util [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "address": "fa:16:3e:3a:a0:83", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d6bdb4d-0d", "ovs_interfaceid": "2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.876 243456 DEBUG nova.network.os_vif_util [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.876 243456 DEBUG os_vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d6bdb4d-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.888 243456 INFO os_vif [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:a0:83,bridge_name='br-int',has_traffic_filtering=True,id=2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d6bdb4d-0d')
Feb 28 10:04:37 compute-0 podman[269048]: 2026-02-28 10:04:37.889685127 +0000 UTC m=+0.098419278 container cleanup 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.905 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.906 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.909 243456 INFO nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Terminating instance
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.910 243456 DEBUG nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:37 compute-0 systemd[1]: libpod-conmon-8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9.scope: Deactivated successfully.
Feb 28 10:04:37 compute-0 kernel: tap3c6e6f23-d6 (unregistering): left promiscuous mode
Feb 28 10:04:37 compute-0 podman[269094]: 2026-02-28 10:04:37.960730804 +0000 UTC m=+0.048553196 container remove 8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:04:37 compute-0 NetworkManager[49805]: <info>  [1772273077.9626] device (tap3c6e6f23-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb47c55-c025-4ade-9d18-bb9bc73a1622]: (4, ('Sat Feb 28 10:04:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9)\n8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9\nSat Feb 28 10:04:37 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9)\n8979ecc4d03e40e1683bf2d8263c8af8bb32b524cc9d204a9e635a573442a4f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b89bd1-5afb-469e-85a7-35a8dc7e8134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00177|binding|INFO|Releasing lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d from this chassis (sb_readonly=0)
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00178|binding|INFO|Setting lport 3c6e6f23-d681-47b4-a8e5-474ce94e984d down in Southbound
Feb 28 10:04:37 compute-0 ovn_controller[146846]: 2026-02-28T10:04:37Z|00179|binding|INFO|Removing iface tap3c6e6f23-d6 ovn-installed in OVS
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.974 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.981 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:58:99 10.100.0.8'], port_security=['fa:16:3e:32:58:99 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6d74f9b9-edf7-4d81-b139-cb664b2ab68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3c6e6f23-d681-47b4-a8e5-474ce94e984d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:37 compute-0 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 nova_compute[243452]: 2026-02-28 10:04:37.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:37.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7637c9-9be2-4519-b8f3-9d3be9802373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb 28 10:04:38 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 12.863s CPU time.
Feb 28 10:04:38 compute-0 systemd-machined[209480]: Machine qemu-32-instance-0000001c terminated.
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.018 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG oslo_concurrency.lockutils [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.019 243456 DEBUG nova.compute.manager [req-bb029be2-25bd-4172-b975-2dede82c0dc7 req-ac0072b4-708f-4517-b23a-b3170907cf9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-unplugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[03980b9d-0294-45e8-ad52-6873d23aed4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c90f60b6-f185-4871-bc8d-8494787177a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ceph-mon[76304]: osdmap e142: 3 total, 3 up, 3 in
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.044 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcbfb4d-9af9-420f-8dde-86500f999ddb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456918, 'reachable_time': 38734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269127, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.049 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.049 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ade54b44-1da6-4b3a-aaa3-5fb4c6b44d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.051 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3c6e6f23-d681-47b4-a8e5-474ce94e984d in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.052 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.053 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f02bc157-300b-4420-8392-3ddfabf91b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.053 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.150 243456 INFO nova.virt.libvirt.driver [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Instance destroyed successfully.
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.151 243456 DEBUG nova.objects.instance [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid 6d74f9b9-edf7-4d81-b139-cb664b2ab68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.170 243456 INFO nova.virt.libvirt.driver [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deleting instance files /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_del
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.171 243456 INFO nova.virt.libvirt.driver [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deletion of /var/lib/nova/instances/9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab_del complete
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.176 243456 DEBUG nova.virt.libvirt.vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-279405115',display_name='tempest-ImagesOneServerNegativeTestJSON-server-279405115',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-279405115',id=28,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-4l4iiwh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:36Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=6d74f9b9-edf7-4d81-b139-cb664b2ab68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.176 243456 DEBUG nova.network.os_vif_util [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "address": "fa:16:3e:32:58:99", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c6e6f23-d6", "ovs_interfaceid": "3c6e6f23-d681-47b4-a8e5-474ce94e984d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.177 243456 DEBUG nova.network.os_vif_util [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.177 243456 DEBUG os_vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.179 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c6e6f23-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.185 243456 INFO os_vif [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:58:99,bridge_name='br-int',has_traffic_filtering=True,id=3c6e6f23-d681-47b4-a8e5-474ce94e984d,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c6e6f23-d6')
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : haproxy version is 2.8.14-c23fe91
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [NOTICE]   (268389) : path to executable is /usr/sbin/haproxy
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : Exiting Master process...
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : Exiting Master process...
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [ALERT]    (268389) : Current worker (268391) exited with code 143 (Terminated)
Feb 28 10:04:38 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[268385]: [WARNING]  (268389) : All workers exited. Exiting... (0)
Feb 28 10:04:38 compute-0 systemd[1]: libpod-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope: Deactivated successfully.
Feb 28 10:04:38 compute-0 podman[269155]: 2026-02-28 10:04:38.215525427 +0000 UTC m=+0.043803762 container died daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.245 243456 INFO nova.compute.manager [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.246 243456 DEBUG oslo.service.loopingcall [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.246 243456 DEBUG nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.247 243456 DEBUG nova.network.neutron [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd5eb1ee2da379ebf68d6e75d71e29d19c55cfc0b19947902e32f6aaa6684a7-merged.mount: Deactivated successfully.
Feb 28 10:04:38 compute-0 podman[269155]: 2026-02-28 10:04:38.257081206 +0000 UTC m=+0.085359551 container cleanup daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:04:38 compute-0 systemd[1]: libpod-conmon-daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f.scope: Deactivated successfully.
Feb 28 10:04:38 compute-0 podman[269205]: 2026-02-28 10:04:38.326441546 +0000 UTC m=+0.047222659 container remove daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f19fd591-e433-4ed6-85cf-31c5c3992131]: (4, ('Sat Feb 28 10:04:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f)\ndaf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f\nSat Feb 28 10:04:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (daf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f)\ndaf946d1a3fab4dbc1b14b8f604449ac098685e3b17aac4dc0891ad671a8bb0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.334 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[316307ee-3ba8-432b-b087-6b730f97e0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.335 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[70de7f51-6769-477b-a17c-03057d6c76ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.363 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d68587b-fd1a-4d47-bb47-8888b17152e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10384ab0-e08a-4384-a6a4-5dc5353a9bc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.386 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9fda02b-75cf-4883-b68d-dabf4dd96e7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456629, 'reachable_time': 39722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269221, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.389 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:04:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:38.390 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f67a4fd3-7159-4355-8956-cd50123361ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.437 243456 INFO nova.virt.libvirt.driver [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deleting instance files /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_del
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.438 243456 INFO nova.virt.libvirt.driver [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deletion of /var/lib/nova/instances/6d74f9b9-edf7-4d81-b139-cb664b2ab68c_del complete
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.515 243456 INFO nova.compute.manager [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.515 243456 DEBUG oslo.service.loopingcall [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.516 243456 DEBUG nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:38 compute-0 nova_compute[243452]: 2026-02-28 10:04:38.516 243456 DEBUG nova.network.neutron [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 10:04:39 compute-0 ceph-mon[76304]: pgmap v1067: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 434 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 21 MiB/s wr, 573 op/s
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.393 243456 DEBUG nova.network.neutron [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.414 243456 INFO nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Took 1.17 seconds to deallocate network for instance.
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.463 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.464 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.540 243456 DEBUG nova.compute.manager [req-351c88b1-d05e-401f-b16a-b4c2f18e2a8f req-554643f3-228b-4035-9789-9a01987c9f3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-deleted-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.543 243456 DEBUG nova.network.neutron [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.557 243456 INFO nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Took 1.04 seconds to deallocate network for instance.
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.607 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.636 243456 DEBUG oslo_concurrency.processutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 362 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 15 MiB/s wr, 448 op/s
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.939 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.939 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:39 compute-0 nova_compute[243452]: 2026-02-28 10:04:39.940 243456 DEBUG nova.objects.instance [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277498819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.213 243456 DEBUG oslo_concurrency.processutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.219 243456 DEBUG nova.compute.provider_tree [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.234 243456 DEBUG nova.scheduler.client.report [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.255 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.258 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.277 243456 INFO nova.scheduler.client.report [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.316 243456 DEBUG oslo_concurrency.processutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.346 243456 DEBUG oslo_concurrency.lockutils [None req-cef51715-285a-4478-b77c-2ed9c14fac88 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.379 243456 DEBUG nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.380 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.380 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 DEBUG oslo_concurrency.lockutils [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 DEBUG nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] No waiting events found dispatching network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.381 243456 WARNING nova.compute.manager [req-c9a4ea16-36f1-40eb-bd01-8de11aace7b9 req-22377905-15e4-40b7-a9bd-d70c4839986f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Received unexpected event network-vif-plugged-2d6bdb4d-0d3f-4c33-9b1a-9a2652eb7bfd for instance with vm_state deleted and task_state None.
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.551 243456 DEBUG nova.objects.instance [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.570 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.607 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.607 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.626 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015381725767343905 of space, bias 1.0, pg target 0.46145177302031715 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002975076423412019 of space, bias 1.0, pg target 0.8925229270236057 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.584590305021096e-07 of space, bias 4.0, pg target 0.0010301508366025315 quantized to 16 (current 16)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:04:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.707 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.743 243456 DEBUG nova.policy [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634590387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.934 243456 DEBUG oslo_concurrency.processutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.949 243456 DEBUG nova.compute.provider_tree [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:40 compute-0 nova_compute[243452]: 2026-02-28 10:04:40.967 243456 DEBUG nova.scheduler.client.report [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.009 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.013 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.022 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.022 243456 INFO nova.compute.claims [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.035 243456 INFO nova.scheduler.client.report [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance 6d74f9b9-edf7-4d81-b139-cb664b2ab68c
Feb 28 10:04:41 compute-0 ceph-mon[76304]: pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 362 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 15 MiB/s wr, 448 op/s
Feb 28 10:04:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1277498819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3634590387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.114 243456 DEBUG oslo_concurrency.lockutils [None req-3d9183d8-7208-41b7-a050-1373fee6e08d 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "6d74f9b9-edf7-4d81-b139-cb664b2ab68c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.166 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.324 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully created port: 11bba824-bf60-4206-b70d-fd5035009fbf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256323313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.716 243456 DEBUG nova.compute.manager [req-aa64e975-95d4-4bfe-a8b5-e1a3017d6f4f req-6af31a91-0886-44d6-85db-f4bc94701733 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Received event network-vif-deleted-3c6e6f23-d681-47b4-a8e5-474ce94e984d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.738 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.747 243456 DEBUG nova.compute.provider_tree [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.765 243456 DEBUG nova.scheduler.client.report [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.794 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.795 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.843 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.844 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 279 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.9 MiB/s wr, 356 op/s
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.864 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.885 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.970 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.971 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:04:41 compute-0 nova_compute[243452]: 2026-02-28 10:04:41.971 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating image(s)
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.002 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.039 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1256323313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.074 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.079 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.105 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Successfully updated port: 11bba824-bf60-4206-b70d-fd5035009fbf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.110 243456 DEBUG nova.policy [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.134 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.136 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.136 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.164 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.165 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.166 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.166 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.192 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.196 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.422 243456 WARNING nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.432 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.494 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.577 243456 DEBUG nova.objects.instance [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.590 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.590 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Ensure instance console log exists: /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:42 compute-0 nova_compute[243452]: 2026-02-28 10:04:42.591 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 28 10:04:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 28 10:04:42 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 28 10:04:43 compute-0 ceph-mon[76304]: pgmap v1069: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 279 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.9 MiB/s wr, 356 op/s
Feb 28 10:04:43 compute-0 ceph-mon[76304]: osdmap e143: 3 total, 3 up, 3 in
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.086 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Successfully created port: 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.836 243456 DEBUG nova.compute.manager [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-changed-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.837 243456 DEBUG nova.compute.manager [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing instance network info cache due to event network-changed-11bba824-bf60-4206-b70d-fd5035009fbf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:43 compute-0 nova_compute[243452]: 2026-02-28 10:04:43.837 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 244 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 267 op/s
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.141 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Successfully updated port: 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.157 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.158 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.158 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.801 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.841 243456 DEBUG nova.network.neutron [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.872 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.873 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.873 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Refreshing network info cache for port 11bba824-bf60-4206-b70d-fd5035009fbf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.876 243456 DEBUG nova.virt.libvirt.vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.877 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.877 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG os_vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.879 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.881 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11bba824-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.882 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11bba824-bf, col_values=(('external_ids', {'iface-id': '11bba824-bf60-4206-b70d-fd5035009fbf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:32:9d', 'vm-uuid': 'f1026535-7729-43d0-8027-dd71ef14dfbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:44 compute-0 NetworkManager[49805]: <info>  [1772273084.8845] manager: (tap11bba824-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.891 243456 INFO os_vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.892 243456 DEBUG nova.virt.libvirt.vif [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.892 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.893 243456 DEBUG nova.network.os_vif_util [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.895 243456 DEBUG nova.virt.libvirt.guest [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:04:44 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 10:04:44 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:04:44 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:44 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:04:44 compute-0 nova_compute[243452]:   <target dev="tap11bba824-bf"/>
Feb 28 10:04:44 compute-0 nova_compute[243452]: </interface>
Feb 28 10:04:44 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:04:44 compute-0 kernel: tap11bba824-bf: entered promiscuous mode
Feb 28 10:04:44 compute-0 ovn_controller[146846]: 2026-02-28T10:04:44Z|00180|binding|INFO|Claiming lport 11bba824-bf60-4206-b70d-fd5035009fbf for this chassis.
Feb 28 10:04:44 compute-0 ovn_controller[146846]: 2026-02-28T10:04:44Z|00181|binding|INFO|11bba824-bf60-4206-b70d-fd5035009fbf: Claiming fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 NetworkManager[49805]: <info>  [1772273084.9095] manager: (tap11bba824-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.921 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:32:9d 10.100.0.14'], port_security=['fa:16:3e:d6:32:9d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=11bba824-bf60-4206-b70d-fd5035009fbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:44 compute-0 ovn_controller[146846]: 2026-02-28T10:04:44Z|00182|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf ovn-installed in OVS
Feb 28 10:04:44 compute-0 ovn_controller[146846]: 2026-02-28T10:04:44Z|00183|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf up in Southbound
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.923 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 11bba824-bf60-4206-b70d-fd5035009fbf in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.926 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:44 compute-0 systemd-udevd[269461]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48fd7a0a-7658-4227-b06c-19cd2aada1a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:44 compute-0 NetworkManager[49805]: <info>  [1772273084.9545] device (tap11bba824-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:44 compute-0 NetworkManager[49805]: <info>  [1772273084.9561] device (tap11bba824-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:7a:bd:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:44 compute-0 nova_compute[243452]: 2026-02-28 10:04:44.984 243456 DEBUG nova.virt.libvirt.driver [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:d6:32:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.984 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b48b13db-96d5-43c6-ae29-0451677bfb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:44.988 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53790528-4225-4e2c-81d6-6c5b1670f4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.015 243456 DEBUG nova.virt.libvirt.guest [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:45 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 10:04:45 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:04:45 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:45 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:45 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:45 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aefeda72-d80a-47bf-bbfb-081345ca73f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.029 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78d93d01-13a6-4b66-a6ef-e989d754c185]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269468, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb221bd0-2389-4203-86b5-1ca59f4cb0e3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455970, 'tstamp': 455970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269469, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455972, 'tstamp': 455972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269469, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.044 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:45.048 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.054 243456 DEBUG oslo_concurrency.lockutils [None req-13d03cd3-3b70-42ab-a4d5-bcee6fb27691 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:45 compute-0 ceph-mon[76304]: pgmap v1071: 305 pgs: 305 active+clean; 244 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 267 op/s
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.267 243456 DEBUG nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.267 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG oslo_concurrency.lockutils [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 DEBUG nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.268 243456 WARNING nova.compute.manager [req-967478f5-ae5f-4733-a517-0d3190865f33 req-6474f5d9-e91b-4361-8b59-b9bd9f5766f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.
Feb 28 10:04:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:04:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:04:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:04:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:04:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 279 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 273 op/s
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.954 243456 DEBUG nova.compute.manager [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-changed-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.954 243456 DEBUG nova.compute.manager [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Refreshing instance network info cache due to event network-changed-29b5f82a-cfc3-4c87-aac9-8419af0bcf75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:45 compute-0 nova_compute[243452]: 2026-02-28 10:04:45.955 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:46 compute-0 ovn_controller[146846]: 2026-02-28T10:04:46Z|00184|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:04:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:04:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2723943377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.365 243456 DEBUG nova.network.neutron [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.398 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.399 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance network_info: |[{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.400 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.401 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Refreshing network info cache for port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.405 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start _get_guest_xml network_info=[{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.413 243456 WARNING nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.428 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.429 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.434 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.435 243456 DEBUG nova.virt.libvirt.host [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.436 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.436 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.437 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.438 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.438 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.439 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.440 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.440 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.441 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.441 243456 DEBUG nova.virt.hardware [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.446 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.637 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updated VIF entry in instance network info cache for port 11bba824-bf60-4206-b70d-fd5035009fbf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.638 243456 DEBUG nova.network.neutron [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:46 compute-0 nova_compute[243452]: 2026-02-28 10:04:46.655 243456 DEBUG oslo_concurrency.lockutils [req-07106fe3-254b-45d6-a435-9c686c01fc4f req-7de3e4ed-51d3-4edd-8067-2599c6155572 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569154026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.079 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.101 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:47 compute-0 ceph-mon[76304]: pgmap v1072: 305 pgs: 305 active+clean; 279 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 273 op/s
Feb 28 10:04:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2569154026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.107 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:47 compute-0 ovn_controller[146846]: 2026-02-28T10:04:47Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 10:04:47 compute-0 ovn_controller[146846]: 2026-02-28T10:04:47Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:32:9d 10.100.0.14
Feb 28 10:04:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 28 10:04:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208741869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 28 10:04:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.668 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.671 243456 DEBUG nova.virt.libvirt.vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:41Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.672 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.673 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.675 243456 DEBUG nova.objects.instance [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.696 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <uuid>ff5bf118-ea06-44c0-81f0-0a229162e1d8</uuid>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <name>instance-0000001e</name>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-52310168</nova:name>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:46</nova:creationTime>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <nova:port uuid="29b5f82a-cfc3-4c87-aac9-8419af0bcf75">
Feb 28 10:04:47 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="serial">ff5bf118-ea06-44c0-81f0-0a229162e1d8</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="uuid">ff5bf118-ea06-44c0-81f0-0a229162e1d8</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk">
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config">
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:29:2d:d1"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <target dev="tap29b5f82a-cf"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/console.log" append="off"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:47 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:47 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:47 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:47 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:47 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.698 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Preparing to wait for external event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.698 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.699 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.699 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.701 243456 DEBUG nova.virt.libvirt.vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:41Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.701 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.702 243456 DEBUG nova.network.os_vif_util [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.703 243456 DEBUG os_vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.711 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b5f82a-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.712 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29b5f82a-cf, col_values=(('external_ids', {'iface-id': '29b5f82a-cfc3-4c87-aac9-8419af0bcf75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:2d:d1', 'vm-uuid': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.714 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:47 compute-0 NetworkManager[49805]: <info>  [1772273087.7158] manager: (tap29b5f82a-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.723 243456 INFO os_vif [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf')
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.787 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.787 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.788 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:29:2d:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.789 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Using config drive
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.814 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.839 243456 DEBUG nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.839 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG oslo_concurrency.lockutils [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 DEBUG nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:47 compute-0 nova_compute[243452]: 2026-02-28 10:04:47.840 243456 WARNING nova.compute.manager [req-e2b4014f-de3b-428d-99b9-588d62f4c115 req-ffa2168c-6016-4a82-9f74-17e92c855da2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.
Feb 28 10:04:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 2.7 MiB/s wr, 122 op/s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.101 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.103 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:04:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1208741869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:48 compute-0 ceph-mon[76304]: osdmap e144: 3 total, 3 up, 3 in
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.121 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Creating config drive at /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.127 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphu52c29o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.255 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphu52c29o" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.290 243456 DEBUG nova.storage.rbd_utils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.295 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.456 243456 DEBUG oslo_concurrency.processutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.456 243456 INFO nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deleting local config drive /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8/disk.config because it was imported into RBD.
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.5091] manager: (tap29b5f82a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Feb 28 10:04:48 compute-0 kernel: tap29b5f82a-cf: entered promiscuous mode
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00185|binding|INFO|Claiming lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for this chassis.
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00186|binding|INFO|29b5f82a-cfc3-4c87-aac9-8419af0bcf75: Claiming fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00187|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 up in Southbound
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00188|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 ovn-installed in OVS
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.521 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:2d:d1 10.100.0.6'], port_security=['fa:16:3e:29:2d:d1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=29b5f82a-cfc3-4c87-aac9-8419af0bcf75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.524 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.527 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.540 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6fe706c-ad86-4e8b-8a7f-29296c86ef57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.542 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.544 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.544 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e262ebb-2a7f-423e-bed8-35f5260bcda6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.545 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd41417e-6e1a-4ef2-8d6c-a35460d4733a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 systemd-udevd[269607]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:48 compute-0 systemd-machined[209480]: New machine qemu-34-instance-0000001e.
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.557 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3547eb77-3235-42bf-8f71-49d4b2f353bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.5672] device (tap29b5f82a-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.5681] device (tap29b5f82a-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce1b97a-525c-4d64-92a8-3eb785750cb0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.603 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.604 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.614 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a07ae5-1517-4071-9b88-525f78a8f2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 systemd-udevd[269611]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.6207] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.619 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9286996-dd94-43d0-a321-f552d7d99d3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.632 243456 DEBUG nova.objects.instance [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.654 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6487ac31-3cd0-4c31-bb4b-e29670acc306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.658 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b46fd1-c028-4d95-8b14-1db6e08a0605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.661 243456 DEBUG nova.virt.libvirt.vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.662 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.663 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.669 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.671 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.676 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Attempting to detach device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.676 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <target dev="tap11bba824-bf"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </interface>
Feb 28 10:04:48 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.682 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.6840] device (tap3a8395bc-d0): carrier: link connected
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.686 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <name>instance-0000001b</name>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='tap76d5199d-5d'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:d6:32:9d'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='tap11bba824-bf'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </target>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </console>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:48 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.689 243456 INFO nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the persistent domain config.
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.690 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] (1/8): Attempting to detach device tap11bba824-bf with device alias net1 from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.690 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:d6:32:9d"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <target dev="tap11bba824-bf"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </interface>
Feb 28 10:04:48 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.693 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2c0f87-8de5-47bc-b551-9334f2c6ff29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aeef6c6e-7bfc-497b-808b-209727ca9b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459597, 'reachable_time': 37873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269639, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[739966ac-01ca-492a-8b89-4202792b423e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459597, 'tstamp': 459597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269642, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4219d219-8f99-4b98-8732-8be5096d6a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459597, 'reachable_time': 37873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269661, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 sudo[269640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:48 compute-0 sudo[269640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:48 compute-0 sudo[269640]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.784 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6ff305-8e7f-499a-83cc-6f7ed2a00a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 kernel: tap11bba824-bf (unregistering): left promiscuous mode
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.8175] device (tap11bba824-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00189|binding|INFO|Releasing lport 11bba824-bf60-4206-b70d-fd5035009fbf from this chassis (sb_readonly=0)
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00190|binding|INFO|Setting lport 11bba824-bf60-4206-b70d-fd5035009fbf down in Southbound
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00191|binding|INFO|Removing iface tap11bba824-bf ovn-installed in OVS
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.829 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updated VIF entry in instance network info cache for port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:48 compute-0 sudo[269669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.832 243456 DEBUG nova.network.neutron [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [{"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:48 compute-0 sudo[269669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.839 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772273088.831463, f1026535-7729-43d0-8027-dd71ef14dfbf => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.841 243456 DEBUG nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Start waiting for the detach event from libvirt for device tap11bba824-bf with device alias net1 for instance f1026535-7729-43d0-8027-dd71ef14dfbf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.841 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.849 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <name>instance-0000001b</name>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:45</nova:creationTime>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:port uuid="11bba824-bf60-4206-b70d-fd5035009fbf">
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.850 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e864e97a-4056-48c7-b91c-ed24806ec829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target dev='tap76d5199d-5d'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       </target>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </console>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:48 compute-0 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:48 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.851 243456 INFO nova.virt.libvirt.driver [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap11bba824-bf from instance f1026535-7729-43d0-8027-dd71ef14dfbf from the live domain config.
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.852 243456 DEBUG nova.virt.libvirt.vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.852 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.853 243456 DEBUG nova.network.os_vif_util [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.853 243456 DEBUG os_vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:48 compute-0 NetworkManager[49805]: <info>  [1772273088.8540] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.856 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11bba824-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.856 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.865 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:32:9d 10.100.0.14'], port_security=['fa:16:3e:d6:32:9d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=11bba824-bf60-4206-b70d-fd5035009fbf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.867 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 ovn_controller[146846]: 2026-02-28T10:04:48Z|00192|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.870 243456 INFO os_vif [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.871 243456 DEBUG nova.virt.libvirt.guest [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:48 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:48 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:48 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:48 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:48 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.875 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.877 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.878 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1299c3-a4af-4244-a655-86089c6780e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.879 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:48.879 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:48 compute-0 nova_compute[243452]: 2026-02-28 10:04:48.884 243456 DEBUG oslo_concurrency.lockutils [req-8a7152f6-83cd-44c6-8cdc-81e110741ef0 req-82c41c35-4152-43b9-b40f-448c8820869f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ff5bf118-ea06-44c0-81f0-0a229162e1d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.027 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.026949, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.027 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Started (Lifecycle Event)
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.053 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.057 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.0289953, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.058 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Paused (Lifecycle Event)
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.075 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.081 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:49 compute-0 ceph-mon[76304]: pgmap v1074: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 2.7 MiB/s wr, 122 op/s
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.110 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.111 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.125 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.186 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.186 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.193 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.194 243456 INFO nova.compute.claims [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:04:49 compute-0 podman[269787]: 2026-02-28 10:04:49.26386457 +0000 UTC m=+0.057974701 container create 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:49 compute-0 sudo[269669]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:49 compute-0 podman[269787]: 2026-02-28 10:04:49.227293762 +0000 UTC m=+0.021403913 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.337 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:49 compute-0 systemd[1]: Started libpod-conmon-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.373 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.375 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.375 243456 DEBUG nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5e83e1d44ebf105d11322506dc55c483ad72f2af990afcb1e865b656b3d9ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:49 compute-0 sudo[269814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:49 compute-0 sudo[269814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:49 compute-0 sudo[269814]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:49 compute-0 podman[269787]: 2026-02-28 10:04:49.415948616 +0000 UTC m=+0.210058767 container init 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:49 compute-0 podman[269787]: 2026-02-28 10:04:49.423744115 +0000 UTC m=+0.217854246 container start 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:04:49 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : New worker (269864) forked
Feb 28 10:04:49 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : Loading success.
Feb 28 10:04:49 compute-0 sudo[269843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 28 10:04:49 compute-0 sudo[269843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.492 243456 DEBUG nova.compute.manager [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-deleted-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.493 243456 INFO nova.compute.manager [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Neutron deleted interface 11bba824-bf60-4206-b70d-fd5035009fbf; detaching it from the instance and deleting it from the info cache
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.493 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 11bba824-bf60-4206-b70d-fd5035009fbf in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.495 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.494 243456 DEBUG nova.network.neutron [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdd5cc7-9952-452d-84a9-3fd859129b67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.535 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb814ffb-d419-46d1-9aa0-7d099311170c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.539 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e03c4b23-a336-4cf8-8020-320f3abae949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.539 243456 DEBUG nova.objects.instance [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[44842415-95f1-42fc-8002-d9b0a7c70000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.580 243456 DEBUG nova.objects.instance [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.589 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52edb19c-9ed0-44a8-b7d5-8d6ec9c2a52a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455960, 'reachable_time': 22319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269903, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02da52d6-dcfe-448c-9201-def4c37b3252]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455970, 'tstamp': 455970}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269904, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455972, 'tstamp': 455972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269904, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.618 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:49.618 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.626 243456 DEBUG nova.virt.libvirt.vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.627 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.629 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.634 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.639 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <name>instance-0000001b</name>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='tap76d5199d-5d'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </target>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </console>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:49 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.653 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.659 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d6:32:9d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap11bba824-bf"/></interface>not found in domain: <domain type='kvm' id='31'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <name>instance-0000001b</name>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <uuid>f1026535-7729-43d0-8027-dd71ef14dfbf</uuid>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:48</nova:creationTime>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='serial'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='uuid'>f1026535-7729-43d0-8027-dd71ef14dfbf</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk' index='2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/f1026535-7729-43d0-8027-dd71ef14dfbf_disk.config' index='1'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:7a:bd:5b'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target dev='tap76d5199d-5d'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       </target>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf/console.log' append='off'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </console>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c562,c938</label>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c562,c938</imagelabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:04:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:49 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.659 243456 WARNING nova.virt.libvirt.driver [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Detaching interface fa:16:3e:d6:32:9d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap11bba824-bf' not found.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.660 243456 DEBUG nova.virt.libvirt.vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.661 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "11bba824-bf60-4206-b70d-fd5035009fbf", "address": "fa:16:3e:d6:32:9d", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11bba824-bf", "ovs_interfaceid": "11bba824-bf60-4206-b70d-fd5035009fbf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.662 243456 DEBUG nova.network.os_vif_util [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.662 243456 DEBUG os_vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11bba824-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.674 243456 INFO os_vif [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:32:9d,bridge_name='br-int',has_traffic_filtering=True,id=11bba824-bf60-4206-b70d-fd5035009fbf,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11bba824-bf')
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.675 243456 DEBUG nova.virt.libvirt.guest [req-fb1e6bc7-4784-4d97-a1e5-8d4256d405a7 req-004cdf79-b322-40d2-925c-30ad2df835bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-585792194</nova:name>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:04:49</nova:creationTime>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     <nova:port uuid="76d5199d-5d1e-4198-8780-c2537175a2be">
Feb 28 10:04:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:04:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:04:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:04:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:04:49 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:04:49 compute-0 sudo[269843]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:04:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:04:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:49 compute-0 sudo[269923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:49 compute-0 sudo[269923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:49 compute-0 sudo[269923]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274799466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.874 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.884 243456 DEBUG nova.compute.provider_tree [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.906 243456 DEBUG nova.scheduler.client.report [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:49 compute-0 sudo[269949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- inventory --format=json-pretty --filter-for-batch
Feb 28 10:04:49 compute-0 sudo[269949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.941 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.942 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.943 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Processing event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.944 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.944 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.945 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.945 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state building and task_state spawning.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.946 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.947 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.947 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.948 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.948 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.949 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-unplugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.949 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.950 243456 DEBUG oslo_concurrency.lockutils [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.951 243456 DEBUG nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.951 243456 WARNING nova.compute.manager [req-da37aef8-1df0-421d-9568-05f484a626a1 req-6dc9fca3-4fd2-44dd-8924-3cbbf0781c63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-11bba824-bf60-4206-b70d-fd5035009fbf for instance with vm_state active and task_state None.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.953 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.954 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.955 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.961 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.962 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273089.961608, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Resumed (Lifecycle Event)
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.970 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance spawned successfully.
Feb 28 10:04:49 compute-0 nova_compute[243452]: 2026-02-28 10:04:49.970 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.007 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.015 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.016 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.017 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.018 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.019 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.020 243456 DEBUG nova.virt.libvirt.driver [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.036 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.039 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.050 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.088 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.108 243456 INFO nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 8.14 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.108 243456 DEBUG nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.110 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.180 243456 INFO nova.compute.manager [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 9.50 seconds to build instance.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.202 243456 DEBUG oslo_concurrency.lockutils [None req-6b7cfd88-9f36-4213-8f3e-8e7342f38555 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.212 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.214 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.214 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating image(s)
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.239 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.271 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.303 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.316 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.326483653 +0000 UTC m=+0.088814778 container create 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.355 243456 DEBUG nova.policy [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:04:50 compute-0 systemd[1]: Started libpod-conmon-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope.
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.295043429 +0000 UTC m=+0.057374604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.387 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.388 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.388 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.389 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.426 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.428020918 +0000 UTC m=+0.190352073 container init 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.432 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c1e4150a-4695-4464-a271-378970447180_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.437573876 +0000 UTC m=+0.199905001 container start 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.442391952 +0000 UTC m=+0.204723087 container attach 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:04:50 compute-0 systemd[1]: libpod-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope: Deactivated successfully.
Feb 28 10:04:50 compute-0 elastic_dubinsky[270061]: 167 167
Feb 28 10:04:50 compute-0 conmon[270061]: conmon 78a72c1326e041e2e638 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope/container/memory.events
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.446355643 +0000 UTC m=+0.208686768 container died 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e5831d3b494939ceab09553ff4a200fc9c8459853a6c090185ef1097a57dab-merged.mount: Deactivated successfully.
Feb 28 10:04:50 compute-0 podman[269991]: 2026-02-28 10:04:50.493474698 +0000 UTC m=+0.255805823 container remove 78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:04:50 compute-0 systemd[1]: libpod-conmon-78a72c1326e041e2e6381e72edf8feb4940c0247116413c55960edfc46b28458.scope: Deactivated successfully.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.657 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.658 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.659 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.660 243456 INFO nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Terminating instance
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.661 243456 DEBUG nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.684 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c1e4150a-4695-4464-a271-378970447180_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:50 compute-0 podman[270123]: 2026-02-28 10:04:50.68813867 +0000 UTC m=+0.057551109 container create 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:04:50 compute-0 kernel: tap76d5199d-5d (unregistering): left promiscuous mode
Feb 28 10:04:50 compute-0 NetworkManager[49805]: <info>  [1772273090.7176] device (tap76d5199d-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00193|binding|INFO|Releasing lport 76d5199d-5d1e-4198-8780-c2537175a2be from this chassis (sb_readonly=0)
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00194|binding|INFO|Setting lport 76d5199d-5d1e-4198-8780-c2537175a2be down in Southbound
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00195|binding|INFO|Removing iface tap76d5199d-5d ovn-installed in OVS
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:50 compute-0 systemd[1]: Started libpod-conmon-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope.
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.739 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.742 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.743 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5c4a4d-80ca-4883-8583-b0cca6c4c46f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.743 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace which is not needed anymore
Feb 28 10:04:50 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb 28 10:04:50 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 13.120s CPU time.
Feb 28 10:04:50 compute-0 systemd-machined[209480]: Machine qemu-31-instance-0000001b terminated.
Feb 28 10:04:50 compute-0 podman[270123]: 2026-02-28 10:04:50.667388707 +0000 UTC m=+0.036801176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1274799466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:50 compute-0 ceph-mon[76304]: pgmap v1075: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.7 MiB/s wr, 95 op/s
Feb 28 10:04:50 compute-0 podman[270123]: 2026-02-28 10:04:50.789963473 +0000 UTC m=+0.159375952 container init 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:04:50 compute-0 podman[270123]: 2026-02-28 10:04:50.802709811 +0000 UTC m=+0.172122270 container start 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.806 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image c1e4150a-4695-4464-a271-378970447180_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:04:50 compute-0 podman[270123]: 2026-02-28 10:04:50.809146822 +0000 UTC m=+0.178559291 container attach 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:04:50 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : haproxy version is 2.8.14-c23fe91
Feb 28 10:04:50 compute-0 kernel: tap76d5199d-5d: entered promiscuous mode
Feb 28 10:04:50 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [NOTICE]   (267519) : path to executable is /usr/sbin/haproxy
Feb 28 10:04:50 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [WARNING]  (267519) : Exiting Master process...
Feb 28 10:04:50 compute-0 NetworkManager[49805]: <info>  [1772273090.8826] manager: (tap76d5199d-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Feb 28 10:04:50 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [ALERT]    (267519) : Current worker (267521) exited with code 143 (Terminated)
Feb 28 10:04:50 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[267513]: [WARNING]  (267519) : All workers exited. Exiting... (0)
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00196|binding|INFO|Claiming lport 76d5199d-5d1e-4198-8780-c2537175a2be for this chassis.
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00197|binding|INFO|76d5199d-5d1e-4198-8780-c2537175a2be: Claiming fa:16:3e:7a:bd:5b 10.100.0.12
Feb 28 10:04:50 compute-0 kernel: tap76d5199d-5d (unregistering): left promiscuous mode
Feb 28 10:04:50 compute-0 systemd[1]: libpod-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope: Deactivated successfully.
Feb 28 10:04:50 compute-0 podman[270221]: 2026-02-28 10:04:50.893503274 +0000 UTC m=+0.044566604 container died 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.893 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:50 compute-0 ovn_controller[146846]: 2026-02-28T10:04:50Z|00198|binding|INFO|Releasing lport 76d5199d-5d1e-4198-8780-c2537175a2be from this chassis (sb_readonly=0)
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:50.921 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:bd:5b 10.100.0.12'], port_security=['fa:16:3e:7a:bd:5b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f1026535-7729-43d0-8027-dd71ef14dfbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0c8faed-afc6-47c6-b1ad-97c1fd7445c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=76d5199d-5d1e-4198-8780-c2537175a2be) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.926 243456 DEBUG nova.objects.instance [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543-userdata-shm.mount: Deactivated successfully.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.929 243456 INFO nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Port 11bba824-bf60-4206-b70d-fd5035009fbf from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.930 243456 DEBUG nova.network.neutron [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [{"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.932 243456 INFO nova.virt.libvirt.driver [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Instance destroyed successfully.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.932 243456 DEBUG nova.objects.instance [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'resources' on Instance uuid f1026535-7729-43d0-8027-dd71ef14dfbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56ba081050cdf68ac2830d8fd1d550ce7d433e93c5d67b3c7be66a947248126-merged.mount: Deactivated successfully.
Feb 28 10:04:50 compute-0 podman[270221]: 2026-02-28 10:04:50.942850631 +0000 UTC m=+0.093913941 container cleanup 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:04:50 compute-0 systemd[1]: libpod-conmon-0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543.scope: Deactivated successfully.
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.951 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.952 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Ensure instance console log exists: /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.953 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.954 243456 DEBUG nova.virt.libvirt.vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-585792194',display_name='tempest-AttachInterfacesTestJSON-server-585792194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-585792194',id=27,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPfxDpm3Myh4xl51GH/Eee4e+9gL0QXIrJb3KmgHGFFsr7Zk1zaEkJOxAcUnihnT1OPbqVJln2d2q3oc5Q222W4sAVF2VtSEfrwl92YPLxE6w1H/lw+338tjUK54+I/6sA==',key_name='tempest-keypair-1547211698',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-ngglzwzv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:04:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=f1026535-7729-43d0-8027-dd71ef14dfbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.954 243456 DEBUG nova.network.os_vif_util [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "76d5199d-5d1e-4198-8780-c2537175a2be", "address": "fa:16:3e:7a:bd:5b", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d5199d-5d", "ovs_interfaceid": "76d5199d-5d1e-4198-8780-c2537175a2be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.955 243456 DEBUG nova.network.os_vif_util [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.956 243456 DEBUG os_vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.959 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76d5199d-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.961 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-f1026535-7729-43d0-8027-dd71ef14dfbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.967 243456 INFO os_vif [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:bd:5b,bridge_name='br-int',has_traffic_filtering=True,id=76d5199d-5d1e-4198-8780-c2537175a2be,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d5199d-5d')
Feb 28 10:04:50 compute-0 nova_compute[243452]: 2026-02-28 10:04:50.994 243456 DEBUG oslo_concurrency.lockutils [None req-20a6da2a-b6eb-4ec3-a028-8c5e1869284c f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-f1026535-7729-43d0-8027-dd71ef14dfbf-11bba824-bf60-4206-b70d-fd5035009fbf" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:51 compute-0 podman[270270]: 2026-02-28 10:04:51.00897569 +0000 UTC m=+0.047682941 container remove 0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.018 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3db3a3e5-7652-43c8-be87-8d899205bab8]: (4, ('Sat Feb 28 10:04:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543)\n0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543\nSat Feb 28 10:04:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543)\n0ff7694042f27aa6ce2e2c69df9615a99ce476fd7859363063592ef24ef72543\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24bce2a5-73e5-4ba7-acba-a46bc9108a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.021 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:51 compute-0 kernel: tap60dcefc3-90: left promiscuous mode
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.035 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[300a3f7c-9464-446d-9ef1-4022673b63c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[602887ac-bf66-4892-b318-71fab7e7306a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[846f6606-e8de-41c8-b5f1-e73089b3b49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e167f73-2a4d-482d-80bc-f74cde829d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455954, 'reachable_time': 24674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270304, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.063 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.063 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ad898bbc-98e0-4bfd-a0eb-f0f40c23c164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.064 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.065 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.066 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7a7ea4-3a95-4f24-805d-09e1d368dcc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.066 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 76d5199d-5d1e-4198-8780-c2537175a2be in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.068 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:04:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:51.068 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c818b63c-cca1-4069-8a90-0fe9194e90c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.220 243456 INFO nova.virt.libvirt.driver [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deleting instance files /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf_del
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.227 243456 INFO nova.virt.libvirt.driver [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deletion of /var/lib/nova/instances/f1026535-7729-43d0-8027-dd71ef14dfbf_del complete
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.279 243456 INFO nova.compute.manager [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.279 243456 DEBUG oslo.service.loopingcall [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.280 243456 DEBUG nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.280 243456 DEBUG nova.network.neutron [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]: [
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:     {
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "available": false,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "being_replaced": false,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "ceph_device_lvm": false,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "lsm_data": {},
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "lvs": [],
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "path": "/dev/sr0",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "rejected_reasons": [
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "Has a FileSystem",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "Insufficient space (<5GB)"
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         ],
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         "sys_api": {
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "actuators": null,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "device_nodes": [
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:                 "sr0"
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             ],
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "devname": "sr0",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "human_readable_size": "482.00 KB",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "id_bus": "ata",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "model": "QEMU DVD-ROM",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "nr_requests": "2",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "parent": "/dev/sr0",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "partitions": {},
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "path": "/dev/sr0",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "removable": "1",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "rev": "2.5+",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "ro": "0",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "rotational": "1",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "sas_address": "",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "sas_device_handle": "",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "scheduler_mode": "mq-deadline",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "sectors": 0,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "sectorsize": "2048",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "size": 493568.0,
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "support_discard": "2048",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "type": "disk",
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:             "vendor": "QEMU"
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:         }
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]:     }
Feb 28 10:04:51 compute-0 ecstatic_dubinsky[270164]: ]
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.318 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Successfully created port: a1a4b6a4-de37-4bca-9501-0465f18ded83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:04:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d60dcefc3\x2d95e1\x2d437e\x2d9c00\x2de51656c39b8f.mount: Deactivated successfully.
Feb 28 10:04:51 compute-0 systemd[1]: libpod-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope: Deactivated successfully.
Feb 28 10:04:51 compute-0 podman[270123]: 2026-02-28 10:04:51.343003641 +0000 UTC m=+0.712416080 container died 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-62582b15685eca806715c6f9ba82b50a12142970c939c69f3f5a6b7e95c2b9e7-merged.mount: Deactivated successfully.
Feb 28 10:04:51 compute-0 podman[270123]: 2026-02-28 10:04:51.376918194 +0000 UTC m=+0.746330633 container remove 0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:04:51 compute-0 systemd[1]: libpod-conmon-0c57ea23320d0f747646e173d329304a564b3d6a9493f40207035144d09d6cfe.scope: Deactivated successfully.
Feb 28 10:04:51 compute-0 sudo[269949]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:04:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:04:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:04:51 compute-0 sudo[271059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:51 compute-0 sudo[271059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:51 compute-0 sudo[271059]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:51 compute-0 sudo[271084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:04:51 compute-0 sudo[271084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.629 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.630 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.631 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.632 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.632 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.633 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-unplugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.633 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.634 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.634 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.635 243456 DEBUG oslo_concurrency.lockutils [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.635 243456 DEBUG nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] No waiting events found dispatching network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:04:51 compute-0 nova_compute[243452]: 2026-02-28 10:04:51.636 243456 WARNING nova.compute.manager [req-78134db9-123b-4054-9d31-4ea43531a7f7 req-0ac786cd-a575-4773-a460-6029eefc8e96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received unexpected event network-vif-plugged-76d5199d-5d1e-4198-8780-c2537175a2be for instance with vm_state active and task_state deleting.
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.825769863 +0000 UTC m=+0.062334123 container create 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:04:51 compute-0 systemd[1]: Started libpod-conmon-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope.
Feb 28 10:04:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.798762384 +0000 UTC m=+0.035326654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.92276797 +0000 UTC m=+0.159332210 container init 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.932222296 +0000 UTC m=+0.168786516 container start 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:04:51 compute-0 blissful_bhaskara[271139]: 167 167
Feb 28 10:04:51 compute-0 systemd[1]: libpod-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope: Deactivated successfully.
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.941126566 +0000 UTC m=+0.177690786 container attach 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.941676522 +0000 UTC m=+0.178240742 container died 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-22250463dd2144ac0fce8a6e06dffe632c237ed59f04b1dd99212d226bc33158-merged.mount: Deactivated successfully.
Feb 28 10:04:51 compute-0 podman[271121]: 2026-02-28 10:04:51.977394316 +0000 UTC m=+0.213958536 container remove 86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:04:51 compute-0 systemd[1]: libpod-conmon-86c3262e412c64506c2e09ccaf71df66bec099248921260cbf554c3ea755df4c.scope: Deactivated successfully.
Feb 28 10:04:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:52.105 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.148059804 +0000 UTC m=+0.049859373 container create 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.172 243456 DEBUG nova.network.neutron [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.186 243456 INFO nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Took 0.91 seconds to deallocate network for instance.
Feb 28 10:04:52 compute-0 systemd[1]: Started libpod-conmon-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope.
Feb 28 10:04:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.232 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.233 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.233 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.128620437 +0000 UTC m=+0.030420026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.241 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.242 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.246419409 +0000 UTC m=+0.148219038 container init 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.248 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.250 243456 DEBUG nova.objects.instance [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'flavor' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.258036236 +0000 UTC m=+0.159835855 container start 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.2674451 +0000 UTC m=+0.169244749 container attach 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.294 243456 DEBUG nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.412 243456 DEBUG oslo_concurrency.processutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:04:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:04:52 compute-0 ceph-mon[76304]: pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.529 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Successfully updated port: a1a4b6a4-de37-4bca-9501-0465f18ded83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.548 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:04:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.683 243456 DEBUG nova.compute.manager [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-changed-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.683 243456 DEBUG nova.compute.manager [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Refreshing instance network info cache due to event network-changed-a1a4b6a4-de37-4bca-9501-0465f18ded83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.684 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.774 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:04:52 compute-0 mystifying_wing[271180]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:04:52 compute-0 mystifying_wing[271180]: --> All data devices are unavailable
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.857 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273077.8557258, 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.857 243456 INFO nova.compute.manager [-] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] VM Stopped (Lifecycle Event)
Feb 28 10:04:52 compute-0 nova_compute[243452]: 2026-02-28 10:04:52.877 243456 DEBUG nova.compute.manager [None req-9f33c7ee-6a21-423a-aca3-6349b5adea7c - - - - - -] [instance: 9ba2ebb5-d13a-4778-bb5b-46e42d2c49ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:52 compute-0 systemd[1]: libpod-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope: Deactivated successfully.
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.909316955 +0000 UTC m=+0.811116534 container died 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-62189f5111567f03bac4b3d815eaed0197467301e746c26f6e9ac2af5da4937f-merged.mount: Deactivated successfully.
Feb 28 10:04:52 compute-0 podman[271163]: 2026-02-28 10:04:52.958651492 +0000 UTC m=+0.860451071 container remove 07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:04:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:04:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331512955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:52 compute-0 systemd[1]: libpod-conmon-07fa4fef611e9c4969e7b7adb2e75214c25a110a0be95dbff0cec65db6cc90ae.scope: Deactivated successfully.
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.017 243456 DEBUG oslo_concurrency.processutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.025 243456 DEBUG nova.compute.provider_tree [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:04:53 compute-0 sudo[271084]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.045 243456 DEBUG nova.scheduler.client.report [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.066 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.094 243456 INFO nova.scheduler.client.report [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Deleted allocations for instance f1026535-7729-43d0-8027-dd71ef14dfbf
Feb 28 10:04:53 compute-0 sudo[271235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:53 compute-0 sudo[271235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:53 compute-0 sudo[271235]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.149 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273078.1483636, 6d74f9b9-edf7-4d81-b139-cb664b2ab68c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.150 243456 INFO nova.compute.manager [-] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] VM Stopped (Lifecycle Event)
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.176 243456 DEBUG nova.compute.manager [None req-91b8e2a4-96c9-406e-b318-cb4ee13320d2 - - - - - -] [instance: 6d74f9b9-edf7-4d81-b139-cb664b2ab68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:53 compute-0 sudo[271260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:04:53 compute-0 sudo[271260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.186 243456 DEBUG oslo_concurrency.lockutils [None req-161af71c-f4f6-4f72-a99c-203d5c64c6fd f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "f1026535-7729-43d0-8027-dd71ef14dfbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3331512955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.477037465 +0000 UTC m=+0.058266079 container create c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.513 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:53 compute-0 systemd[1]: Started libpod-conmon-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope.
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.451384074 +0000 UTC m=+0.032612718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.574347241 +0000 UTC m=+0.155575875 container init c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.587408328 +0000 UTC m=+0.168636952 container start c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.591108942 +0000 UTC m=+0.172337556 container attach c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:04:53 compute-0 stoic_sammet[271314]: 167 167
Feb 28 10:04:53 compute-0 systemd[1]: libpod-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope: Deactivated successfully.
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.595353651 +0000 UTC m=+0.176582325 container died c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-27634a0e397d83fb6c67a14161b87801e95d121eedb9702c11b3b00dd6dee5f8-merged.mount: Deactivated successfully.
Feb 28 10:04:53 compute-0 podman[271297]: 2026-02-28 10:04:53.634948544 +0000 UTC m=+0.216177168 container remove c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:04:53 compute-0 systemd[1]: libpod-conmon-c826dd0ab2945139f64d884a84948be0d226fabcb16cca06fbd11c4e2d693b25.scope: Deactivated successfully.
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.723 243456 DEBUG nova.compute.manager [req-a59f6e2b-1708-4112-beb5-8b31211d26e1 req-ab51b66e-fe72-4320-8497-34892dbf35ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Received event network-vif-deleted-76d5199d-5d1e-4198-8780-c2537175a2be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.815 243456 DEBUG nova.network.neutron [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:53 compute-0 podman[271338]: 2026-02-28 10:04:53.836543192 +0000 UTC m=+0.084143407 container create 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.855 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance network_info: |[{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.856 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Refreshing network info cache for port a1a4b6a4-de37-4bca-9501-0465f18ded83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.859 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start _get_guest_xml network_info=[{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.865 243456 WARNING nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:04:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 267 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.871 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.872 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.877 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.878 243456 DEBUG nova.virt.libvirt.host [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.878 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.879 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.880 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.881 243456 DEBUG nova.virt.hardware [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:04:53 compute-0 podman[271338]: 2026-02-28 10:04:53.790369794 +0000 UTC m=+0.037969999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:53 compute-0 systemd[1]: Started libpod-conmon-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope.
Feb 28 10:04:53 compute-0 nova_compute[243452]: 2026-02-28 10:04:53.884 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:53 compute-0 podman[271338]: 2026-02-28 10:04:53.936925254 +0000 UTC m=+0.184525469 container init 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:04:53 compute-0 podman[271338]: 2026-02-28 10:04:53.94282101 +0000 UTC m=+0.190421245 container start 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:04:53 compute-0 podman[271338]: 2026-02-28 10:04:53.946879004 +0000 UTC m=+0.194479199 container attach 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:04:54 compute-0 dazzling_cori[271354]: {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     "0": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "devices": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "/dev/loop3"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             ],
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_name": "ceph_lv0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_size": "21470642176",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "name": "ceph_lv0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "tags": {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_name": "ceph",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.crush_device_class": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.encrypted": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.objectstore": "bluestore",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_id": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.vdo": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.with_tpm": "0"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             },
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "vg_name": "ceph_vg0"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         }
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     ],
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     "1": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "devices": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "/dev/loop4"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             ],
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_name": "ceph_lv1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_size": "21470642176",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "name": "ceph_lv1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "tags": {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_name": "ceph",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.crush_device_class": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.encrypted": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.objectstore": "bluestore",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_id": "1",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.vdo": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.with_tpm": "0"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             },
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "vg_name": "ceph_vg1"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         }
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     ],
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     "2": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "devices": [
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "/dev/loop5"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             ],
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_name": "ceph_lv2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_size": "21470642176",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "name": "ceph_lv2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "tags": {
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.cluster_name": "ceph",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.crush_device_class": "",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.encrypted": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.objectstore": "bluestore",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osd_id": "2",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.vdo": "0",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:                 "ceph.with_tpm": "0"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             },
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "type": "block",
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:             "vg_name": "ceph_vg2"
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:         }
Feb 28 10:04:54 compute-0 dazzling_cori[271354]:     ]
Feb 28 10:04:54 compute-0 dazzling_cori[271354]: }
Feb 28 10:04:54 compute-0 systemd[1]: libpod-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope: Deactivated successfully.
Feb 28 10:04:54 compute-0 podman[271338]: 2026-02-28 10:04:54.260522601 +0000 UTC m=+0.508122796 container died 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e15956fb9ee3514d120b074f36465da9a2a14abff28184af291e5e30adef3fab-merged.mount: Deactivated successfully.
Feb 28 10:04:54 compute-0 podman[271338]: 2026-02-28 10:04:54.335438608 +0000 UTC m=+0.583038803 container remove 02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:04:54 compute-0 systemd[1]: libpod-conmon-02a301ae1529637e1b31f75d8035adc9fb1e2cddd2eb9577adc7dfdef5182bf7.scope: Deactivated successfully.
Feb 28 10:04:54 compute-0 sudo[271260]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:54 compute-0 podman[271390]: 2026-02-28 10:04:54.377640524 +0000 UTC m=+0.077273454 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Feb 28 10:04:54 compute-0 sudo[271426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:04:54 compute-0 podman[271384]: 2026-02-28 10:04:54.434207974 +0000 UTC m=+0.132984240 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 10:04:54 compute-0 sudo[271426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:54 compute-0 sudo[271426]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:54 compute-0 nova_compute[243452]: 2026-02-28 10:04:54.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:54 compute-0 ceph-mon[76304]: pgmap v1077: 305 pgs: 305 active+clean; 267 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Feb 28 10:04:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906612923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:54 compute-0 sudo[271455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:04:54 compute-0 sudo[271455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:54 compute-0 nova_compute[243452]: 2026-02-28 10:04:54.498 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:54 compute-0 nova_compute[243452]: 2026-02-28 10:04:54.518 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:54 compute-0 nova_compute[243452]: 2026-02-28 10:04:54.522 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.7459887 +0000 UTC m=+0.044364969 container create 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:04:54 compute-0 systemd[1]: Started libpod-conmon-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope.
Feb 28 10:04:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.728609391 +0000 UTC m=+0.026985670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.83350851 +0000 UTC m=+0.131884789 container init 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.838909782 +0000 UTC m=+0.137286041 container start 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.842815852 +0000 UTC m=+0.141192141 container attach 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:04:54 compute-0 elastic_villani[271551]: 167 167
Feb 28 10:04:54 compute-0 systemd[1]: libpod-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope: Deactivated successfully.
Feb 28 10:04:54 compute-0 conmon[271551]: conmon 7c8f7d71ae8f3effbd1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope/container/memory.events
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.846698211 +0000 UTC m=+0.145074510 container died 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-405fc703e34ecec1a6e7925f13c2126f5a0012cd8ae7150942216adc78e429d9-merged.mount: Deactivated successfully.
Feb 28 10:04:54 compute-0 podman[271533]: 2026-02-28 10:04:54.886657664 +0000 UTC m=+0.185033923 container remove 7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_villani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:04:54 compute-0 systemd[1]: libpod-conmon-7c8f7d71ae8f3effbd1ce983bedf94d2edb03b251bfdef7606732fded50d740f.scope: Deactivated successfully.
Feb 28 10:04:55 compute-0 podman[271575]: 2026-02-28 10:04:55.027193815 +0000 UTC m=+0.046831677 container create 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:04:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:04:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150397621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.058 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.063 243456 DEBUG nova.virt.libvirt.vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:50Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.063 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.064 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.066 243456 DEBUG nova.objects.instance [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:04:55 compute-0 systemd[1]: Started libpod-conmon-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope.
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.081 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <uuid>c1e4150a-4695-4464-a271-378970447180</uuid>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <name>instance-0000001f</name>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1596427493</nova:name>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:04:53</nova:creationTime>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <nova:port uuid="a1a4b6a4-de37-4bca-9501-0465f18ded83">
Feb 28 10:04:55 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <system>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="serial">c1e4150a-4695-4464-a271-378970447180</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="uuid">c1e4150a-4695-4464-a271-378970447180</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </system>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <os>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </os>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <features>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </features>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c1e4150a-4695-4464-a271-378970447180_disk">
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c1e4150a-4695-4464-a271-378970447180_disk.config">
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </source>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:04:55 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:22:fe:a4"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <target dev="tapa1a4b6a4-de"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/console.log" append="off"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <video>
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </video>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:04:55 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:04:55 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:04:55 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:04:55 compute-0 nova_compute[243452]: </domain>
Feb 28 10:04:55 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.082 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Preparing to wait for external event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.083 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.084 243456 DEBUG nova.virt.libvirt.vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:04:50Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.084 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.085 243456 DEBUG nova.network.os_vif_util [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.086 243456 DEBUG os_vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.093 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a4b6a4-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.094 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1a4b6a4-de, col_values=(('external_ids', {'iface-id': 'a1a4b6a4-de37-4bca-9501-0465f18ded83', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:fe:a4', 'vm-uuid': 'c1e4150a-4695-4464-a271-378970447180'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:55 compute-0 NetworkManager[49805]: <info>  [1772273095.0967] manager: (tapa1a4b6a4-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:55 compute-0 podman[271575]: 2026-02-28 10:04:55.004582419 +0000 UTC m=+0.024220331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.108 243456 INFO os_vif [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de')
Feb 28 10:04:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:55 compute-0 podman[271575]: 2026-02-28 10:04:55.144669218 +0000 UTC m=+0.164307060 container init 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:04:55 compute-0 podman[271575]: 2026-02-28 10:04:55.153095015 +0000 UTC m=+0.172732837 container start 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:04:55 compute-0 podman[271575]: 2026-02-28 10:04:55.157175779 +0000 UTC m=+0.176813651 container attach 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.165 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:22:fe:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.166 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Using config drive
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.184 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.553 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Creating config drive at /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.556 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl9lbgsdr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1906612923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1150397621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.688 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl9lbgsdr" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.728 243456 DEBUG nova.storage.rbd_utils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image c1e4150a-4695-4464-a271-378970447180_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.741 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config c1e4150a-4695-4464-a271-378970447180_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.827 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updated VIF entry in instance network info cache for port a1a4b6a4-de37-4bca-9501-0465f18ded83. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.828 243456 DEBUG nova.network.neutron [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [{"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:04:55 compute-0 nova_compute[243452]: 2026-02-28 10:04:55.844 243456 DEBUG oslo_concurrency.lockutils [req-02963816-3b21-48b6-92bc-7b55523329d4 req-e017db2b-e1dc-4d97-9d59-80e4463ce183 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c1e4150a-4695-4464-a271-378970447180" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:04:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Feb 28 10:04:56 compute-0 lvm[271728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:04:56 compute-0 lvm[271728]: VG ceph_vg0 finished
Feb 28 10:04:56 compute-0 lvm[271729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:04:56 compute-0 lvm[271729]: VG ceph_vg1 finished
Feb 28 10:04:56 compute-0 lvm[271731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:04:56 compute-0 lvm[271731]: VG ceph_vg2 finished
Feb 28 10:04:56 compute-0 heuristic_mclean[271592]: {}
Feb 28 10:04:56 compute-0 systemd[1]: libpod-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Deactivated successfully.
Feb 28 10:04:56 compute-0 systemd[1]: libpod-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Consumed 1.359s CPU time.
Feb 28 10:04:56 compute-0 podman[271575]: 2026-02-28 10:04:56.192050393 +0000 UTC m=+1.211688255 container died 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1129d4a19bd44bf272de2b9659fcb61a49da65fc5e3ab783aebfb73e147626cc-merged.mount: Deactivated successfully.
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.593 243456 DEBUG oslo_concurrency.processutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config c1e4150a-4695-4464-a271-378970447180_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.596 243456 INFO nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deleting local config drive /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180/disk.config because it was imported into RBD.
Feb 28 10:04:56 compute-0 ceph-mon[76304]: pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Feb 28 10:04:56 compute-0 kernel: tapa1a4b6a4-de: entered promiscuous mode
Feb 28 10:04:56 compute-0 NetworkManager[49805]: <info>  [1772273096.6611] manager: (tapa1a4b6a4-de): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:56 compute-0 systemd-udevd[271726]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:04:56 compute-0 ovn_controller[146846]: 2026-02-28T10:04:56Z|00199|binding|INFO|Claiming lport a1a4b6a4-de37-4bca-9501-0465f18ded83 for this chassis.
Feb 28 10:04:56 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:04:56 compute-0 ovn_controller[146846]: 2026-02-28T10:04:56Z|00200|binding|INFO|a1a4b6a4-de37-4bca-9501-0465f18ded83: Claiming fa:16:3e:22:fe:a4 10.100.0.9
Feb 28 10:04:56 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:04:56 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:04:56 compute-0 ovn_controller[146846]: 2026-02-28T10:04:56Z|00201|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 up in Southbound
Feb 28 10:04:56 compute-0 ovn_controller[146846]: 2026-02-28T10:04:56Z|00202|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 ovn-installed in OVS
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.678 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:fe:a4 10.100.0.9'], port_security=['fa:16:3e:22:fe:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c1e4150a-4695-4464-a271-378970447180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a1a4b6a4-de37-4bca-9501-0465f18ded83) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.680 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a1a4b6a4-de37-4bca-9501-0465f18ded83 in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.683 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:56 compute-0 NetworkManager[49805]: <info>  [1772273096.6891] device (tapa1a4b6a4-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:04:56 compute-0 NetworkManager[49805]: <info>  [1772273096.6898] device (tapa1a4b6a4-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:04:56 compute-0 nova_compute[243452]: 2026-02-28 10:04:56.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b50f204d-a90d-4638-8638-423a860f17f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.701 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.704 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[217852b4-6dfe-482c-a81c-2d80221f0084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.706 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29bbac88-1fac-48b1-8e33-012a2b5139ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 systemd-machined[209480]: New machine qemu-35-instance-0000001f.
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.721 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac253c8-67c8-4876-988d-09e8c8505214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac790ad-c5b9-4e03-b9d9-3491ae878f13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 podman[271575]: 2026-02-28 10:04:56.755599656 +0000 UTC m=+1.775237498 container remove 92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mclean, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.767 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce618ba-84ff-4e23-8d79-88e209add576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.775 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8977db4-5969-4b55-b17c-8c105c583e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 NetworkManager[49805]: <info>  [1772273096.7769] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.800 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c798974f-debc-4276-8dce-40730c95d58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.805 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2ba09a-12d0-4180-82a4-a964d899f48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 sudo[271455]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:04:56 compute-0 NetworkManager[49805]: <info>  [1772273096.8308] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a6230a-21ba-4785-8734-49ba3830cd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:04:56 compute-0 systemd[1]: libpod-conmon-92b50f685abe062c540c95545cbf4aca1bde97f1b435adfc1c50f828742ab3a6.scope: Deactivated successfully.
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f1a906-69e7-4539-9f23-6d501320ed22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460411, 'reachable_time': 36983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271793, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53f2230e-8734-4075-a2b2-c5ed95e6b550]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460411, 'tstamp': 460411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271794, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4608eb99-5b36-4139-b730-c40611758ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460411, 'reachable_time': 36983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271798, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:56.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f301a8d8-38fa-4c80-832c-23c9d7c5d775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:56 compute-0 sudo[271795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:04:56 compute-0 sudo[271795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:04:56 compute-0 sudo[271795]: pam_unix(sudo:session): session closed for user root
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.008 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[553012b7-245b-468b-b725-47416f82ebd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.010 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.011 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.011 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:57 compute-0 NetworkManager[49805]: <info>  [1772273097.0145] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Feb 28 10:04:57 compute-0 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.017 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:04:57 compute-0 ovn_controller[146846]: 2026-02-28T10:04:57Z|00203|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.027 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b49605ee-3dce-43e2-9914-a1d674b4a6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.032 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.033 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.133 243456 DEBUG nova.compute.manager [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.134 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.135 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.135 243456 DEBUG oslo_concurrency.lockutils [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.136 243456 DEBUG nova.compute.manager [req-ce1b4c36-e04f-41bf-a889-0b019c48c097 req-d167e048-c64c-485c-9c8a-23627179e26d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Processing event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.213 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.214 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.2143345, c1e4150a-4695-4464-a271-378970447180 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.215 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Started (Lifecycle Event)
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.221 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.225 243456 INFO nova.virt.libvirt.driver [-] [instance: c1e4150a-4695-4464-a271-378970447180] Instance spawned successfully.
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.226 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.262 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.263 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.263 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.264 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.264 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.265 243456 DEBUG nova.virt.libvirt.driver [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.274 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.315 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.215522, c1e4150a-4695-4464-a271-378970447180 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.315 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Paused (Lifecycle Event)
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.338 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.343 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273097.2174728, c1e4150a-4695-4464-a271-378970447180 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.344 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] VM Resumed (Lifecycle Event)
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.378 243456 INFO nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 7.17 seconds to spawn the instance on the hypervisor.
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.380 243456 DEBUG nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.431 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.436 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:04:57 compute-0 podman[271892]: 2026-02-28 10:04:57.461783759 +0000 UTC m=+0.053388332 container create 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.477 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.489 243456 INFO nova.compute.manager [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 8.32 seconds to build instance.
Feb 28 10:04:57 compute-0 systemd[1]: Started libpod-conmon-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope.
Feb 28 10:04:57 compute-0 nova_compute[243452]: 2026-02-28 10:04:57.511 243456 DEBUG oslo_concurrency.lockutils [None req-eace41d8-5707-4f14-adb5-f7239ffd2c53 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:04:57 compute-0 podman[271892]: 2026-02-28 10:04:57.438105923 +0000 UTC m=+0.029710516 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9528566aabd5692d826fa682f7bcba8949e6d95980a2790e31c1867903228e4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:04:57 compute-0 podman[271892]: 2026-02-28 10:04:57.548523307 +0000 UTC m=+0.140127910 container init 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:04:57 compute-0 podman[271892]: 2026-02-28 10:04:57.553890898 +0000 UTC m=+0.145495471 container start 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:04:57 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : New worker (271913) forked
Feb 28 10:04:57 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : Loading success.
Feb 28 10:04:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.843 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.844 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:04:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:04:57 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:57 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:04:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Feb 28 10:04:58 compute-0 nova_compute[243452]: 2026-02-28 10:04:58.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:58 compute-0 nova_compute[243452]: 2026-02-28 10:04:58.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:04:58 compute-0 ceph-mon[76304]: pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Feb 28 10:04:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.857 243456 DEBUG nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.857 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG oslo_concurrency.lockutils [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.858 243456 DEBUG nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:00 compute-0 nova_compute[243452]: 2026-02-28 10:05:00.859 243456 WARNING nova.compute.manager [req-98fea500-3be0-4fe5-8af7-76a2547f86b5 req-25615d0d-8514-41a0-bf9a-b5ffbf859caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state active and task_state None.
Feb 28 10:05:00 compute-0 ceph-mon[76304]: pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.968296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100968411, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2148, "num_deletes": 254, "total_data_size": 3268720, "memory_usage": 3321352, "flush_reason": "Manual Compaction"}
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100989884, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3187585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21040, "largest_seqno": 23187, "table_properties": {"data_size": 3178049, "index_size": 5902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20465, "raw_average_key_size": 20, "raw_value_size": 3158582, "raw_average_value_size": 3158, "num_data_blocks": 264, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772272912, "oldest_key_time": 1772272912, "file_creation_time": 1772273100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21633 microseconds, and 6421 cpu microseconds.
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.989948) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3187585 bytes OK
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.989978) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991882) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991900) EVENT_LOG_v1 {"time_micros": 1772273100991895, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.991928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3259622, prev total WAL file size 3259622, number of live WAL files 2.
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.992670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3112KB)], [50(7313KB)]
Feb 28 10:05:00 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273100992753, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10676684, "oldest_snapshot_seqno": -1}
Feb 28 10:05:01 compute-0 sshd-session[271922]: Invalid user ubuntu from 45.148.10.240 port 38624
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4897 keys, 8908449 bytes, temperature: kUnknown
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101042717, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8908449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8874028, "index_size": 21063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 120117, "raw_average_key_size": 24, "raw_value_size": 8784217, "raw_average_value_size": 1793, "num_data_blocks": 880, "num_entries": 4897, "num_filter_entries": 4897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.043137) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8908449 bytes
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.045846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.1 rd, 178.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5419, records dropped: 522 output_compression: NoCompression
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.045863) EVENT_LOG_v1 {"time_micros": 1772273101045855, "job": 26, "event": "compaction_finished", "compaction_time_micros": 49861, "compaction_time_cpu_micros": 14024, "output_level": 6, "num_output_files": 1, "total_output_size": 8908449, "num_input_records": 5419, "num_output_records": 4897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101046312, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273101046911, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:00.992512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:01.047027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:01 compute-0 sshd-session[271922]: Connection closed by invalid user ubuntu 45.148.10.240 port 38624 [preauth]
Feb 28 10:05:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Feb 28 10:05:02 compute-0 ovn_controller[146846]: 2026-02-28T10:05:02Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 10:05:02 compute-0 ovn_controller[146846]: 2026-02-28T10:05:02Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:2d:d1 10.100.0.6
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.224 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.226 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.243 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.330 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.331 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.338 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.339 243456 INFO nova.compute.claims [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.412 243456 DEBUG nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.474 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.727 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.728 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.744 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:02 compute-0 nova_compute[243452]: 2026-02-28 10:05:02.813 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:02 compute-0 ceph-mon[76304]: pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Feb 28 10:05:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757291868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.022 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.027 243456 DEBUG nova.compute.provider_tree [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.047 243456 DEBUG nova.scheduler.client.report [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.101 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.102 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.106 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.119 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.120 243456 INFO nova.compute.claims [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.192 243456 DEBUG nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.204 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.204 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.236 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.241 243456 INFO nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] instance snapshotting
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.252 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.351 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.352 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.353 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating image(s)
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.375 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.398 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.418 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.421 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.445 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.473 243456 DEBUG nova.policy [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.478 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.478 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.479 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.479 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.504 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.510 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.538 243456 WARNING nova.compute.manager [None req-cf633291-f10a-4a92-ac8a-73aea78e7a5c 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Image not found during snapshot: nova.exception.ImageNotFound: Image fb0dbe89-ad99-412d-9a08-eeddbc9f2714 could not be found.
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.754 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.815 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 255 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.895 243456 DEBUG nova.objects.instance [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.910 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.910 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Ensure instance console log exists: /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.911 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.911 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:03 compute-0 nova_compute[243452]: 2026-02-28 10:05:03.912 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2757291868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660759736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.028 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.033 243456 DEBUG nova.compute.provider_tree [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.047 243456 DEBUG nova.scheduler.client.report [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.074 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.075 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.132 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.133 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.167 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.201 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.291 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.293 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.293 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating image(s)
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.320 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.342 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.366 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.372 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.429 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.430 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.430 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.431 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.452 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.456 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.819 243456 DEBUG nova.policy [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bbc470612fa48afb6c2a143ba966473', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '135c387aaa024e42b1c3c19237591cf3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:04 compute-0 nova_compute[243452]: 2026-02-28 10:05:04.857 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.055 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.056 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.057 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.058 243456 INFO nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Terminating instance
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.059 243456 DEBUG nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 kernel: tap29b5f82a-cf (unregistering): left promiscuous mode
Feb 28 10:05:05 compute-0 NetworkManager[49805]: <info>  [1772273105.1411] device (tap29b5f82a-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00204|binding|INFO|Releasing lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 from this chassis (sb_readonly=0)
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00205|binding|INFO|Setting lport 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 down in Southbound
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00206|binding|INFO|Removing iface tap29b5f82a-cf ovn-installed in OVS
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.163 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:2d:d1 10.100.0.6'], port_security=['fa:16:3e:29:2d:d1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ff5bf118-ea06-44c0-81f0-0a229162e1d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=29b5f82a-cfc3-4c87-aac9-8419af0bcf75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.165 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 29b5f82a-cfc3-4c87-aac9-8419af0bcf75 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.166 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:05 compute-0 ceph-mon[76304]: pgmap v1082: 305 pgs: 305 active+clean; 255 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3e3664-e5ad-4fee-804a-5ff15ae3d7d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.168 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 10:05:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2660759736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:05 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 12.334s CPU time.
Feb 28 10:05:05 compute-0 systemd-machined[209480]: Machine qemu-34-instance-0000001e terminated.
Feb 28 10:05:05 compute-0 kernel: tapa1a4b6a4-de (unregistering): left promiscuous mode
Feb 28 10:05:05 compute-0 NetworkManager[49805]: <info>  [1772273105.2512] device (tapa1a4b6a4-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.250 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00207|binding|INFO|Releasing lport a1a4b6a4-de37-4bca-9501-0465f18ded83 from this chassis (sb_readonly=0)
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00208|binding|INFO|Setting lport a1a4b6a4-de37-4bca-9501-0465f18ded83 down in Southbound
Feb 28 10:05:05 compute-0 ovn_controller[146846]: 2026-02-28T10:05:05Z|00209|binding|INFO|Removing iface tapa1a4b6a4-de ovn-installed in OVS
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.279 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:fe:a4 10.100.0.9'], port_security=['fa:16:3e:22:fe:a4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c1e4150a-4695-4464-a271-378970447180', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a1a4b6a4-de37-4bca-9501-0465f18ded83) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:05 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 8.364s CPU time.
Feb 28 10:05:05 compute-0 systemd-machined[209480]: Machine qemu-35-instance-0000001f terminated.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [NOTICE]   (269844) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [ALERT]    (269844) : Current worker (269864) exited with code 143 (Terminated)
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[269826]: [WARNING]  (269844) : All workers exited. Exiting... (0)
Feb 28 10:05:05 compute-0 systemd[1]: libpod-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 conmon[269826]: conmon 2f3240798ba0bc96b9c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope/container/memory.events
Feb 28 10:05:05 compute-0 podman[272255]: 2026-02-28 10:05:05.320915364 +0000 UTC m=+0.054219126 container died 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a5e83e1d44ebf105d11322506dc55c483ad72f2af990afcb1e865b656b3d9ad-merged.mount: Deactivated successfully.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.360 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] resizing rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:05 compute-0 NetworkManager[49805]: <info>  [1772273105.3651] manager: (tap29b5f82a-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb 28 10:05:05 compute-0 podman[272255]: 2026-02-28 10:05:05.367206555 +0000 UTC m=+0.100510337 container cleanup 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:05:05 compute-0 systemd[1]: libpod-conmon-2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 podman[272347]: 2026-02-28 10:05:05.438343145 +0000 UTC m=+0.046617122 container remove 2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.443 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968dbc30-2ce9-4a93-9001-c9ff37649163]: (4, ('Sat Feb 28 10:05:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf)\n2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf\nSat Feb 28 10:05:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf)\n2f3240798ba0bc96b9c71f9950c8b9bea6b66b0b2511fc121d4e01a3792feeaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15329195-ee70-4ad0-a817-05a796e83c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:05 compute-0 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.457 243456 DEBUG nova.objects.instance [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'migration_context' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0900cb61-26f6-40da-a536-3799ef1fe1de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.475 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.475 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Ensure instance console log exists: /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.476 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa57ef7-6e2c-4886-902e-34dabe02acfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec78c53d-4f56-4654-8902-cd85bb0bf33d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.497 243456 INFO nova.virt.libvirt.driver [-] [instance: c1e4150a-4695-4464-a271-378970447180] Instance destroyed successfully.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.498 243456 DEBUG nova.objects.instance [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid c1e4150a-4695-4464-a271-378970447180 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.497 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a22ef408-7f4e-424e-a307-08f611f2d21b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459589, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272404, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.502 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.502 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e0910d-0470-42fd-9a57-04ef96661a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.505 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a1a4b6a4-de37-4bca-9501-0465f18ded83 in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.506 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.507 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f38d711-4ac6-4081-b933-2224659d85c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.508 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.512 243456 DEBUG nova.virt.libvirt.vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1596427493',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1596427493',id=31,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-a3v88xmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:03Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=c1e4150a-4695-4464-a271-378970447180,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.512 243456 DEBUG nova.network.os_vif_util [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "address": "fa:16:3e:22:fe:a4", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1a4b6a4-de", "ovs_interfaceid": "a1a4b6a4-de37-4bca-9501-0465f18ded83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.514 243456 DEBUG nova.network.os_vif_util [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.514 243456 DEBUG os_vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.516 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a4b6a4-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.524 243456 INFO os_vif [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:fe:a4,bridge_name='br-int',has_traffic_filtering=True,id=a1a4b6a4-de37-4bca-9501-0465f18ded83,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1a4b6a4-de')
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.550 243456 INFO nova.virt.libvirt.driver [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance shutdown successfully after 13 seconds.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.556 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance destroyed successfully.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.557 243456 DEBUG nova.objects.instance [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'numa_topology' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.577 243456 DEBUG nova.compute.manager [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [NOTICE]   (271911) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : Exiting Master process...
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : Exiting Master process...
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [ALERT]    (271911) : Current worker (271913) exited with code 143 (Terminated)
Feb 28 10:05:05 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[271907]: [WARNING]  (271911) : All workers exited. Exiting... (0)
Feb 28 10:05:05 compute-0 systemd[1]: libpod-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 podman[272440]: 2026-02-28 10:05:05.621183775 +0000 UTC m=+0.036702493 container died 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9528566aabd5692d826fa682f7bcba8949e6d95980a2790e31c1867903228e4e-merged.mount: Deactivated successfully.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.650 243456 DEBUG oslo_concurrency.lockutils [None req-e6589198-ce55-453d-952a-57fedd94be67 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:05 compute-0 podman[272440]: 2026-02-28 10:05:05.656089597 +0000 UTC m=+0.071608315 container cleanup 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:05:05 compute-0 systemd[1]: libpod-conmon-2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3.scope: Deactivated successfully.
Feb 28 10:05:05 compute-0 podman[272472]: 2026-02-28 10:05:05.705961309 +0000 UTC m=+0.034416459 container remove 2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a998e8a-82ce-46eb-8f8f-336fb1dd2f5c]: (4, ('Sat Feb 28 10:05:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3)\n2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3\nSat Feb 28 10:05:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3)\n2028c44a79b7b77f53aefc6b0e28f1d4e33132cd9925e462b30cc65d38a470e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7d1973-9eee-436b-b27e-5dcd02e36c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.713 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:05 compute-0 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b65433f-cd4d-4921-b796-bbbfe566be28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73020b36-532d-4b55-bddb-6a41630d42bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.741 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2597621e-527c-41d5-ab92-be85f82ff420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[685d34c2-d667-4173-abd8-e353ce46e33f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460404, 'reachable_time': 19205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272489, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.754 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:05.755 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a44b00d9-9c9d-4e49-a9e3-08cebcd434f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.760 243456 INFO nova.virt.libvirt.driver [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deleting instance files /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180_del
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.761 243456 INFO nova.virt.libvirt.driver [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Deletion of /var/lib/nova/instances/c1e4150a-4695-4464-a271-378970447180_del complete
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.826 243456 INFO nova.compute.manager [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Took 0.77 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.826 243456 DEBUG oslo.service.loopingcall [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.827 243456 DEBUG nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.827 243456 DEBUG nova.network.neutron [-] [instance: c1e4150a-4695-4464-a271-378970447180] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 326 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.9 MiB/s wr, 242 op/s
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.918 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273090.9009323, f1026535-7729-43d0-8027-dd71ef14dfbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.919 243456 INFO nova.compute.manager [-] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] VM Stopped (Lifecycle Event)
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.927 243456 DEBUG nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.927 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG oslo_concurrency.lockutils [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.928 243456 DEBUG nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.929 243456 WARNING nova.compute.manager [req-b575204c-a031-4a1d-b965-16cadcd551f7 req-0409a0b5-499a-42ee-b1d6-a3d76d8d04c5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-unplugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state stopped and task_state None.
Feb 28 10:05:05 compute-0 nova_compute[243452]: 2026-02-28 10:05:05.941 243456 DEBUG nova.compute.manager [None req-0afaa1c7-0992-4530-a3f9-fdcf0565e413 - - - - - -] [instance: f1026535-7729-43d0-8027-dd71ef14dfbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:06 compute-0 ceph-mon[76304]: pgmap v1083: 305 pgs: 305 active+clean; 326 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.9 MiB/s wr, 242 op/s
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.310 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Successfully created port: f3950212-15c6-462b-a9f9-1f218cfd3914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.546 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.565 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.566 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.566 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.614 243456 DEBUG nova.network.neutron [-] [instance: c1e4150a-4695-4464-a271-378970447180] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.636 243456 INFO nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] Took 0.81 seconds to deallocate network for instance.
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.675 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.676 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.684 243456 DEBUG nova.compute.manager [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.684 243456 DEBUG nova.compute.manager [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.685 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.762 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:06 compute-0 nova_compute[243452]: 2026-02-28 10:05:06.778 243456 DEBUG oslo_concurrency.processutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.072 243456 DEBUG nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.111 243456 INFO nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] instance snapshotting
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.111 243456 WARNING nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] trying to snapshot a non-running instance: (state: 4 expected: 1)
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.313 243456 INFO nova.virt.libvirt.driver [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Beginning cold snapshot process
Feb 28 10:05:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254729468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.351 243456 DEBUG oslo_concurrency.processutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.364 243456 DEBUG nova.compute.provider_tree [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3254729468' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.401 243456 DEBUG nova.scheduler.client.report [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.451 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.458 243456 DEBUG nova.virt.libvirt.imagebackend [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.475 243456 INFO nova.scheduler.client.report [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance c1e4150a-4695-4464-a271-378970447180
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.503 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Successfully updated port: f3950212-15c6-462b-a9f9-1f218cfd3914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.537 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.537 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquired lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.538 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.586 243456 DEBUG oslo_concurrency.lockutils [None req-fbeb08d5-1c70-40c6-bdac-fc183492b6eb 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "c1e4150a-4695-4464-a271-378970447180" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.824 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.833 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(dc1318d480914aa6917e8aaac1e71514) on rbd image(ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 343 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 187 op/s
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.919 243456 DEBUG nova.network.neutron [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.938 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance network_info: |[{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.939 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.942 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start _get_guest_xml network_info=[{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.949 243456 WARNING nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.962 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.963 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.969 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.970 243456 DEBUG nova.virt.libvirt.host [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.971 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.971 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.972 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.973 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.974 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.974 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.975 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.976 243456 DEBUG nova.virt.hardware [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:07 compute-0 nova_compute[243452]: 2026-02-28 10:05:07.980 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.029 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.030 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.031 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] No waiting events found dispatching network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received unexpected event network-vif-plugged-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 for instance with vm_state stopped and task_state image_uploading.
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.032 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.033 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.034 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-unplugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state deleted and task_state None.
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.034 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c1e4150a-4695-4464-a271-378970447180-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.035 243456 DEBUG oslo_concurrency.lockutils [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c1e4150a-4695-4464-a271-378970447180-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] No waiting events found dispatching network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 WARNING nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received unexpected event network-vif-plugged-a1a4b6a4-de37-4bca-9501-0465f18ded83 for instance with vm_state deleted and task_state None.
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.036 243456 DEBUG nova.compute.manager [req-6f1690d8-ecf7-4f1a-a311-c831547edea1 req-45e3511c-cbea-4aaf-8d81-1be01abe0fc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c1e4150a-4695-4464-a271-378970447180] Received event network-vif-deleted-a1a4b6a4-de37-4bca-9501-0465f18ded83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 28 10:05:08 compute-0 ceph-mon[76304]: pgmap v1084: 305 pgs: 305 active+clean; 343 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 187 op/s
Feb 28 10:05:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 28 10:05:08 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.448 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk@dc1318d480914aa6917e8aaac1e71514 to images/60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.554 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:05:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093465859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.630 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.651 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.654 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.887 243456 DEBUG nova.network.neutron [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.899 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(dc1318d480914aa6917e8aaac1e71514) on rbd image(ff5bf118-ea06-44c0-81f0-0a229162e1d8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.916 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Releasing lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.917 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance network_info: |[{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.919 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start _get_guest_xml network_info=[{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.924 243456 WARNING nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.930 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.931 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.935 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.935 243456 DEBUG nova.virt.libvirt.host [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.936 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.937 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.938 243456 DEBUG nova.virt.hardware [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:08 compute-0 nova_compute[243452]: 2026-02-28 10:05:08.940 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.095 243456 DEBUG nova.compute.manager [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-changed-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG nova.compute.manager [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Refreshing instance network info cache due to event network-changed-f3950212-15c6-462b-a9f9-1f218cfd3914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.096 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.097 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Refreshing network info cache for port f3950212-15c6-462b-a9f9-1f218cfd3914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.173 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.174 243456 DEBUG nova.network.neutron [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.191 243456 DEBUG oslo_concurrency.lockutils [req-254cbf5a-e6e6-41de-9542-527f6bf9dc41 req-98d85944-c465-4689-92c1-60cbdcf2c630 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960477370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.297 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.298 243456 DEBUG nova.virt.libvirt.vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.298 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.299 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.300 243456 DEBUG nova.objects.instance [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.314 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:07</nova:creationTime>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:09 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="serial">3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="uuid">3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk">
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config">
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:07:3c:f5"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <target dev="tap9f44b9f8-b8"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log" append="off"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:09 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:09 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:09 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:09 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:09 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Preparing to wait for external event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.315 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.316 243456 DEBUG nova.virt.libvirt.vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.316 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.317 243456 DEBUG nova.network.os_vif_util [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.318 243456 DEBUG os_vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.325 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.325 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f44b9f8-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.326 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f44b9f8-b8, col_values=(('external_ids', {'iface-id': '9f44b9f8-b888-40e8-be30-f985e3ca11b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:3c:f5', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:09 compute-0 NetworkManager[49805]: <info>  [1772273109.3295] manager: (tap9f44b9f8-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.337 243456 INFO os_vif [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8')
Feb 28 10:05:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.433 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.434 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.434 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.435 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Using config drive
Feb 28 10:05:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.461 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:09 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 28 10:05:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35539494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:09 compute-0 ceph-mon[76304]: osdmap e145: 3 total, 3 up, 3 in
Feb 28 10:05:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4093465859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2960477370' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.598 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.619 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.623 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.648 243456 DEBUG nova.storage.rbd_utils [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(60ad6e6b-14e8-4ec6-9e02-5b68cbfcac60) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.842 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Creating config drive at /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.849 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5g01n1wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 354 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.8 MiB/s wr, 313 op/s
Feb 28 10:05:09 compute-0 nova_compute[243452]: 2026-02-28 10:05:09.988 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5g01n1wm" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.015 243456 DEBUG nova.storage.rbd_utils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.019 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.115 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.116 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224204770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.139 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.149 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.150 243456 DEBUG nova.virt.libvirt.vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMetadataTestJSON-1667248256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:04Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.150 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.151 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.152 243456 DEBUG nova.objects.instance [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.156 243456 DEBUG oslo_concurrency.processutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config 3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.156 243456 INFO nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Deleting local config drive /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/disk.config because it was imported into RBD.
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.165 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <uuid>bacbbaae-ce23-42df-b5cc-0fc49b2f3741</uuid>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <name>instance-00000021</name>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerMetadataTestJSON-server-803037290</nova:name>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:08</nova:creationTime>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:user uuid="6bbc470612fa48afb6c2a143ba966473">tempest-ServerMetadataTestJSON-1667248256-project-member</nova:user>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:project uuid="135c387aaa024e42b1c3c19237591cf3">tempest-ServerMetadataTestJSON-1667248256</nova:project>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <nova:port uuid="f3950212-15c6-462b-a9f9-1f218cfd3914">
Feb 28 10:05:10 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="serial">bacbbaae-ce23-42df-b5cc-0fc49b2f3741</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="uuid">bacbbaae-ce23-42df-b5cc-0fc49b2f3741</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk">
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config">
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:10 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:a3:4a:3e"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <target dev="tapf3950212-15"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/console.log" append="off"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:10 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:10 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:10 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:10 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:10 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.166 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Preparing to wait for external event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.166 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.167 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.167 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.168 243456 DEBUG nova.virt.libvirt.vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMetadataTestJSON-1667248256-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:04Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.168 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.169 243456 DEBUG nova.network.os_vif_util [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.169 243456 DEBUG os_vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.170 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.171 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3950212-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.179 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3950212-15, col_values=(('external_ids', {'iface-id': 'f3950212-15c6-462b-a9f9-1f218cfd3914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:4a:3e', 'vm-uuid': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.1820] manager: (tapf3950212-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.191 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updated VIF entry in instance network info cache for port f3950212-15c6-462b-a9f9-1f218cfd3914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.192 243456 DEBUG nova.network.neutron [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [{"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.199 243456 INFO os_vif [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15')
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.2025] manager: (tap9f44b9f8-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Feb 28 10:05:10 compute-0 kernel: tap9f44b9f8-b8: entered promiscuous mode
Feb 28 10:05:10 compute-0 ovn_controller[146846]: 2026-02-28T10:05:10Z|00210|binding|INFO|Claiming lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 for this chassis.
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 ovn_controller[146846]: 2026-02-28T10:05:10Z|00211|binding|INFO|9f44b9f8-b888-40e8-be30-f985e3ca11b9: Claiming fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.215 243456 DEBUG oslo_concurrency.lockutils [req-91279874-c84a-4595-8bf6-f90ed76af280 req-0028bc65-4b4b-4335-8843-bfee9839b34a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bacbbaae-ce23-42df-b5cc-0fc49b2f3741" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.215 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:3c:f5 10.100.0.12'], port_security=['fa:16:3e:07:3c:f5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d9a339b-3aab-4fbe-a87a-c3231e7f58e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9f44b9f8-b888-40e8-be30-f985e3ca11b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.217 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:05:10 compute-0 ovn_controller[146846]: 2026-02-28T10:05:10Z|00212|binding|INFO|Setting lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 ovn-installed in OVS
Feb 28 10:05:10 compute-0 ovn_controller[146846]: 2026-02-28T10:05:10Z|00213|binding|INFO|Setting lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 up in Southbound
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.219 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.219 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.221 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 systemd-udevd[272860]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.232 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.233 243456 INFO nova.compute.claims [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:10 compute-0 systemd-machined[209480]: New machine qemu-36-instance-00000020.
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df6e11e3-b6c4-4a9e-8285-18a2c5f0a1e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.235 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60dcefc3-91 in ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.237 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60dcefc3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[672f60b9-56da-4d24-96b8-868c213d1f90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.238 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8898a28b-c1cb-40fb-b247-5bc92796f2c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.2426] device (tap9f44b9f8-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.2432] device (tap9f44b9f8-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.251 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4a57fa7f-0316-4b4d-91da-dd8b8a36b7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.262 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.262 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.263 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] No VIF found with MAC fa:16:3e:a3:4a:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.263 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Using config drive
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.266 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c48dba-7d66-4de6-ba26-8d9d03cb1d36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.288 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.292 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[40279ac7-da0a-4948-8589-14a8c0049f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f9812d3b-c3ab-4ce9-9aa5-a5126ce33c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 systemd-udevd[272864]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.2982] manager: (tap60dcefc3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.324 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[21ef70d2-6796-49bc-a8b4-957981b51238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.327 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f7ae5d-133e-4049-b256-d743037cefd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.3464] device (tap60dcefc3-90): carrier: link connected
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.350 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[650e6035-6d1d-4514-a53a-d20f7468b99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db66daed-e486-4446-86f3-a63aaeffefdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272917, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c078a38-3e2d-4a53-9068-ad0472697897]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:227a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461763, 'tstamp': 461763}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272918, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.399 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a447433c-4883-4592-9309-2961dd4c912d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272919, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.403 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.431 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f602b9-4822-4053-9b39-24064e66b041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.495 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[040d7934-e7e3-4cb0-a6f5-73bba8e840a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.497 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.497 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.498 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 NetworkManager[49805]: <info>  [1772273110.5005] manager: (tap60dcefc3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb 28 10:05:10 compute-0 kernel: tap60dcefc3-90: entered promiscuous mode
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.507 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 ovn_controller[146846]: 2026-02-28T10:05:10Z|00214|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.521 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.522 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fbb1e6-966f-4038-84da-1c412047ff81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.523 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:10.523 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'env', 'PROCESS_TAG=haproxy-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60dcefc3-95e1-437e-9c00-e51656c39b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 28 10:05:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 28 10:05:10 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 28 10:05:10 compute-0 ceph-mon[76304]: osdmap e146: 3 total, 3 up, 3 in
Feb 28 10:05:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/35539494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:10 compute-0 ceph-mon[76304]: pgmap v1087: 305 pgs: 305 active+clean; 354 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.8 MiB/s wr, 313 op/s
Feb 28 10:05:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/224204770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.732 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273110.731971, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.732 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Started (Lifecycle Event)
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.753 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.760 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273110.7341702, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.761 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Paused (Lifecycle Event)
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.782 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.790 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:10 compute-0 nova_compute[243452]: 2026-02-28 10:05:10.818 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:10 compute-0 podman[273023]: 2026-02-28 10:05:10.86659507 +0000 UTC m=+0.046220840 container create 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:05:10 compute-0 systemd[1]: Started libpod-conmon-016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52.scope.
Feb 28 10:05:10 compute-0 podman[273023]: 2026-02-28 10:05:10.841519805 +0000 UTC m=+0.021145625 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:10 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efc450d9fa849624f5c4721cc215a74a11ec38e71507abe88ff27e9bd2546bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:10 compute-0 podman[273023]: 2026-02-28 10:05:10.963866005 +0000 UTC m=+0.143491775 container init 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:05:10 compute-0 podman[273023]: 2026-02-28 10:05:10.968452864 +0000 UTC m=+0.148078634 container start 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:05:10 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : New worker (273044) forked
Feb 28 10:05:10 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : Loading success.
Feb 28 10:05:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109450376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.041 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Creating config drive at /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.045 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpaie5yg5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.066 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.073 243456 DEBUG nova.compute.provider_tree [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.094 243456 DEBUG nova.scheduler.client.report [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.121 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.121 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.170 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.170 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.175 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpaie5yg5k" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.210 243456 DEBUG nova.storage.rbd_utils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] rbd image bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.215 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.245 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.257 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.257 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.258 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.258 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Processing event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.259 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG oslo_concurrency.lockutils [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.260 243456 DEBUG nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.261 243456 WARNING nova.compute.manager [req-092f7666-c0ce-43aa-8119-d1abdcc302c9 req-1ce9f1d4-c64c-4dc9-86fd-622ce562ad60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 for instance with vm_state building and task_state spawning.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.262 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.267 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.2674031, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.268 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Resumed (Lifecycle Event)
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.272 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.273 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.284 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance spawned successfully.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.284 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.289 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.293 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.343 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.344 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.345 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.346 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.347 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.348 243456 DEBUG nova.virt.libvirt.driver [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.359 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.360 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.361 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.384 243456 DEBUG oslo_concurrency.processutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config bacbbaae-ce23-42df-b5cc-0fc49b2f3741_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.384 243456 INFO nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deleting local config drive /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741/disk.config because it was imported into RBD.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.418 243456 INFO nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 8.07 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.418 243456 DEBUG nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:11 compute-0 kernel: tapf3950212-15: entered promiscuous mode
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.4366] manager: (tapf3950212-15): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Feb 28 10:05:11 compute-0 ovn_controller[146846]: 2026-02-28T10:05:11Z|00215|binding|INFO|Claiming lport f3950212-15c6-462b-a9f9-1f218cfd3914 for this chassis.
Feb 28 10:05:11 compute-0 ovn_controller[146846]: 2026-02-28T10:05:11Z|00216|binding|INFO|f3950212-15c6-462b-a9f9-1f218cfd3914: Claiming fa:16:3e:a3:4a:3e 10.100.0.14
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.442 243456 DEBUG nova.policy [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2737540b5d9a437cac0ea91b25f0c5d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.445 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:11 compute-0 systemd-udevd[272900]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.448 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:4a:3e 10.100.0.14'], port_security=['fa:16:3e:a3:4a:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '135c387aaa024e42b1c3c19237591cf3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ee5bcf8-3b1e-427f-98a7-d823cb130082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d8b3b33-cfca-4ec3-8a4a-f81d6eb91a80, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f3950212-15c6-462b-a9f9-1f218cfd3914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.450 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f3950212-15c6-462b-a9f9-1f218cfd3914 in datapath 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 bound to our chassis
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.455 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:11 compute-0 ovn_controller[146846]: 2026-02-28T10:05:11Z|00217|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 ovn-installed in OVS
Feb 28 10:05:11 compute-0 ovn_controller[146846]: 2026-02-28T10:05:11Z|00218|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 up in Southbound
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.456 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating image(s)
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.4694] device (tapf3950212-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.4699] device (tapf3950212-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.470 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3340aff-c5e2-4d60-8b05-36758849de58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.473 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7579f8b7-d1 in ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.474 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7579f8b7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.474 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed652614-a193-4fd2-a5c2-c03804342b41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddcb5bf-1ed2-4842-9999-410132082127]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 systemd-machined[209480]: New machine qemu-37-instance-00000021.
Feb 28 10:05:11 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.488 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdaba6d-0e68-452e-80cc-998b6b009b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.492 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9bc786-7a9c-420d-ae41-c6d7e5ba6f26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.541 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[492cfde2-5ae0-4592-af7d-2ca04a8e147f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.5494] manager: (tap7579f8b7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.549 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.548 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0977a62-2aaa-4425-aa26-28188f6ed815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.576 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[66d2e488-176e-48d5-8651-d786e2fd7eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.579 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3a2b19-0083-45dc-9e27-8bb9dc2d23ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.590 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:11 compute-0 ceph-mon[76304]: osdmap e147: 3 total, 3 up, 3 in
Feb 28 10:05:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3109450376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.597 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.5999] device (tap7579f8b7-d0): carrier: link connected
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.603 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c42bafed-70cb-4dfa-831d-7709c8c43d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43f82b98-de15-4ef4-8fcf-6bbf8bc7dd5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7579f8b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:d2:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461888, 'reachable_time': 39229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273207, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.638 243456 INFO nova.compute.manager [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 9.34 seconds to build instance.
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.644 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34d29d17-ce45-4930-92e8-1a5d43eb9622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:d2f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461888, 'tstamp': 461888}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273209, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.653 243456 DEBUG oslo_concurrency.lockutils [None req-211986c6-67e6-4bd5-959a-64c1bcf09a4f f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6506c63d-ad46-4cc6-bbe1-4d97b2d7740a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7579f8b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:d2:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461888, 'reachable_time': 39229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273210, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.677 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.678 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.691 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b39e07d0-a9a2-4bf6-a674-7f8de33995b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.706 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.714 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58898bfe-39f1-4f82-9e17-ce0fea2a47d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7579f8b7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7579f8b7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:11 compute-0 NetworkManager[49805]: <info>  [1772273111.7670] manager: (tap7579f8b7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:11 compute-0 kernel: tap7579f8b7-d0: entered promiscuous mode
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.773 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7579f8b7-d0, col_values=(('external_ids', {'iface-id': 'c9aff458-0679-4a43-bbf1-87878c1e8a54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:11 compute-0 ovn_controller[146846]: 2026-02-28T10:05:11Z|00219|binding|INFO|Releasing lport c9aff458-0679-4a43-bbf1-87878c1e8a54 from this chassis (sb_readonly=0)
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.793 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e571e82-9caa-4723-a98b-ce0b4bba6433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.795 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.pid.haproxy
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:11.797 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'env', 'PROCESS_TAG=haproxy-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.859 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.858253, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.860 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Started (Lifecycle Event)
Feb 28 10:05:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 383 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 269 op/s
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.898 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273111.858537, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Paused (Lifecycle Event)
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.919 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.923 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:11 compute-0 nova_compute[243452]: 2026-02-28 10:05:11.984 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47225562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.045 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.057 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] resizing rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.135 243456 DEBUG nova.objects.instance [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'migration_context' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Ensure instance console log exists: /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.153 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.154 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.154 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:12 compute-0 podman[273384]: 2026-02-28 10:05:12.179631233 +0000 UTC m=+0.053946577 container create 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.193 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.193 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.196 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.196 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.199 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.199 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:05:12 compute-0 systemd[1]: Started libpod-conmon-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope.
Feb 28 10:05:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ee18a45b312c8bd14e361b6e1db2c2440bb85e7a95b137a16d7ff81aa888a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:12 compute-0 podman[273384]: 2026-02-28 10:05:12.151362059 +0000 UTC m=+0.025677423 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:12 compute-0 podman[273384]: 2026-02-28 10:05:12.256837424 +0000 UTC m=+0.131152798 container init 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:05:12 compute-0 podman[273384]: 2026-02-28 10:05:12.262784161 +0000 UTC m=+0.137099505 container start 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:12 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : New worker (273423) forked
Feb 28 10:05:12 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : Loading success.
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.405 243456 INFO nova.virt.libvirt.driver [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Snapshot image upload complete
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.405 243456 INFO nova.compute.manager [None req-fca46017-b657-46e6-aafe-5ede402539b7 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 5.29 seconds to snapshot the instance on the hypervisor.
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.410 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.411 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4057MB free_disk=59.89544444810599GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.412 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.412 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.481 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ff5bf118-ea06-44c0-81f0-0a229162e1d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bacbbaae-ce23-42df-b5cc-0fc49b2f3741 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 75431a43-9412-4ad7-86ef-4f1fc1563b37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.586 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:12 compute-0 ceph-mon[76304]: pgmap v1089: 305 pgs: 305 active+clean; 383 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 269 op/s
Feb 28 10:05:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/47225562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:12 compute-0 nova_compute[243452]: 2026-02-28 10:05:12.747 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Successfully created port: ec45afc5-b898-4497-8ddf-4195ba6f8dfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940111830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.133 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.139 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.154 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.174 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.175 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.332 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.333 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.333 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Processing event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.334 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG oslo_concurrency.lockutils [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.335 243456 DEBUG nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.336 243456 WARNING nova.compute.manager [req-a8356d6b-781e-4fe9-9f07-42cfb28e0150 req-c353e9d5-dbb8-42e5-812b-b861ffa82204 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received unexpected event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with vm_state building and task_state spawning.
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.336 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.342 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273113.3405821, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.342 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Resumed (Lifecycle Event)
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.346 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.350 243456 INFO nova.virt.libvirt.driver [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance spawned successfully.
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.350 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.364 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.373 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.378 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.378 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.379 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.380 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.380 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.381 243456 DEBUG nova.virt.libvirt.driver [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.389 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.434 243456 INFO nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 9.14 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.434 243456 DEBUG nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.510 243456 INFO nova.compute.manager [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 10.72 seconds to build instance.
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.533 243456 DEBUG oslo_concurrency.lockutils [None req-1994dc7b-805d-4953-9cab-52a7d0d219b2 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 28 10:05:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1940111830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 28 10:05:13 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 28 10:05:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 427 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 258 op/s
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.900 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Successfully updated port: ec45afc5-b898-4497-8ddf-4195ba6f8dfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquired lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:13 compute-0 nova_compute[243452]: 2026-02-28 10:05:13.927 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.112 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.171 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.178 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.179 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.180 243456 INFO nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Terminating instance
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.181 243456 DEBUG nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.187 243456 INFO nova.virt.libvirt.driver [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Instance destroyed successfully.
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.187 243456 DEBUG nova.objects.instance [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid ff5bf118-ea06-44c0-81f0-0a229162e1d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.197 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.198 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.222 243456 DEBUG nova.virt.libvirt.vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:04:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-52310168',display_name='tempest-ImagesTestJSON-server-52310168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-52310168',id=30,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:04:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-1llqlg3p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:12Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=ff5bf118-ea06-44c0-81f0-0a229162e1d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.222 243456 DEBUG nova.network.os_vif_util [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "address": "fa:16:3e:29:2d:d1", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29b5f82a-cf", "ovs_interfaceid": "29b5f82a-cfc3-4c87-aac9-8419af0bcf75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.223 243456 DEBUG nova.network.os_vif_util [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.223 243456 DEBUG os_vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.225 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b5f82a-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.230 243456 INFO os_vif [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:2d:d1,bridge_name='br-int',has_traffic_filtering=True,id=29b5f82a-cfc3-4c87-aac9-8419af0bcf75,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29b5f82a-cf')
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.250 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.251 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.492 243456 INFO nova.virt.libvirt.driver [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deleting instance files /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8_del
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.492 243456 INFO nova.virt.libvirt.driver [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deletion of /var/lib/nova/instances/ff5bf118-ea06-44c0-81f0-0a229162e1d8_del complete
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.545 243456 INFO nova.compute.manager [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 0.36 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.545 243456 DEBUG oslo.service.loopingcall [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.546 243456 DEBUG nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:14 compute-0 nova_compute[243452]: 2026-02-28 10:05:14.546 243456 DEBUG nova.network.neutron [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:14 compute-0 ceph-mon[76304]: osdmap e148: 3 total, 3 up, 3 in
Feb 28 10:05:14 compute-0 ceph-mon[76304]: pgmap v1091: 305 pgs: 305 active+clean; 427 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 258 op/s
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.406 243456 DEBUG nova.network.neutron [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.555 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Releasing lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.555 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance network_info: |[{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.557 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start _get_guest_xml network_info=[{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.563 243456 WARNING nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.569 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.570 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.574 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.574 243456 DEBUG nova.virt.libvirt.host [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.575 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.576 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.577 243456 DEBUG nova.virt.hardware [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.580 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.854 243456 DEBUG nova.network.neutron [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.882 243456 DEBUG nova.compute.manager [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG nova.compute.manager [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-9f44b9f8-b888-40e8-be30-f985e3ca11b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.883 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.884 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 334 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 9.2 MiB/s wr, 460 op/s
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.893 243456 INFO nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Took 1.35 seconds to deallocate network for instance.
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG nova.compute.manager [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-changed-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG nova.compute.manager [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Refreshing instance network info cache due to event network-changed-ec45afc5-b898-4497-8ddf-4195ba6f8dfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.957 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.958 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.958 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Refreshing network info cache for port ec45afc5-b898-4497-8ddf-4195ba6f8dfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.993 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:15 compute-0 nova_compute[243452]: 2026-02-28 10:05:15.993 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517483985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.082 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.112 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.120 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.167 243456 DEBUG oslo_concurrency.processutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271261437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.651 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.654 243456 DEBUG nova.virt.libvirt.vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:11Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.655 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.657 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.663 243456 DEBUG nova.objects.instance [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'pci_devices' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.681 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <uuid>75431a43-9412-4ad7-86ef-4f1fc1563b37</uuid>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <name>instance-00000022</name>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-80345680</nova:name>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:15</nova:creationTime>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:user uuid="2737540b5d9a437cac0ea91b25f0c5d8">tempest-ImagesOneServerNegativeTestJSON-356581433-project-member</nova:user>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:project uuid="da458887a8634c5a8b9a38fcbcc44e07">tempest-ImagesOneServerNegativeTestJSON-356581433</nova:project>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <nova:port uuid="ec45afc5-b898-4497-8ddf-4195ba6f8dfc">
Feb 28 10:05:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="serial">75431a43-9412-4ad7-86ef-4f1fc1563b37</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="uuid">75431a43-9412-4ad7-86ef-4f1fc1563b37</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/75431a43-9412-4ad7-86ef-4f1fc1563b37_disk">
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config">
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:d3:62:f9"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <target dev="tapec45afc5-b8"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/console.log" append="off"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.682 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Preparing to wait for external event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.683 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.683 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.684 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.688 243456 DEBUG nova.virt.libvirt.vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:11Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.689 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.690 243456 DEBUG nova.network.os_vif_util [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.691 243456 DEBUG os_vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.693 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec45afc5-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec45afc5-b8, col_values=(('external_ids', {'iface-id': 'ec45afc5-b898-4497-8ddf-4195ba6f8dfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:62:f9', 'vm-uuid': '75431a43-9412-4ad7-86ef-4f1fc1563b37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:16 compute-0 NetworkManager[49805]: <info>  [1772273116.7047] manager: (tapec45afc5-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646194582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.710 243456 INFO os_vif [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8')
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.740 243456 DEBUG oslo_concurrency.processutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.756 243456 DEBUG nova.compute.provider_tree [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.780 243456 DEBUG nova.scheduler.client.report [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.804 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.804 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.805 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] No VIF found with MAC fa:16:3e:d3:62:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.805 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Using config drive
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.830 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.844 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.886 243456 INFO nova.scheduler.client.report [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance ff5bf118-ea06-44c0-81f0-0a229162e1d8
Feb 28 10:05:16 compute-0 ceph-mon[76304]: pgmap v1092: 305 pgs: 305 active+clean; 334 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 9.2 MiB/s wr, 460 op/s
Feb 28 10:05:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1517483985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2271261437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3646194582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:16 compute-0 nova_compute[243452]: 2026-02-28 10:05:16.960 243456 DEBUG oslo_concurrency.lockutils [None req-46e6ee3d-c3a2-407b-90ce-276b43c9e0ae 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "ff5bf118-ea06-44c0-81f0-0a229162e1d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.132 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.133 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.157 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.221 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.222 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.231 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.232 243456 INFO nova.compute.claims [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.244 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Creating config drive at /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.250 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuncqnlxd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.386 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuncqnlxd" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.423 243456 DEBUG nova.storage.rbd_utils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] rbd image 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.429 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.457 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.485 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updated VIF entry in instance network info cache for port ec45afc5-b898-4497-8ddf-4195ba6f8dfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.486 243456 DEBUG nova.network.neutron [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [{"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.513 243456 DEBUG oslo_concurrency.lockutils [req-32e412c6-f21e-41c1-84b2-e7f020ce22be req-3d946ba5-05cc-485c-9c77-9f604b3b7d7c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-75431a43-9412-4ad7-86ef-4f1fc1563b37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.567 243456 DEBUG oslo_concurrency.processutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config 75431a43-9412-4ad7-86ef-4f1fc1563b37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.567 243456 INFO nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deleting local config drive /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37/disk.config because it was imported into RBD.
Feb 28 10:05:17 compute-0 kernel: tapec45afc5-b8: entered promiscuous mode
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.6277] manager: (tapec45afc5-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Feb 28 10:05:17 compute-0 ovn_controller[146846]: 2026-02-28T10:05:17Z|00220|binding|INFO|Claiming lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc for this chassis.
Feb 28 10:05:17 compute-0 ovn_controller[146846]: 2026-02-28T10:05:17Z|00221|binding|INFO|ec45afc5-b898-4497-8ddf-4195ba6f8dfc: Claiming fa:16:3e:d3:62:f9 10.100.0.9
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.641 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:62:f9 10.100.0.9'], port_security=['fa:16:3e:d3:62:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '75431a43-9412-4ad7-86ef-4f1fc1563b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ec45afc5-b898-4497-8ddf-4195ba6f8dfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 ovn_controller[146846]: 2026-02-28T10:05:17Z|00222|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc ovn-installed in OVS
Feb 28 10:05:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:17 compute-0 ovn_controller[146846]: 2026-02-28T10:05:17Z|00223|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc up in Southbound
Feb 28 10:05:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.644 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ec45afc5-b898-4497-8ddf-4195ba6f8dfc in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf bound to our chassis
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.647 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:05:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 systemd-udevd[273646]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:17 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.6696] device (tapec45afc5-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.6703] device (tapec45afc5-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[629e901d-faaa-4069-a05c-f7e9d170995c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2eebd3ec-f1 in ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.673 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2eebd3ec-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec513239-a319-4b81-bfd7-9df0a3309f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.675 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0cda1b5c-c11a-4c1f-9218-58cbae63116e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 systemd-machined[209480]: New machine qemu-38-instance-00000022.
Feb 28 10:05:17 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.692 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6358105b-feb1-4818-aa4d-2bd9a7138879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4593e27-6d8b-4942-b32b-b83ca0d616b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.748 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c20751-a441-4f18-943c-ca6d329050b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.7581] manager: (tap2eebd3ec-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22a57776-10c1-445f-905b-c5c4db188a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 systemd-udevd[273651]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.792 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f4985078-c92f-496a-a46d-e920d978035e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.795 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 9f44b9f8-b888-40e8-be30-f985e3ca11b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.795 243456 DEBUG nova.network.neutron [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.797 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[641b38d5-ce95-4cc4-9a6d-8eef8826dbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.8158] device (tap2eebd3ec-f0): carrier: link connected
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.821 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94e27add-3ebe-4243-98e5-0f0847b30c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23a0ace1-5785-446b-a9d2-5f38b62f24c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462510, 'reachable_time': 31121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273684, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.841 243456 DEBUG oslo_concurrency.lockutils [req-296e831b-6f7d-436f-a8a0-59998425f0eb req-fd452033-c538-4230-9c22-3409c4ed3aec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4cc3ba-b3da-495a-986c-650790d73f7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:321'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462510, 'tstamp': 462510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273685, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdb7671-dc09-4d3b-81c2-7c82785ee5c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2eebd3ec-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:03:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462510, 'reachable_time': 31121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273686, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 293 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 489 op/s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.910 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c7619f-403a-48a6-b36a-51df9bb870f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.976 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbc6348-2485-47cf-bf1d-c42685db988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.977 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.978 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.978 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eebd3ec-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 NetworkManager[49805]: <info>  [1772273117.9808] manager: (tap2eebd3ec-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb 28 10:05:17 compute-0 kernel: tap2eebd3ec-f0: entered promiscuous mode
Feb 28 10:05:17 compute-0 nova_compute[243452]: 2026-02-28 10:05:17.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.985 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2eebd3ec-f0, col_values=(('external_ids', {'iface-id': '6b6cc396-2618-4c5f-8702-0c03569c876b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.989 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:17 compute-0 ovn_controller[146846]: 2026-02-28T10:05:17Z|00224|binding|INFO|Releasing lport 6b6cc396-2618-4c5f-8702-0c03569c876b from this chassis (sb_readonly=0)
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.995 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a0f5fd-7264-4240-a34a-d8ba876b345b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.996 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.pid.haproxy
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2eebd3ec-f7d4-4881-813e-8d884cdcadaf
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:17.997 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'env', 'PROCESS_TAG=haproxy-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2eebd3ec-f7d4-4881-813e-8d884cdcadaf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587161045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.035 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.042 243456 DEBUG nova.compute.provider_tree [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.054 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Received event network-vif-deleted-29b5f82a-cfc3-4c87-aac9-8419af0bcf75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.055 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.055 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG oslo_concurrency.lockutils [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.056 243456 DEBUG nova.compute.manager [req-68ce1da0-e59e-4da7-a73c-43252704372a req-2a61eca2-45d6-4b7d-b3b9-f7027d6c1a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Processing event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.062 243456 DEBUG nova.scheduler.client.report [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.140 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.141 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.193 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.193 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.212 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.226 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.312 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.313 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.313 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating image(s)
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.341 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.372 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.398 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.404 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:18 compute-0 podman[273736]: 2026-02-28 10:05:18.41651758 +0000 UTC m=+0.053293759 container create b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.441 243456 DEBUG nova.policy [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:18 compute-0 systemd[1]: Started libpod-conmon-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope.
Feb 28 10:05:18 compute-0 podman[273736]: 2026-02-28 10:05:18.391022284 +0000 UTC m=+0.027798483 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.487 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.487 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.488 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.488 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:18 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cedf4787cc345b38d5f64c843dc01ce5e7235c7fb10d6e0c8b426d33046e626/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:18 compute-0 podman[273736]: 2026-02-28 10:05:18.512313324 +0000 UTC m=+0.149089523 container init b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:05:18 compute-0 podman[273736]: 2026-02-28 10:05:18.519808844 +0000 UTC m=+0.156585023 container start b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:05:18 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : New worker (273851) forked
Feb 28 10:05:18 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : Loading success.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.543 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.555 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 abcde488-a508-4239-9f40-28af252a1cd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.607 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.608 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.6062098, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.608 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Started (Lifecycle Event)
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.613 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance spawned successfully.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.619 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.643 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.651 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.652 243456 DEBUG nova.virt.libvirt.driver [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.656 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:18 compute-0 ceph-mon[76304]: osdmap e149: 3 total, 3 up, 3 in
Feb 28 10:05:18 compute-0 ceph-mon[76304]: pgmap v1094: 305 pgs: 305 active+clean; 293 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 489 op/s
Feb 28 10:05:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2587161045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.608918, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Paused (Lifecycle Event)
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.712 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.720 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273118.6127074, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.721 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Resumed (Lifecycle Event)
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.728 243456 INFO nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 7.28 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.729 243456 DEBUG nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.770 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.775 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.798 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 abcde488-a508-4239-9f40-28af252a1cd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.830 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.843 243456 INFO nova.compute.manager [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 8.66 seconds to build instance.
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.881 243456 DEBUG oslo_concurrency.lockutils [None req-f0f96324-fcdb-4b86-bf44-e3bec007884e 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.886 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:18 compute-0 nova_compute[243452]: 2026-02-28 10:05:18.995 243456 DEBUG nova.objects.instance [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.014 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.014 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.015 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.016 243456 INFO nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Terminating instance
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.017 243456 DEBUG nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.018 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Ensure instance console log exists: /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.019 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:19 compute-0 kernel: tapf3950212-15 (unregistering): left promiscuous mode
Feb 28 10:05:19 compute-0 NetworkManager[49805]: <info>  [1772273119.0566] device (tapf3950212-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:19 compute-0 ovn_controller[146846]: 2026-02-28T10:05:19Z|00225|binding|INFO|Releasing lport f3950212-15c6-462b-a9f9-1f218cfd3914 from this chassis (sb_readonly=0)
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 ovn_controller[146846]: 2026-02-28T10:05:19Z|00226|binding|INFO|Setting lport f3950212-15c6-462b-a9f9-1f218cfd3914 down in Southbound
Feb 28 10:05:19 compute-0 ovn_controller[146846]: 2026-02-28T10:05:19Z|00227|binding|INFO|Removing iface tapf3950212-15 ovn-installed in OVS
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.073 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:4a:3e 10.100.0.14'], port_security=['fa:16:3e:a3:4a:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bacbbaae-ce23-42df-b5cc-0fc49b2f3741', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '135c387aaa024e42b1c3c19237591cf3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ee5bcf8-3b1e-427f-98a7-d823cb130082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d8b3b33-cfca-4ec3-8a4a-f81d6eb91a80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f3950212-15c6-462b-a9f9-1f218cfd3914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.075 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f3950212-15c6-462b-a9f9-1f218cfd3914 in datapath 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 unbound from our chassis
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.086 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c04e84-0da7-469f-99a6-dfb184430a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.088 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 namespace which is not needed anymore
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Feb 28 10:05:19 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 6.074s CPU time.
Feb 28 10:05:19 compute-0 systemd-machined[209480]: Machine qemu-37-instance-00000021 terminated.
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.123 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Successfully created port: d21b0d77-e339-46dd-a448-d8c3473dc8e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [NOTICE]   (273421) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : Exiting Master process...
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : Exiting Master process...
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [ALERT]    (273421) : Current worker (273423) exited with code 143 (Terminated)
Feb 28 10:05:19 compute-0 neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8[273417]: [WARNING]  (273421) : All workers exited. Exiting... (0)
Feb 28 10:05:19 compute-0 systemd[1]: libpod-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope: Deactivated successfully.
Feb 28 10:05:19 compute-0 podman[273975]: 2026-02-28 10:05:19.232920102 +0000 UTC m=+0.053245708 container died 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.252 243456 INFO nova.virt.libvirt.driver [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Instance destroyed successfully.
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.253 243456 DEBUG nova.objects.instance [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lazy-loading 'resources' on Instance uuid bacbbaae-ce23-42df-b5cc-0fc49b2f3741 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-39ee18a45b312c8bd14e361b6e1db2c2440bb85e7a95b137a16d7ff81aa888a6-merged.mount: Deactivated successfully.
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.269 243456 DEBUG nova.virt.libvirt.vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-803037290',display_name='tempest-ServerMetadataTestJSON-server-803037290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-803037290',id=33,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='135c387aaa024e42b1c3c19237591cf3',ramdisk_id='',reservation_id='r-70lii02n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1667248256',owner_user_name='tempest-ServerMetadataTestJSON-1667248256-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='6bbc470612fa48afb6c2a143ba966473',uuid=bacbbaae-ce23-42df-b5cc-0fc49b2f3741,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.270 243456 DEBUG nova.network.os_vif_util [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converting VIF {"id": "f3950212-15c6-462b-a9f9-1f218cfd3914", "address": "fa:16:3e:a3:4a:3e", "network": {"id": "7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-199293454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "135c387aaa024e42b1c3c19237591cf3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3950212-15", "ovs_interfaceid": "f3950212-15c6-462b-a9f9-1f218cfd3914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.272 243456 DEBUG nova.network.os_vif_util [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.273 243456 DEBUG os_vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:19 compute-0 podman[273975]: 2026-02-28 10:05:19.274959894 +0000 UTC m=+0.095285500 container cleanup 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.276 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3950212-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 systemd[1]: libpod-conmon-86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f.scope: Deactivated successfully.
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.285 243456 INFO os_vif [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:4a:3e,bridge_name='br-int',has_traffic_filtering=True,id=f3950212-15c6-462b-a9f9-1f218cfd3914,network=Network(7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf3950212-15')
Feb 28 10:05:19 compute-0 podman[274013]: 2026-02-28 10:05:19.351177747 +0000 UTC m=+0.050226463 container remove 86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd8ee2b-5d03-4c98-8c17-a94b0c7e991b]: (4, ('Sat Feb 28 10:05:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 (86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f)\n86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f\nSat Feb 28 10:05:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 (86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f)\n86512893cd9a9eb8f660f9bce1167b423ad8789b85983df357a22bb81a0c9e4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.361 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37cf802e-37cb-4e7d-a3ca-04b74c187a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.362 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7579f8b7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.365 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 kernel: tap7579f8b7-d0: left promiscuous mode
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48932005-92a7-4cb5-a9ec-58e73f274f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f497fb-95ab-4e3f-b5e6-c16c9c770cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8cdcff-f29e-4bfb-b7e1-1282968d1ef0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.418 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f71edcb-4570-4f42-b474-672bb23c8342]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461882, 'reachable_time': 42596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274046, 'error': None, 'target': 'ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d7579f8b7\x2ddbd7\x2d4cb9\x2db8d9\x2de1f48bd1c7c8.mount: Deactivated successfully.
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.423 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7579f8b7-dbd7-4cb9-b8d9-e1f48bd1c7c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:19.423 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b5e105-d05e-4ff1-bfea-0963261a359d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.555 243456 INFO nova.virt.libvirt.driver [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deleting instance files /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_del
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.556 243456 INFO nova.virt.libvirt.driver [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deletion of /var/lib/nova/instances/bacbbaae-ce23-42df-b5cc-0fc49b2f3741_del complete
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.620 243456 INFO nova.compute.manager [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.621 243456 DEBUG oslo.service.loopingcall [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.621 243456 DEBUG nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.622 243456 DEBUG nova.network.neutron [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 302 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 516 op/s
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.980 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.980 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.981 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.982 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.982 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.983 243456 INFO nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Terminating instance
Feb 28 10:05:19 compute-0 nova_compute[243452]: 2026-02-28 10:05:19.984 243456 DEBUG nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:20 compute-0 kernel: tapec45afc5-b8 (unregistering): left promiscuous mode
Feb 28 10:05:20 compute-0 NetworkManager[49805]: <info>  [1772273120.0283] device (tapec45afc5-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:20 compute-0 ovn_controller[146846]: 2026-02-28T10:05:20Z|00228|binding|INFO|Releasing lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc from this chassis (sb_readonly=0)
Feb 28 10:05:20 compute-0 ovn_controller[146846]: 2026-02-28T10:05:20Z|00229|binding|INFO|Setting lport ec45afc5-b898-4497-8ddf-4195ba6f8dfc down in Southbound
Feb 28 10:05:20 compute-0 ovn_controller[146846]: 2026-02-28T10:05:20Z|00230|binding|INFO|Removing iface tapec45afc5-b8 ovn-installed in OVS
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.045 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:62:f9 10.100.0.9'], port_security=['fa:16:3e:d3:62:f9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '75431a43-9412-4ad7-86ef-4f1fc1563b37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da458887a8634c5a8b9a38fcbcc44e07', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c01448ef-a8fc-4bd2-928c-fd08df4a870e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5852d9cf-0cd0-48e3-ac9d-e151dafb6ffc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ec45afc5-b898-4497-8ddf-4195ba6f8dfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.053 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ec45afc5-b898-4497-8ddf-4195ba6f8dfc in datapath 2eebd3ec-f7d4-4881-813e-8d884cdcadaf unbound from our chassis
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.055 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39e194a0-45b3-4e7d-9cbc-976ccd28c4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.062 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf namespace which is not needed anymore
Feb 28 10:05:20 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Feb 28 10:05:20 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 2.212s CPU time.
Feb 28 10:05:20 compute-0 systemd-machined[209480]: Machine qemu-38-instance-00000022 terminated.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.132 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.134 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 WARNING nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state active and task_state deleting.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.135 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-unplugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.136 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG oslo_concurrency.lockutils [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.137 243456 DEBUG nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] No waiting events found dispatching network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.138 243456 WARNING nova.compute.manager [req-9bbef270-fb82-4f83-88c1-9bafd1e1acd9 req-5c6fda29-e4c6-4d69-9396-66b5f0565385 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received unexpected event network-vif-plugged-f3950212-15c6-462b-a9f9-1f218cfd3914 for instance with vm_state active and task_state deleting.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.227 243456 INFO nova.virt.libvirt.driver [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Instance destroyed successfully.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.227 243456 DEBUG nova.objects.instance [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lazy-loading 'resources' on Instance uuid 75431a43-9412-4ad7-86ef-4f1fc1563b37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:20 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:20 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [NOTICE]   (273843) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:20 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [WARNING]  (273843) : Exiting Master process...
Feb 28 10:05:20 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [ALERT]    (273843) : Current worker (273851) exited with code 143 (Terminated)
Feb 28 10:05:20 compute-0 neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf[273811]: [WARNING]  (273843) : All workers exited. Exiting... (0)
Feb 28 10:05:20 compute-0 systemd[1]: libpod-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope: Deactivated successfully.
Feb 28 10:05:20 compute-0 podman[274067]: 2026-02-28 10:05:20.24693607 +0000 UTC m=+0.060930974 container died b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.243 243456 DEBUG nova.virt.libvirt.vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-80345680',display_name='tempest-ImagesOneServerNegativeTestJSON-server-80345680',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-80345680',id=34,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da458887a8634c5a8b9a38fcbcc44e07',ramdisk_id='',reservation_id='r-3y3m8m7g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-356581433',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-356581433-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='2737540b5d9a437cac0ea91b25f0c5d8',uuid=75431a43-9412-4ad7-86ef-4f1fc1563b37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.248 243456 DEBUG nova.network.os_vif_util [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converting VIF {"id": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "address": "fa:16:3e:d3:62:f9", "network": {"id": "2eebd3ec-f7d4-4881-813e-8d884cdcadaf", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1326815054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da458887a8634c5a8b9a38fcbcc44e07", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec45afc5-b8", "ovs_interfaceid": "ec45afc5-b898-4497-8ddf-4195ba6f8dfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.251 243456 DEBUG nova.network.os_vif_util [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.252 243456 DEBUG os_vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.254 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec45afc5-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.264 243456 INFO os_vif [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:62:f9,bridge_name='br-int',has_traffic_filtering=True,id=ec45afc5-b898-4497-8ddf-4195ba6f8dfc,network=Network(2eebd3ec-f7d4-4881-813e-8d884cdcadaf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec45afc5-b8')
Feb 28 10:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cedf4787cc345b38d5f64c843dc01ce5e7235c7fb10d6e0c8b426d33046e626-merged.mount: Deactivated successfully.
Feb 28 10:05:20 compute-0 podman[274067]: 2026-02-28 10:05:20.305125126 +0000 UTC m=+0.119120010 container cleanup b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:05:20 compute-0 systemd[1]: libpod-conmon-b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90.scope: Deactivated successfully.
Feb 28 10:05:20 compute-0 podman[274124]: 2026-02-28 10:05:20.375893355 +0000 UTC m=+0.048340610 container remove b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[833d084f-0bea-4371-bea1-d4920f813979]: (4, ('Sat Feb 28 10:05:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90)\nb18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90\nSat Feb 28 10:05:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf (b18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90)\nb18029019190e63028e0bb596331cd3411267b5538acdd7fc0ac5a6cca00ec90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33ae6f05-4c83-4016-bcac-3bd5ce211fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.385 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eebd3ec-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:20 compute-0 kernel: tap2eebd3ec-f0: left promiscuous mode
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[005b25f5-e600-4e63-8fe7-7ffa8145e922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.399 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.401 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273105.3794076, ff5bf118-ea06-44c0-81f0-0a229162e1d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.401 243456 INFO nova.compute.manager [-] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] VM Stopped (Lifecycle Event)
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21018f92-8176-41da-8f53-f11dd48b2132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.418 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e27e228-016a-4f6c-b184-516f69aeb710]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.432 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce95cd9-58e8-4ae6-8be1-c12a81d6f2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462502, 'reachable_time': 16840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274138, 'error': None, 'target': 'ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d2eebd3ec\x2df7d4\x2d4881\x2d813e\x2d8d884cdcadaf.mount: Deactivated successfully.
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.436 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2eebd3ec-f7d4-4881-813e-8d884cdcadaf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:20.437 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4480d7e1-0e50-4bcb-9a0f-dff7848d0f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.436 243456 DEBUG nova.compute.manager [None req-3885b83d-03eb-4e40-8479-9ef49b904a02 - - - - - -] [instance: ff5bf118-ea06-44c0-81f0-0a229162e1d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.439 243456 DEBUG nova.network.neutron [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.468 243456 INFO nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Took 0.85 seconds to deallocate network for instance.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.494 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Successfully updated port: d21b0d77-e339-46dd-a448-d8c3473dc8e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.495 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273105.493643, c1e4150a-4695-4464-a271-378970447180 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.496 243456 INFO nova.compute.manager [-] [instance: c1e4150a-4695-4464-a271-378970447180] VM Stopped (Lifecycle Event)
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.524 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.528 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.528 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.530 243456 DEBUG nova.compute.manager [None req-9c292224-10dc-45df-90ba-7813055acfc0 - - - - - -] [instance: c1e4150a-4695-4464-a271-378970447180] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.570 243456 INFO nova.virt.libvirt.driver [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deleting instance files /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37_del
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.572 243456 INFO nova.virt.libvirt.driver [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deletion of /var/lib/nova/instances/75431a43-9412-4ad7-86ef-4f1fc1563b37_del complete
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.634 243456 INFO nova.compute.manager [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG oslo.service.loopingcall [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.635 243456 DEBUG nova.network.neutron [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.639 243456 DEBUG oslo_concurrency.processutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:20 compute-0 nova_compute[243452]: 2026-02-28 10:05:20.785 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:20 compute-0 ceph-mon[76304]: pgmap v1095: 305 pgs: 305 active+clean; 302 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 516 op/s
Feb 28 10:05:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678683351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.187 243456 DEBUG oslo_concurrency.processutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.195 243456 DEBUG nova.compute.provider_tree [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.224 243456 DEBUG nova.scheduler.client.report [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.340 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.396 243456 DEBUG nova.network.neutron [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.402 243456 INFO nova.scheduler.client.report [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Deleted allocations for instance bacbbaae-ce23-42df-b5cc-0fc49b2f3741
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.426 243456 INFO nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Took 0.79 seconds to deallocate network for instance.
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.512 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.513 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.519 243456 DEBUG oslo_concurrency.lockutils [None req-9e15c431-3319-40d1-8419-8185195bc56f 6bbc470612fa48afb6c2a143ba966473 135c387aaa024e42b1c3c19237591cf3 - - default default] Lock "bacbbaae-ce23-42df-b5cc-0fc49b2f3741" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.549 243456 DEBUG nova.network.neutron [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.568 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.569 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance network_info: |[{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.572 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start _get_guest_xml network_info=[{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.582 243456 WARNING nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.591 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.593 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.606 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.607 243456 DEBUG nova.virt.libvirt.host [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.607 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.608 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.608 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.609 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.610 243456 DEBUG nova.virt.hardware [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.613 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:21 compute-0 nova_compute[243452]: 2026-02-28 10:05:21.651 243456 DEBUG oslo_concurrency.processutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 311 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.0 MiB/s wr, 539 op/s
Feb 28 10:05:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1678683351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583209275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.187 243456 DEBUG oslo_concurrency.processutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.200 243456 DEBUG nova.compute.provider_tree [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231054614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.223 243456 DEBUG nova.scheduler.client.report [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.229 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.252 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.259 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.292 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.293 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.293 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 WARNING nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-unplugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state deleted and task_state None.
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Received event network-vif-deleted-f3950212-15c6-462b-a9f9-1f218cfd3914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-changed-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.294 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Refreshing instance network info cache due to event network-changed-d21b0d77-e339-46dd-a448-d8c3473dc8e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.295 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Refreshing network info cache for port d21b0d77-e339-46dd-a448-d8c3473dc8e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.297 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.340 243456 INFO nova.scheduler.client.report [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Deleted allocations for instance 75431a43-9412-4ad7-86ef-4f1fc1563b37
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.415 243456 DEBUG oslo_concurrency.lockutils [None req-3b92881c-46e9-4691-bd0d-73b7641b06a1 2737540b5d9a437cac0ea91b25f0c5d8 da458887a8634c5a8b9a38fcbcc44e07 - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 28 10:05:22 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 28 10:05:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4023284657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.777 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.779 243456 DEBUG nova.virt.libvirt.vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.779 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.780 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.781 243456 DEBUG nova.objects.instance [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.800 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <uuid>abcde488-a508-4239-9f40-28af252a1cd3</uuid>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <name>instance-00000023</name>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-1645656228</nova:name>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:21</nova:creationTime>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <nova:port uuid="d21b0d77-e339-46dd-a448-d8c3473dc8e8">
Feb 28 10:05:22 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="serial">abcde488-a508-4239-9f40-28af252a1cd3</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="uuid">abcde488-a508-4239-9f40-28af252a1cd3</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/abcde488-a508-4239-9f40-28af252a1cd3_disk">
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/abcde488-a508-4239-9f40-28af252a1cd3_disk.config">
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:10:3d:54"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <target dev="tapd21b0d77-e3"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/console.log" append="off"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:22 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:22 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:22 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:22 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:22 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.801 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Preparing to wait for external event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.802 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.802 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.803 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.803 243456 DEBUG nova.virt.libvirt.vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:18Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.804 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.805 243456 DEBUG nova.network.os_vif_util [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.805 243456 DEBUG os_vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.807 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.813 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd21b0d77-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.813 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd21b0d77-e3, col_values=(('external_ids', {'iface-id': 'd21b0d77-e339-46dd-a448-d8c3473dc8e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:3d:54', 'vm-uuid': 'abcde488-a508-4239-9f40-28af252a1cd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:22 compute-0 NetworkManager[49805]: <info>  [1772273122.8166] manager: (tapd21b0d77-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.822 243456 INFO os_vif [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3')
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.887 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.889 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.889 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:10:3d:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.890 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Using config drive
Feb 28 10:05:22 compute-0 nova_compute[243452]: 2026-02-28 10:05:22.915 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:22 compute-0 ceph-mon[76304]: pgmap v1096: 305 pgs: 305 active+clean; 311 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.0 MiB/s wr, 539 op/s
Feb 28 10:05:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3583209275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2231054614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:22 compute-0 ceph-mon[76304]: osdmap e150: 3 total, 3 up, 3 in
Feb 28 10:05:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4023284657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.402 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Creating config drive at /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.406 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn7w9ylhg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.542 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn7w9ylhg" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.575 243456 DEBUG nova.storage.rbd_utils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image abcde488-a508-4239-9f40-28af252a1cd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.580 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config abcde488-a508-4239-9f40-28af252a1cd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.699 243456 DEBUG oslo_concurrency.processutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config abcde488-a508-4239-9f40-28af252a1cd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.702 243456 INFO nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deleting local config drive /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3/disk.config because it was imported into RBD.
Feb 28 10:05:23 compute-0 kernel: tapd21b0d77-e3: entered promiscuous mode
Feb 28 10:05:23 compute-0 NetworkManager[49805]: <info>  [1772273123.7610] manager: (tapd21b0d77-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Feb 28 10:05:23 compute-0 ovn_controller[146846]: 2026-02-28T10:05:23Z|00231|binding|INFO|Claiming lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 for this chassis.
Feb 28 10:05:23 compute-0 ovn_controller[146846]: 2026-02-28T10:05:23Z|00232|binding|INFO|d21b0d77-e339-46dd-a448-d8c3473dc8e8: Claiming fa:16:3e:10:3d:54 10.100.0.8
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.773 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:3d:54 10.100.0.8'], port_security=['fa:16:3e:10:3d:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'abcde488-a508-4239-9f40-28af252a1cd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d21b0d77-e339-46dd-a448-d8c3473dc8e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.774 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d21b0d77-e339-46dd-a448-d8c3473dc8e8 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:05:23 compute-0 ovn_controller[146846]: 2026-02-28T10:05:23Z|00233|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 ovn-installed in OVS
Feb 28 10:05:23 compute-0 ovn_controller[146846]: 2026-02-28T10:05:23Z|00234|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 up in Southbound
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.776 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:23 compute-0 systemd-udevd[274318]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77cd87bb-c708-4de5-9636-49ca84da256a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.790 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.792 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9df3e24f-ec57-4eb1-97a0-4364d2db0968]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.793 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b47bae6c-f0f4-4ce6-9c05-db5dfd598613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 NetworkManager[49805]: <info>  [1772273123.7991] device (tapd21b0d77-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:23 compute-0 NetworkManager[49805]: <info>  [1772273123.7997] device (tapd21b0d77-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:23 compute-0 systemd-machined[209480]: New machine qemu-39-instance-00000023.
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.809 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc01839-3c7c-4cc8-ad00-4b88a7e7a735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6187dd23-727b-4536-af93-5c170fb8817b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.830 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updated VIF entry in instance network info cache for port d21b0d77-e339-46dd-a448-d8c3473dc8e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.830 243456 DEBUG nova.network.neutron [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [{"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d64e520e-aed8-4684-874f-6dadb824808e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[954e7ff3-48ae-4ea7-a95e-3bbb2f1ca019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 NetworkManager[49805]: <info>  [1772273123.8547] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.858 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-abcde488-a508-4239-9f40-28af252a1cd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG oslo_concurrency.lockutils [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "75431a43-9412-4ad7-86ef-4f1fc1563b37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.859 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] No waiting events found dispatching network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.860 243456 WARNING nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received unexpected event network-vif-plugged-ec45afc5-b898-4497-8ddf-4195ba6f8dfc for instance with vm_state deleted and task_state None.
Feb 28 10:05:23 compute-0 nova_compute[243452]: 2026-02-28 10:05:23.860 243456 DEBUG nova.compute.manager [req-1815cfc3-e95e-433f-b9a6-9b4405eba381 req-11027171-a5b0-4670-b1ca-c5fc5621c7cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Received event network-vif-deleted-ec45afc5-b898-4497-8ddf-4195ba6f8dfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.883 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4572b330-1a44-4f6a-98ae-50468c4a6e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 279 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 377 op/s
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d306e5f9-b84b-4e0c-bcdb-6682790ef18f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 NetworkManager[49805]: <info>  [1772273123.9113] device (tap3a8395bc-d0): carrier: link connected
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.917 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62d94987-9f30-46f6-a241-3204afed7df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9209ab05-bb46-4687-9e69-5255c6482ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463119, 'reachable_time': 39338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274352, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27a506ff-cf9f-47dc-9b2a-0c0180e301bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463119, 'tstamp': 463119}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274353, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:23.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46f6d97b-9d64-4614-8e49-2b7ff12d2aec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463119, 'reachable_time': 39338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274354, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c97db1fe-6848-4c70-9db4-429f081142d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[56380812-0c77-4e54-b96f-ec64345a138b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.086 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:24 compute-0 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:24 compute-0 NetworkManager[49805]: <info>  [1772273124.0929] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.093 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:24 compute-0 ovn_controller[146846]: 2026-02-28T10:05:24Z|00235|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.096 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e08faa4-bd84-45c0-a332-6a71df564e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.097 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:24.098 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.155 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.1551216, abcde488-a508-4239-9f40-28af252a1cd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.156 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Started (Lifecycle Event)
Feb 28 10:05:24 compute-0 ovn_controller[146846]: 2026-02-28T10:05:24Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 10:05:24 compute-0 ovn_controller[146846]: 2026-02-28T10:05:24Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:3c:f5 10.100.0.12
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.220 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.224 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.1552832, abcde488-a508-4239-9f40-28af252a1cd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Paused (Lifecycle Event)
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.389 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.447 243456 DEBUG nova.compute.manager [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.448 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.449 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.449 243456 DEBUG oslo_concurrency.lockutils [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.450 243456 DEBUG nova.compute.manager [req-618444ff-ea68-48f1-89ec-003c6c925d2e req-0f36c27b-a87b-48da-9beb-9fbf04a8ccdb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Processing event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.451 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.455 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.458 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273124.454519, abcde488-a508-4239-9f40-28af252a1cd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.458 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Resumed (Lifecycle Event)
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.462 243456 INFO nova.virt.libvirt.driver [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance spawned successfully.
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.463 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.481 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.487 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.492 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.493 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.493 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.494 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.494 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.495 243456 DEBUG nova.virt.libvirt.driver [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.504 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:24 compute-0 podman[274428]: 2026-02-28 10:05:24.506005456 +0000 UTC m=+0.055099440 container create f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:05:24 compute-0 systemd[1]: Started libpod-conmon-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope.
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.547 243456 INFO nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 6.24 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.549 243456 DEBUG nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:24 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac069718318c7c6e38b19b80ce17d545aac543570ffaf0e081acaac88f751402/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:24 compute-0 podman[274428]: 2026-02-28 10:05:24.479106109 +0000 UTC m=+0.028200123 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:24 compute-0 podman[274428]: 2026-02-28 10:05:24.582775924 +0000 UTC m=+0.131869908 container init f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:05:24 compute-0 podman[274428]: 2026-02-28 10:05:24.588712981 +0000 UTC m=+0.137806965 container start f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:05:24 compute-0 podman[274441]: 2026-02-28 10:05:24.59723499 +0000 UTC m=+0.055999675 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:05:24 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : New worker (274487) forked
Feb 28 10:05:24 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : Loading success.
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.614 243456 INFO nova.compute.manager [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 7.41 seconds to build instance.
Feb 28 10:05:24 compute-0 podman[274440]: 2026-02-28 10:05:24.628318904 +0000 UTC m=+0.090094274 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 10:05:24 compute-0 nova_compute[243452]: 2026-02-28 10:05:24.630 243456 DEBUG oslo_concurrency.lockutils [None req-742c6cd1-5a1d-46fb-b9bc-896bd5ad842b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:24 compute-0 ceph-mon[76304]: pgmap v1098: 305 pgs: 305 active+clean; 279 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 377 op/s
Feb 28 10:05:25 compute-0 ovn_controller[146846]: 2026-02-28T10:05:25Z|00236|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:05:25 compute-0 ovn_controller[146846]: 2026-02-28T10:05:25Z|00237|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:05:25 compute-0 nova_compute[243452]: 2026-02-28 10:05:25.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 274 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 397 op/s
Feb 28 10:05:25 compute-0 nova_compute[243452]: 2026-02-28 10:05:25.965 243456 DEBUG nova.objects.instance [None req-bd572c81-7ac0-45c8-853b-f4afc2c9f830 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:25 compute-0 nova_compute[243452]: 2026-02-28 10:05:25.989 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273125.989279, abcde488-a508-4239-9f40-28af252a1cd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:25 compute-0 nova_compute[243452]: 2026-02-28 10:05:25.990 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Paused (Lifecycle Event)
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.011 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.015 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.034 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:05:26 compute-0 kernel: tapd21b0d77-e3 (unregistering): left promiscuous mode
Feb 28 10:05:26 compute-0 NetworkManager[49805]: <info>  [1772273126.3813] device (tapd21b0d77-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:26 compute-0 ovn_controller[146846]: 2026-02-28T10:05:26Z|00238|binding|INFO|Releasing lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 from this chassis (sb_readonly=0)
Feb 28 10:05:26 compute-0 ovn_controller[146846]: 2026-02-28T10:05:26Z|00239|binding|INFO|Setting lport d21b0d77-e339-46dd-a448-d8c3473dc8e8 down in Southbound
Feb 28 10:05:26 compute-0 ovn_controller[146846]: 2026-02-28T10:05:26Z|00240|binding|INFO|Removing iface tapd21b0d77-e3 ovn-installed in OVS
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.392 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.401 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:3d:54 10.100.0.8'], port_security=['fa:16:3e:10:3d:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'abcde488-a508-4239-9f40-28af252a1cd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d21b0d77-e339-46dd-a448-d8c3473dc8e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.403 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d21b0d77-e339-46dd-a448-d8c3473dc8e8 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.404 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[770ae92e-36ef-4446-940e-5fb1acb2ac1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.406 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:26 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Feb 28 10:05:26 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 1.988s CPU time.
Feb 28 10:05:26 compute-0 systemd-machined[209480]: Machine qemu-39-instance-00000023 terminated.
Feb 28 10:05:26 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:26 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [NOTICE]   (274479) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:26 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [WARNING]  (274479) : Exiting Master process...
Feb 28 10:05:26 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [ALERT]    (274479) : Current worker (274487) exited with code 143 (Terminated)
Feb 28 10:05:26 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[274452]: [WARNING]  (274479) : All workers exited. Exiting... (0)
Feb 28 10:05:26 compute-0 systemd[1]: libpod-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope: Deactivated successfully.
Feb 28 10:05:26 compute-0 podman[274524]: 2026-02-28 10:05:26.541848529 +0000 UTC m=+0.045128030 container died f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.551 243456 DEBUG oslo_concurrency.lockutils [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.552 243456 DEBUG nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.552 243456 WARNING nova.compute.manager [req-7a5f0a07-ec39-46b0-9659-97ed6836ef0a req-9d23c757-d237-41f0-92a3-cdd280cab698 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state active and task_state suspending.
Feb 28 10:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac069718318c7c6e38b19b80ce17d545aac543570ffaf0e081acaac88f751402-merged.mount: Deactivated successfully.
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.564 243456 DEBUG nova.compute.manager [None req-bd572c81-7ac0-45c8-853b-f4afc2c9f830 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:26 compute-0 podman[274524]: 2026-02-28 10:05:26.570172925 +0000 UTC m=+0.073452426 container cleanup f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:05:26 compute-0 systemd[1]: libpod-conmon-f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828.scope: Deactivated successfully.
Feb 28 10:05:26 compute-0 podman[274559]: 2026-02-28 10:05:26.630338507 +0000 UTC m=+0.040032737 container remove f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.635 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[231cdbd5-8764-459d-8c46-2e83e8518879]: (4, ('Sat Feb 28 10:05:26 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828)\nf00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828\nSat Feb 28 10:05:26 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (f00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828)\nf00067cfa7376466ed44062f7e3c6429f8c0b90d0be992e8564386e1721af828\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.638 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dba105b7-f4ab-4817-aa16-cd80456da898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:26 compute-0 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 10:05:26 compute-0 nova_compute[243452]: 2026-02-28 10:05:26.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.655 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26443083-cbac-4921-8bd0-31a86bc80738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.682 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0251ac0-a751-470f-9397-f91604877c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.684 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce34f86-4dcc-4773-8068-39ab3db1162c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.698 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54bb2b8f-0f48-4735-ac47-40a73aa13d87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463113, 'reachable_time': 15771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274579, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.700 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:26.700 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0953951f-d82a-4f98-85c1-d260bcc8cf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 10:05:26 compute-0 ceph-mon[76304]: pgmap v1099: 305 pgs: 305 active+clean; 274 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 397 op/s
Feb 28 10:05:27 compute-0 ovn_controller[146846]: 2026-02-28T10:05:27Z|00241|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:05:27 compute-0 nova_compute[243452]: 2026-02-28 10:05:27.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:27 compute-0 nova_compute[243452]: 2026-02-28 10:05:27.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.7 MiB/s wr, 354 op/s
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.814 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.815 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.815 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.816 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.816 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.817 243456 WARNING nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-unplugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state suspended and task_state None.
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.817 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.818 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.818 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.819 243456 DEBUG oslo_concurrency.lockutils [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.819 243456 DEBUG nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] No waiting events found dispatching network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:28 compute-0 nova_compute[243452]: 2026-02-28 10:05:28.820 243456 WARNING nova.compute.manager [req-b6b5c77d-83f7-451f-80ab-47bb7fa2ee31 req-2748f2fd-c486-4014-8ef2-2e88ddfc3449 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received unexpected event network-vif-plugged-d21b0d77-e339-46dd-a448-d8c3473dc8e8 for instance with vm_state suspended and task_state None.
Feb 28 10:05:28 compute-0 ceph-mon[76304]: pgmap v1100: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.7 MiB/s wr, 354 op/s
Feb 28 10:05:29 compute-0 nova_compute[243452]: 2026-02-28 10:05:29.026 243456 DEBUG nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:05:29
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:05:29 compute-0 nova_compute[243452]: 2026-02-28 10:05:29.074 243456 INFO nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] instance snapshotting
Feb 28 10:05:29 compute-0 nova_compute[243452]: 2026-02-28 10:05:29.075 243456 WARNING nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] trying to snapshot a non-running instance: (state: 4 expected: 1)
Feb 28 10:05:29 compute-0 nova_compute[243452]: 2026-02-28 10:05:29.532 243456 INFO nova.virt.libvirt.driver [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Beginning cold snapshot process
Feb 28 10:05:29 compute-0 nova_compute[243452]: 2026-02-28 10:05:29.690 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:05:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 322 op/s
Feb 28 10:05:30 compute-0 nova_compute[243452]: 2026-02-28 10:05:30.059 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(faa26cc3cc7e4bb3b6ff32fe0bf39739) on rbd image(abcde488-a508-4239-9f40-28af252a1cd3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:05:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:05:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 28 10:05:30 compute-0 ceph-mon[76304]: pgmap v1101: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 322 op/s
Feb 28 10:05:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 28 10:05:31 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 28 10:05:31 compute-0 nova_compute[243452]: 2026-02-28 10:05:31.053 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/abcde488-a508-4239-9f40-28af252a1cd3_disk@faa26cc3cc7e4bb3b6ff32fe0bf39739 to images/830ebb40-f7a7-4e0f-9487-d755555148cb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:05:31 compute-0 nova_compute[243452]: 2026-02-28 10:05:31.229 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/830ebb40-f7a7-4e0f-9487-d755555148cb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:05:31 compute-0 nova_compute[243452]: 2026-02-28 10:05:31.477 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(faa26cc3cc7e4bb3b6ff32fe0bf39739) on rbd image(abcde488-a508-4239-9f40-28af252a1cd3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:05:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 28 10:05:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 28 10:05:32 compute-0 ceph-mon[76304]: osdmap e151: 3 total, 3 up, 3 in
Feb 28 10:05:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 28 10:05:32 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.096 243456 DEBUG nova.storage.rbd_utils [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(830ebb40-f7a7-4e0f-9487-d755555148cb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.918 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.919 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.919 243456 DEBUG nova.objects.instance [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.946 243456 DEBUG nova.objects.instance [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:32 compute-0 nova_compute[243452]: 2026-02-28 10:05:32.957 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 28 10:05:33 compute-0 ceph-mon[76304]: pgmap v1103: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 28 10:05:33 compute-0 ceph-mon[76304]: osdmap e152: 3 total, 3 up, 3 in
Feb 28 10:05:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 28 10:05:33 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 28 10:05:33 compute-0 nova_compute[243452]: 2026-02-28 10:05:33.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:33 compute-0 nova_compute[243452]: 2026-02-28 10:05:33.829 243456 DEBUG nova.policy [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 289 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 814 KiB/s wr, 97 op/s
Feb 28 10:05:33 compute-0 nova_compute[243452]: 2026-02-28 10:05:33.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:34 compute-0 ceph-mon[76304]: osdmap e153: 3 total, 3 up, 3 in
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.250 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273119.2485564, bacbbaae-ce23-42df-b5cc-0fc49b2f3741 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.251 243456 INFO nova.compute.manager [-] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] VM Stopped (Lifecycle Event)
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.276 243456 DEBUG nova.compute.manager [None req-37bd679f-b975-4134-8a60-77b246aac42f - - - - - -] [instance: bacbbaae-ce23-42df-b5cc-0fc49b2f3741] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.412 243456 INFO nova.virt.libvirt.driver [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Snapshot image upload complete
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.413 243456 INFO nova.compute.manager [None req-d828df7d-1225-43bf-ae71-a25aa9e3a212 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 5.34 seconds to snapshot the instance on the hypervisor.
Feb 28 10:05:34 compute-0 nova_compute[243452]: 2026-02-28 10:05:34.824 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 5de28374-dbe4-4c8d-9f73-047a368cc895 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:35 compute-0 ceph-mon[76304]: pgmap v1106: 305 pgs: 305 active+clean; 289 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 814 KiB/s wr, 97 op/s
Feb 28 10:05:35 compute-0 nova_compute[243452]: 2026-02-28 10:05:35.225 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273120.2235603, 75431a43-9412-4ad7-86ef-4f1fc1563b37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:35 compute-0 nova_compute[243452]: 2026-02-28 10:05:35.225 243456 INFO nova.compute.manager [-] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] VM Stopped (Lifecycle Event)
Feb 28 10:05:35 compute-0 nova_compute[243452]: 2026-02-28 10:05:35.254 243456 DEBUG nova.compute.manager [None req-037950e8-55a8-44b7-9278-d28fe3f43d98 - - - - - -] [instance: 75431a43-9412-4ad7-86ef-4f1fc1563b37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.534 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 5de28374-dbe4-4c8d-9f73-047a368cc895 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.558 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:36 compute-0 nova_compute[243452]: 2026-02-28 10:05:36.851 243456 WARNING nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Feb 28 10:05:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Feb 28 10:05:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Feb 28 10:05:37 compute-0 ceph-mon[76304]: pgmap v1107: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Feb 28 10:05:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Feb 28 10:05:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Feb 28 10:05:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.785531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137785649, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 694, "num_deletes": 258, "total_data_size": 720841, "memory_usage": 735144, "flush_reason": "Manual Compaction"}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 28 10:05:37 compute-0 nova_compute[243452]: 2026-02-28 10:05:37.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137856478, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 712485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23188, "largest_seqno": 23881, "table_properties": {"data_size": 708804, "index_size": 1461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8202, "raw_average_key_size": 18, "raw_value_size": 701365, "raw_average_value_size": 1601, "num_data_blocks": 65, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273101, "oldest_key_time": 1772273101, "file_creation_time": 1772273137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 71031 microseconds, and 3090 cpu microseconds.
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.856588) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 712485 bytes OK
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.856633) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.873984) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.874086) EVENT_LOG_v1 {"time_micros": 1772273137874049, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.874129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 717129, prev total WAL file size 717129, number of live WAL files 2.
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.875605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(695KB)], [53(8699KB)]
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137875693, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9620934, "oldest_snapshot_seqno": -1}
Feb 28 10:05:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4809 keys, 9533029 bytes, temperature: kUnknown
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137934371, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9533029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9497755, "index_size": 22172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 119584, "raw_average_key_size": 24, "raw_value_size": 9408033, "raw_average_value_size": 1956, "num_data_blocks": 924, "num_entries": 4809, "num_filter_entries": 4809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.934757) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9533029 bytes
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.936256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(26.9) write-amplify(13.4) OK, records in: 5335, records dropped: 526 output_compression: NoCompression
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.936288) EVENT_LOG_v1 {"time_micros": 1772273137936273, "job": 28, "event": "compaction_finished", "compaction_time_micros": 58805, "compaction_time_cpu_micros": 31974, "output_level": 6, "num_output_files": 1, "total_output_size": 9533029, "num_input_records": 5335, "num_output_records": 4809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137936598, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273137938394, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.875400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:37 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:05:37.938509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:05:38 compute-0 ceph-mon[76304]: osdmap e154: 3 total, 3 up, 3 in
Feb 28 10:05:38 compute-0 ceph-mon[76304]: osdmap e155: 3 total, 3 up, 3 in
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.176 243456 DEBUG nova.compute.manager [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.177 243456 DEBUG nova.compute.manager [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-5de28374-dbe4-4c8d-9f73-047a368cc895. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.177 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.317 243456 DEBUG nova.network.neutron [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.337 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.338 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.339 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 5de28374-dbe4-4c8d-9f73-047a368cc895 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.343 243456 DEBUG nova.virt.libvirt.vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.344 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.345 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.345 243456 DEBUG os_vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.349 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5de28374-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.350 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5de28374-db, col_values=(('external_ids', {'iface-id': '5de28374-dbe4-4c8d-9f73-047a368cc895', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:8b:a2', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 NetworkManager[49805]: <info>  [1772273138.3524] manager: (tap5de28374-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.357 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.360 243456 INFO os_vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db')
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.361 243456 DEBUG nova.virt.libvirt.vif [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.361 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.362 243456 DEBUG nova.network.os_vif_util [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.364 243456 DEBUG nova.virt.libvirt.guest [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:fc:8b:a2"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <target dev="tap5de28374-db"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]: </interface>
Feb 28 10:05:38 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:05:38 compute-0 NetworkManager[49805]: <info>  [1772273138.3773] manager: (tap5de28374-db): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Feb 28 10:05:38 compute-0 kernel: tap5de28374-db: entered promiscuous mode
Feb 28 10:05:38 compute-0 ovn_controller[146846]: 2026-02-28T10:05:38Z|00242|binding|INFO|Claiming lport 5de28374-dbe4-4c8d-9f73-047a368cc895 for this chassis.
Feb 28 10:05:38 compute-0 ovn_controller[146846]: 2026-02-28T10:05:38Z|00243|binding|INFO|5de28374-dbe4-4c8d-9f73-047a368cc895: Claiming fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.391 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:a2 10.100.0.6'], port_security=['fa:16:3e:fc:8b:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5de28374-dbe4-4c8d-9f73-047a368cc895) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.393 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5de28374-dbe4-4c8d-9f73-047a368cc895 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.394 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:38 compute-0 ovn_controller[146846]: 2026-02-28T10:05:38Z|00244|binding|INFO|Setting lport 5de28374-dbe4-4c8d-9f73-047a368cc895 ovn-installed in OVS
Feb 28 10:05:38 compute-0 ovn_controller[146846]: 2026-02-28T10:05:38Z|00245|binding|INFO|Setting lport 5de28374-dbe4-4c8d-9f73-047a368cc895 up in Southbound
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.402 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 systemd-udevd[274732]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 NetworkManager[49805]: <info>  [1772273138.4162] device (tap5de28374-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:38 compute-0 NetworkManager[49805]: <info>  [1772273138.4169] device (tap5de28374-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0fd40a-6f0d-4982-8a87-0229ea815d16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.442 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c905e-378b-438d-a766-f5618bf8c828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.448 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f0a8d4-bab7-42fc-8863-64f9bdaf8a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.454 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.455 243456 DEBUG nova.virt.libvirt.driver [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:fc:8b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.477 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e2c29c-a1e1-48ab-9587-d92dc625e93d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.483 243456 DEBUG nova.virt.libvirt.guest [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:38</nova:creationTime>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:38 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 10:05:38 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:05:38 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:38 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:38 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:38 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.493 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd366eb9-05e3-4cbb-8aa8-30b23134ae35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274742, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.510 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7812a1f2-170f-464e-9eb3-879eb7193ebc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274743, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274743, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.512 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.517 243456 DEBUG oslo_concurrency.lockutils [None req-c3926848-cddc-4f7e-9279-963ff314fc87 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:38.521 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:38 compute-0 nova_compute[243452]: 2026-02-28 10:05:38.531 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.099 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.099 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "abcde488-a508-4239-9f40-28af252a1cd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.100 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.102 243456 INFO nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Terminating instance
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.103 243456 DEBUG nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:39 compute-0 ceph-mon[76304]: pgmap v1110: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.112 243456 INFO nova.virt.libvirt.driver [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Instance destroyed successfully.
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.113 243456 DEBUG nova.objects.instance [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid abcde488-a508-4239-9f40-28af252a1cd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.131 243456 DEBUG nova.virt.libvirt.vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1645656228',display_name='tempest-ImagesTestJSON-server-1645656228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1645656228',id=35,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-649wx7we',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:34Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=abcde488-a508-4239-9f40-28af252a1cd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.132 243456 DEBUG nova.network.os_vif_util [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "address": "fa:16:3e:10:3d:54", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd21b0d77-e3", "ovs_interfaceid": "d21b0d77-e339-46dd-a448-d8c3473dc8e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.132 243456 DEBUG nova.network.os_vif_util [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.133 243456 DEBUG os_vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.135 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd21b0d77-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.141 243456 INFO os_vif [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:3d:54,bridge_name='br-int',has_traffic_filtering=True,id=d21b0d77-e339-46dd-a448-d8c3473dc8e8,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd21b0d77-e3')
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.473 243456 INFO nova.virt.libvirt.driver [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deleting instance files /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3_del
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.474 243456 INFO nova.virt.libvirt.driver [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deletion of /var/lib/nova/instances/abcde488-a508-4239-9f40-28af252a1cd3_del complete
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.538 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.538 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.539 243456 DEBUG nova.objects.instance [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.564 243456 INFO nova.compute.manager [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 0.46 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.564 243456 DEBUG oslo.service.loopingcall [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.566 243456 DEBUG nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.566 243456 DEBUG nova.network.neutron [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.647 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 5de28374-dbe4-4c8d-9f73-047a368cc895. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.648 243456 DEBUG nova.network.neutron [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.668 243456 DEBUG oslo_concurrency.lockutils [req-4f356008-0553-4801-98e9-513c6ca5a43f req-c3959927-fce9-4bfd-9ce9-9c1254a8982d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 292 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 122 op/s
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.936 243456 DEBUG nova.objects.instance [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:39 compute-0 nova_compute[243452]: 2026-02-28 10:05:39.957 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.063 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.063 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.091 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.148 243456 DEBUG nova.policy [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.175 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.176 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.185 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.185 243456 INFO nova.compute.claims [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.308 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001019492375708623 of space, bias 1.0, pg target 0.30584771271258687 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0026436659199438376 of space, bias 1.0, pg target 0.7930997759831513 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.943870453518754e-07 of space, bias 4.0, pg target 0.0009532644544222505 quantized to 16 (current 16)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:05:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:05:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196233387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.808 243456 DEBUG nova.network.neutron [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.827 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.830 243456 INFO nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Took 1.26 seconds to deallocate network for instance.
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.839 243456 DEBUG nova.compute.provider_tree [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:40 compute-0 ovn_controller[146846]: 2026-02-28T10:05:40Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 10:05:40 compute-0 ovn_controller[146846]: 2026-02-28T10:05:40Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:8b:a2 10.100.0.6
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.853 243456 DEBUG nova.scheduler.client.report [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.882 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.883 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.884 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.887 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.936 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.937 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.955 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.977 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:40 compute-0 nova_compute[243452]: 2026-02-28 10:05:40.983 243456 DEBUG oslo_concurrency.processutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.084 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.086 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.087 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Creating image(s)
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.109 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:41 compute-0 ceph-mon[76304]: pgmap v1111: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 292 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 122 op/s
Feb 28 10:05:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3196233387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.138 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.164 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.169 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.198 243456 DEBUG nova.policy [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af24a6d4c4c246cf80645675cc85b3c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fc066bf883d477dab2475efe229ee9f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.231 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.232 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.233 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.233 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.259 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.264 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.506 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978644247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.573 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273126.5656288, abcde488-a508-4239-9f40-28af252a1cd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.573 243456 INFO nova.compute.manager [-] [instance: abcde488-a508-4239-9f40-28af252a1cd3] VM Stopped (Lifecycle Event)
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.576 243456 DEBUG oslo_concurrency.processutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.582 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] resizing rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.611 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully created port: 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.615 243456 DEBUG nova.compute.manager [None req-1e9027c7-9570-4e5b-b796-738cc41ba5cc - - - - - -] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.618 243456 DEBUG nova.compute.provider_tree [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.659 243456 DEBUG nova.scheduler.client.report [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.668 243456 DEBUG nova.objects.instance [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lazy-loading 'migration_context' on Instance uuid b320ad06-d6fa-470f-8bd8-1ecd6a00b33a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.683 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Ensure instance console log exists: /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.684 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.687 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.715 243456 INFO nova.scheduler.client.report [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance abcde488-a508-4239-9f40-28af252a1cd3
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.791 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Successfully created port: c965ae98-2866-43c7-bd75-b717acf060bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:41 compute-0 nova_compute[243452]: 2026-02-28 10:05:41.796 243456 DEBUG oslo_concurrency.lockutils [None req-9bcc76ad-23e6-41ed-92c6-5918bd30356b 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "abcde488-a508-4239-9f40-28af252a1cd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 257 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.067 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.067 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.068 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.068 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 WARNING nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 for instance with vm_state active and task_state None.
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.069 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.070 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.070 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 DEBUG oslo_concurrency.lockutils [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 DEBUG nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:42 compute-0 nova_compute[243452]: 2026-02-28 10:05:42.071 243456 WARNING nova.compute.manager [req-12c480a8-28e0-45d4-b258-3516fd00a75a req-e25601a6-1fb8-4a88-bf1d-07fddb691672 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-5de28374-dbe4-4c8d-9f73-047a368cc895 for instance with vm_state active and task_state None.
Feb 28 10:05:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2978644247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.077 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.082 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Successfully updated port: c965ae98-2866-43c7-bd75-b717acf060bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.096 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.096 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.097 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.099 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.099 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquired lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.099 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.116 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.117 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:43 compute-0 ceph-mon[76304]: pgmap v1112: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 257 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.140 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.198 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.199 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.207 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.208 243456 INFO nova.compute.claims [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.375 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:43 compute-0 rsyslogd[1017]: imjournal from <np0005634017:nova_compute>: begin to drop messages due to rate-limiting
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.773 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.835 243456 WARNING nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.836 243456 WARNING nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651742825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 248 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 956 KiB/s wr, 109 op/s
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.911 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.916 243456 DEBUG nova.compute.provider_tree [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.936 243456 DEBUG nova.scheduler.client.report [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.963 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:43 compute-0 nova_compute[243452]: 2026-02-28 10:05:43.963 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.043 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.043 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.062 243456 DEBUG nova.compute.manager [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.062 243456 DEBUG nova.compute.manager [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.063 243456 DEBUG oslo_concurrency.lockutils [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.085 243456 INFO nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.103 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:05:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1651742825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.205 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.207 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.207 243456 INFO nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Creating image(s)
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.230 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.256 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.281 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.285 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.305 243456 DEBUG nova.policy [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.316 243456 DEBUG nova.compute.manager [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: abcde488-a508-4239-9f40-28af252a1cd3] Received event network-vif-deleted-d21b0d77-e339-46dd-a448-d8c3473dc8e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.316 243456 DEBUG nova.compute.manager [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-changed-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.317 243456 DEBUG nova.compute.manager [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Refreshing instance network info cache due to event network-changed-c965ae98-2866-43c7-bd75-b717acf060bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.317 243456 DEBUG oslo_concurrency.lockutils [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.340 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.341 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.341 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.342 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.363 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.367 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.592 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.674 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.756 243456 DEBUG nova.objects.instance [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid a9cfb8a4-5855-4ff2-8afa-3e14094e801e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.771 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.771 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Ensure instance console log exists: /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.772 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.772 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:44 compute-0 nova_compute[243452]: 2026-02-28 10:05:44.773 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:45 compute-0 ceph-mon[76304]: pgmap v1113: 305 pgs: 305 active+clean; 248 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 956 KiB/s wr, 109 op/s
Feb 28 10:05:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:05:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554449817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:05:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:05:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554449817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.419 243456 DEBUG nova.network.neutron [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Updating instance_info_cache with network_info: [{"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.447 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Releasing lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.448 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Instance network_info: |[{"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.449 243456 DEBUG oslo_concurrency.lockutils [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.449 243456 DEBUG nova.network.neutron [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Refreshing network info cache for port c965ae98-2866-43c7-bd75-b717acf060bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.456 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start _get_guest_xml network_info=[{"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.462 243456 WARNING nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.467 243456 DEBUG nova.virt.libvirt.host [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.468 243456 DEBUG nova.virt.libvirt.host [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.481 243456 DEBUG nova.virt.libvirt.host [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.482 243456 DEBUG nova.virt.libvirt.host [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.482 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.483 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.483 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.484 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.484 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.485 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.485 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.485 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.486 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.486 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.486 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.487 243456 DEBUG nova.virt.hardware [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.492 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:45 compute-0 nova_compute[243452]: 2026-02-28 10:05:45.738 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Successfully created port: 53cf23fd-52e1-4c44-b96b-ca076c163326 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:05:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 294 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Feb 28 10:05:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907547781' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.006 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.041 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.047 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/554449817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:05:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/554449817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:05:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2907547781' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302100433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.581 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.583 243456 DEBUG nova.virt.libvirt.vif [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1901392352',display_name='tempest-ServerPasswordTestJSON-server-1901392352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1901392352',id=36,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fc066bf883d477dab2475efe229ee9f',ramdisk_id='',reservation_id='r-iuyzv6qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-2078094345',owner_user_name='tempest-ServerPasswordTestJSON-2078094345-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:41Z,user_data=None,user_id='af24a6d4c4c246cf80645675cc85b3c6',uuid=b320ad06-d6fa-470f-8bd8-1ecd6a00b33a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.584 243456 DEBUG nova.network.os_vif_util [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converting VIF {"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.585 243456 DEBUG nova.network.os_vif_util [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.586 243456 DEBUG nova.objects.instance [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lazy-loading 'pci_devices' on Instance uuid b320ad06-d6fa-470f-8bd8-1ecd6a00b33a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.601 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <uuid>b320ad06-d6fa-470f-8bd8-1ecd6a00b33a</uuid>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <name>instance-00000024</name>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerPasswordTestJSON-server-1901392352</nova:name>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:45</nova:creationTime>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:user uuid="af24a6d4c4c246cf80645675cc85b3c6">tempest-ServerPasswordTestJSON-2078094345-project-member</nova:user>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:project uuid="2fc066bf883d477dab2475efe229ee9f">tempest-ServerPasswordTestJSON-2078094345</nova:project>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <nova:port uuid="c965ae98-2866-43c7-bd75-b717acf060bd">
Feb 28 10:05:46 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="serial">b320ad06-d6fa-470f-8bd8-1ecd6a00b33a</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="uuid">b320ad06-d6fa-470f-8bd8-1ecd6a00b33a</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk">
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config">
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:65:d6:df"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <target dev="tapc965ae98-28"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/console.log" append="off"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:46 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:46 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:46 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:46 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:46 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.602 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Preparing to wait for external event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.602 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.604 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.604 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.605 243456 DEBUG nova.virt.libvirt.vif [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1901392352',display_name='tempest-ServerPasswordTestJSON-server-1901392352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1901392352',id=36,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fc066bf883d477dab2475efe229ee9f',ramdisk_id='',reservation_id='r-iuyzv6qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-2078094345',owner_user_name='tempest-ServerPasswordTestJSON-2078094345-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:41Z,user_data=None,user_id='af24a6d4c4c246cf80645675cc85b3c6',uuid=b320ad06-d6fa-470f-8bd8-1ecd6a00b33a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.605 243456 DEBUG nova.network.os_vif_util [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converting VIF {"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.606 243456 DEBUG nova.network.os_vif_util [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.606 243456 DEBUG os_vif [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.607 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc965ae98-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc965ae98-28, col_values=(('external_ids', {'iface-id': 'c965ae98-2866-43c7-bd75-b717acf060bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:d6:df', 'vm-uuid': 'b320ad06-d6fa-470f-8bd8-1ecd6a00b33a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:46 compute-0 NetworkManager[49805]: <info>  [1772273146.6143] manager: (tapc965ae98-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.619 243456 INFO os_vif [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28')
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.677 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.678 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.678 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] No VIF found with MAC fa:16:3e:65:d6:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.679 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Using config drive
Feb 28 10:05:46 compute-0 nova_compute[243452]: 2026-02-28 10:05:46.696 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.019 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Creating config drive at /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.024 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmugf1ih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.076 243456 DEBUG nova.network.neutron [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Updated VIF entry in instance network info cache for port c965ae98-2866-43c7-bd75-b717acf060bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.078 243456 DEBUG nova.network.neutron [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Updating instance_info_cache with network_info: [{"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.096 243456 DEBUG oslo_concurrency.lockutils [req-c0426d3c-3b6e-4c42-ab42-e820949c0b9f req-8e63279b-b363-4e6e-83e5-0afba7d68d98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:47 compute-0 ceph-mon[76304]: pgmap v1114: 305 pgs: 305 active+clean; 294 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Feb 28 10:05:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3302100433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.158 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmugf1ih" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.199 243456 DEBUG nova.storage.rbd_utils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] rbd image b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.204 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.231 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Successfully updated port: 53cf23fd-52e1-4c44-b96b-ca076c163326 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.257 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.257 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.257 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.317 243456 DEBUG nova.compute.manager [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-changed-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.318 243456 DEBUG nova.compute.manager [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Refreshing instance network info cache due to event network-changed-53cf23fd-52e1-4c44-b96b-ca076c163326. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.318 243456 DEBUG oslo_concurrency.lockutils [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.358 243456 DEBUG oslo_concurrency.processutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.358 243456 INFO nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Deleting local config drive /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a/disk.config because it was imported into RBD.
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.365 243456 DEBUG nova.network.neutron [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.385 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.386 243456 DEBUG oslo_concurrency.lockutils [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.387 243456 DEBUG nova.network.neutron [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.393 243456 DEBUG nova.virt.libvirt.vif [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.394 243456 DEBUG nova.network.os_vif_util [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.394 243456 DEBUG nova.network.os_vif_util [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.395 243456 DEBUG os_vif [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.396 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.396 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.399 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.399 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81d09b7b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.400 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81d09b7b-e0, col_values=(('external_ids', {'iface-id': '81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:bd:05', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4039] manager: (tap81d09b7b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.405 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4094] manager: (tapc965ae98-28): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Feb 28 10:05:47 compute-0 kernel: tapc965ae98-28: entered promiscuous mode
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00246|binding|INFO|Claiming lport c965ae98-2866-43c7-bd75-b717acf060bd for this chassis.
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00247|binding|INFO|c965ae98-2866-43c7-bd75-b717acf060bd: Claiming fa:16:3e:65:d6:df 10.100.0.12
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.414 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.417 243456 INFO os_vif [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0')
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.419 243456 DEBUG nova.virt.libvirt.vif [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.419 243456 DEBUG nova.network.os_vif_util [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.420 243456 DEBUG nova.network.os_vif_util [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.422 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:d6:df 10.100.0.12'], port_security=['fa:16:3e:65:d6:df 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b320ad06-d6fa-470f-8bd8-1ecd6a00b33a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-053a4c12-58de-4356-a494-50a223396715', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fc066bf883d477dab2475efe229ee9f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f898752a-105c-430d-85b9-be3165305b0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec1e47c6-e968-4491-aa87-aafb173f5f52, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c965ae98-2866-43c7-bd75-b717acf060bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.424 243456 DEBUG nova.virt.libvirt.guest [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:1a:bd:05"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <target dev="tap81d09b7b-e0"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]: </interface>
Feb 28 10:05:47 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.425 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c965ae98-2866-43c7-bd75-b717acf060bd in datapath 053a4c12-58de-4356-a494-50a223396715 bound to our chassis
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.428 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 053a4c12-58de-4356-a494-50a223396715
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00248|binding|INFO|Setting lport c965ae98-2866-43c7-bd75-b717acf060bd ovn-installed in OVS
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00249|binding|INFO|Setting lport c965ae98-2866-43c7-bd75-b717acf060bd up in Southbound
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 kernel: tap81d09b7b-e0: entered promiscuous mode
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4405] manager: (tap81d09b7b-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00250|binding|INFO|Claiming lport 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 for this chassis.
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00251|binding|INFO|81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9: Claiming fa:16:3e:1a:bd:05 10.100.0.13
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.441 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e09777-fdd0-478e-b3c4-7eaa270961d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.443 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap053a4c12-51 in ovnmeta-053a4c12-58de-4356-a494-50a223396715 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.445 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap053a4c12-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31f95d0f-0f0b-4e3c-8d1e-c6628dc8576a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.447 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42079599-6fd7-4f62-8a9a-bb823ef4b096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.451 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bd:05 10.100.0.13'], port_security=['fa:16:3e:1a:bd:05 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:47 compute-0 systemd-udevd[275306]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:47 compute-0 systemd-udevd[275307]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00252|binding|INFO|Setting lport 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 ovn-installed in OVS
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00253|binding|INFO|Setting lport 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 up in Southbound
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 systemd-machined[209480]: New machine qemu-40-instance-00000024.
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.467 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[96f93f1a-b775-4260-881f-b79c80f6f191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4739] device (tapc965ae98-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4750] device (tapc965ae98-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:47 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4770] device (tap81d09b7b-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.4776] device (tap81d09b7b-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.483 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5615ad-d678-4b52-81a8-8cf6f421836c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.510 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8920368a-bf40-4d95-9fce-83ea2defa8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.5159] manager: (tap053a4c12-50): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.516 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[863e2592-511e-4968-b8ae-63664016bf46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.541 243456 DEBUG nova.virt.libvirt.driver [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.541 243456 DEBUG nova.virt.libvirt.driver [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.541 243456 DEBUG nova.virt.libvirt.driver [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.541 243456 DEBUG nova.virt.libvirt.driver [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:fc:8b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.541 243456 DEBUG nova.virt.libvirt.driver [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:1a:bd:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.547 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c5591a33-2b8d-48b1-874f-533836f7bb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.554 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c168fc97-a489-44a9-ac28-c6f132d655a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.568 243456 DEBUG nova.virt.libvirt.guest [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:47</nova:creationTime>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:47 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 10:05:47 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:05:47 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:05:47 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:47 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:47 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:47 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.5764] device (tap053a4c12-50): carrier: link connected
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.580 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e53c28ef-5ec4-4a0d-8a07-0af92163fd17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.593 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[137776be-ccab-418e-be07-816ff576f070]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap053a4c12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ec:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465486, 'reachable_time': 42649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275339, 'error': None, 'target': 'ovnmeta-053a4c12-58de-4356-a494-50a223396715', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.595 243456 DEBUG oslo_concurrency.lockutils [None req-1298fc1f-112b-4d10-bd76-6bf38f48901d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7442b125-170a-4f3e-a19a-fafca1f41e00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:ec16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465486, 'tstamp': 465486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275340, 'error': None, 'target': 'ovnmeta-053a4c12-58de-4356-a494-50a223396715', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29e8c903-f748-456c-8bd0-741f623db5c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap053a4c12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ec:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465486, 'reachable_time': 42649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275341, 'error': None, 'target': 'ovnmeta-053a4c12-58de-4356-a494-50a223396715', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.651 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2356b5-bc6e-4a36-977f-97c28297dca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.689 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[077fd3b0-7f91-4e68-bb36-a6fb8a9299fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.691 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap053a4c12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.691 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.692 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap053a4c12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 NetworkManager[49805]: <info>  [1772273147.6962] manager: (tap053a4c12-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Feb 28 10:05:47 compute-0 kernel: tap053a4c12-50: entered promiscuous mode
Feb 28 10:05:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Feb 28 10:05:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.706 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap053a4c12-50, col_values=(('external_ids', {'iface-id': '01e6d491-6d1d-45eb-abe6-4405bc56f882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 ovn_controller[146846]: 2026-02-28T10:05:47Z|00254|binding|INFO|Releasing lport 01e6d491-6d1d-45eb-abe6-4405bc56f882 from this chassis (sb_readonly=0)
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.711 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/053a4c12-58de-4356-a494-50a223396715.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/053a4c12-58de-4356-a494-50a223396715.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6d2faf-0baf-40c6-9dee-e08b33e68e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:47 compute-0 nova_compute[243452]: 2026-02-28 10:05:47.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.717 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-053a4c12-58de-4356-a494-50a223396715
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/053a4c12-58de-4356-a494-50a223396715.pid.haproxy
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 053a4c12-58de-4356-a494-50a223396715
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:47.718 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-053a4c12-58de-4356-a494-50a223396715', 'env', 'PROCESS_TAG=haproxy-053a4c12-58de-4356-a494-50a223396715', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/053a4c12-58de-4356-a494-50a223396715.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 297 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.010 243456 DEBUG nova.compute.manager [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.010 243456 DEBUG oslo_concurrency.lockutils [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.011 243456 DEBUG oslo_concurrency.lockutils [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.011 243456 DEBUG oslo_concurrency.lockutils [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.011 243456 DEBUG nova.compute.manager [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.011 243456 WARNING nova.compute.manager [req-9d8f4f24-63a3-40ba-ab73-ed95dd0feeb5 req-ae75f884-7090-4305-ae00-d9597ed70be1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 for instance with vm_state active and task_state None.
Feb 28 10:05:48 compute-0 podman[275409]: 2026-02-28 10:05:48.059896796 +0000 UTC m=+0.045188751 container create 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.092 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273148.0916004, b320ad06-d6fa-470f-8bd8-1ecd6a00b33a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.093 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] VM Started (Lifecycle Event)
Feb 28 10:05:48 compute-0 systemd[1]: Started libpod-conmon-8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73.scope.
Feb 28 10:05:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.123 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b7b906965368f0fff687b13a2b91bd0738b1d8b7111a0abfa54a62225d0694/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.127 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273148.0918052, b320ad06-d6fa-470f-8bd8-1ecd6a00b33a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.128 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] VM Paused (Lifecycle Event)
Feb 28 10:05:48 compute-0 podman[275409]: 2026-02-28 10:05:48.034515353 +0000 UTC m=+0.019807318 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:48 compute-0 podman[275409]: 2026-02-28 10:05:48.139933406 +0000 UTC m=+0.125225381 container init 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:05:48 compute-0 podman[275409]: 2026-02-28 10:05:48.14506061 +0000 UTC m=+0.130352565 container start 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.160 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:48 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [NOTICE]   (275434) : New worker (275436) forked
Feb 28 10:05:48 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [NOTICE]   (275434) : Loading success.
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.167 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.187 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.201 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.204 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[881672fd-3404-4a59-b010-b169c7e3db13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.239 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d0924d-b793-4e6d-91f4-9218c4952b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.242 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4b2078-8ff0-4a31-9554-0d8e35210bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.267 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5fc1e8-14ca-41bc-b23c-4b61b4dea63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.271 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b826365-0401-42ed-a2bc-fb85af259e33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275450, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc12fbca-b821-4faf-9e0d-d2b613a265e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275451, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275451, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.300 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.301 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.305 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:48.305 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.469 243456 DEBUG nova.network.neutron [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updating instance_info_cache with network_info: [{"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.488 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.489 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Instance network_info: |[{"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.490 243456 DEBUG oslo_concurrency.lockutils [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.491 243456 DEBUG nova.network.neutron [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Refreshing network info cache for port 53cf23fd-52e1-4c44-b96b-ca076c163326 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.499 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Start _get_guest_xml network_info=[{"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.505 243456 WARNING nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.516 243456 DEBUG nova.virt.libvirt.host [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.517 243456 DEBUG nova.virt.libvirt.host [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.521 243456 DEBUG nova.virt.libvirt.host [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.522 243456 DEBUG nova.virt.libvirt.host [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.522 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.523 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.524 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.524 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.525 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.525 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.526 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.526 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.527 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.527 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.527 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.528 243456 DEBUG nova.virt.hardware [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.533 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:48 compute-0 nova_compute[243452]: 2026-02-28 10:05:48.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:48 compute-0 ceph-mon[76304]: osdmap e156: 3 total, 3 up, 3 in
Feb 28 10:05:48 compute-0 ceph-mon[76304]: pgmap v1116: 305 pgs: 305 active+clean; 297 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Feb 28 10:05:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870474010' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.109 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.142 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.147 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.206 243456 DEBUG nova.network.neutron [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.207 243456 DEBUG nova.network.neutron [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.241 243456 DEBUG oslo_concurrency.lockutils [req-9f34ec91-bf88-42ee-acae-90f9de206fbc req-41c7e43f-0a0e-4d95-a43a-795b20d14c96 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.530 243456 DEBUG nova.compute.manager [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.530 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.532 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.532 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.532 243456 DEBUG nova.compute.manager [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Processing event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.532 243456 DEBUG nova.compute.manager [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.533 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.533 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.533 243456 DEBUG oslo_concurrency.lockutils [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.533 243456 DEBUG nova.compute.manager [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] No waiting events found dispatching network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.534 243456 WARNING nova.compute.manager [req-58d5847d-8694-446c-bf1c-714ca2ad5e51 req-3b3af1c7-67e7-4b66-a752-c1051de319fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received unexpected event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd for instance with vm_state building and task_state spawning.
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.534 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.538 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.541 243456 INFO nova.virt.libvirt.driver [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Instance spawned successfully.
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.541 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.546 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273149.5465136, b320ad06-d6fa-470f-8bd8-1ecd6a00b33a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.547 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] VM Resumed (Lifecycle Event)
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.567 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.573 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.573 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.574 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.574 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.580 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.607 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.632 243456 INFO nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Took 8.55 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.633 243456 DEBUG nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:05:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3746472541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.687 243456 INFO nova.compute.manager [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Took 9.55 seconds to build instance.
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.696 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.697 243456 DEBUG nova.virt.libvirt.vif [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-47808848',display_name='tempest-ImagesTestJSON-server-47808848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-47808848',id=37,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-9gc4dzuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,
task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:44Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=a9cfb8a4-5855-4ff2-8afa-3e14094e801e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.698 243456 DEBUG nova.network.os_vif_util [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.698 243456 DEBUG nova.network.os_vif_util [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.699 243456 DEBUG nova.objects.instance [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid a9cfb8a4-5855-4ff2-8afa-3e14094e801e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.701 243456 DEBUG oslo_concurrency.lockutils [None req-804e02f2-2f89-4976-b70c-68e51a9910ae af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.709 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <uuid>a9cfb8a4-5855-4ff2-8afa-3e14094e801e</uuid>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <name>instance-00000025</name>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-47808848</nova:name>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:48</nova:creationTime>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <nova:port uuid="53cf23fd-52e1-4c44-b96b-ca076c163326">
Feb 28 10:05:49 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="serial">a9cfb8a4-5855-4ff2-8afa-3e14094e801e</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="uuid">a9cfb8a4-5855-4ff2-8afa-3e14094e801e</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk">
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config">
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:05:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:24:6e:f7"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <target dev="tap53cf23fd-52"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/console.log" append="off"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:05:49 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:05:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:49 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.709 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Preparing to wait for external event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.709 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.710 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.710 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.710 243456 DEBUG nova.virt.libvirt.vif [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-47808848',display_name='tempest-ImagesTestJSON-server-47808848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-47808848',id=37,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-9gc4dzuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:05:44Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=a9cfb8a4-5855-4ff2-8afa-3e14094e801e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.711 243456 DEBUG nova.network.os_vif_util [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.711 243456 DEBUG nova.network.os_vif_util [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.711 243456 DEBUG os_vif [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.712 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.715 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53cf23fd-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.715 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53cf23fd-52, col_values=(('external_ids', {'iface-id': '53cf23fd-52e1-4c44-b96b-ca076c163326', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:6e:f7', 'vm-uuid': 'a9cfb8a4-5855-4ff2-8afa-3e14094e801e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:49 compute-0 NetworkManager[49805]: <info>  [1772273149.7185] manager: (tap53cf23fd-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3870474010' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3746472541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.724 243456 INFO os_vif [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52')
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.777 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.778 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.778 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:24:6e:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.778 243456 INFO nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Using config drive
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.807 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 325 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.3 MiB/s wr, 92 op/s
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.939 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-cd167ca6-85b1-4795-9ec3-1dab9ee468dd" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.940 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-cd167ca6-85b1-4795-9ec3-1dab9ee468dd" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:49 compute-0 nova_compute[243452]: 2026-02-28 10:05:49.940 243456 DEBUG nova.objects.instance [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:49 compute-0 ovn_controller[146846]: 2026-02-28T10:05:49Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:bd:05 10.100.0.13
Feb 28 10:05:49 compute-0 ovn_controller[146846]: 2026-02-28T10:05:49Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:bd:05 10.100.0.13
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.230 243456 DEBUG nova.compute.manager [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.231 243456 DEBUG oslo_concurrency.lockutils [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.231 243456 DEBUG oslo_concurrency.lockutils [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.232 243456 DEBUG oslo_concurrency.lockutils [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.232 243456 DEBUG nova.compute.manager [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.232 243456 WARNING nova.compute.manager [req-12a5236f-c40b-46a2-905c-819bc26c2068 req-ea0bee73-470c-48a0-814f-2eb115d35ea0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 for instance with vm_state active and task_state None.
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.249 243456 DEBUG nova.network.neutron [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updated VIF entry in instance network info cache for port 53cf23fd-52e1-4c44-b96b-ca076c163326. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.250 243456 DEBUG nova.network.neutron [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updating instance_info_cache with network_info: [{"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.268 243456 DEBUG oslo_concurrency.lockutils [req-2fe770e8-fad2-4e5d-b5ed-489f6eecbd88 req-9da490d8-f05c-4503-8958-b2da38e23aea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.281 243456 INFO nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Creating config drive at /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.285 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyiq6n4x7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.417 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyiq6n4x7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.442 243456 DEBUG nova.storage.rbd_utils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.447 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.571 243456 DEBUG oslo_concurrency.processutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.572 243456 INFO nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Deleting local config drive /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e/disk.config because it was imported into RBD.
Feb 28 10:05:50 compute-0 kernel: tap53cf23fd-52: entered promiscuous mode
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.6149] manager: (tap53cf23fd-52): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Feb 28 10:05:50 compute-0 ovn_controller[146846]: 2026-02-28T10:05:50Z|00255|binding|INFO|Claiming lport 53cf23fd-52e1-4c44-b96b-ca076c163326 for this chassis.
Feb 28 10:05:50 compute-0 ovn_controller[146846]: 2026-02-28T10:05:50Z|00256|binding|INFO|53cf23fd-52e1-4c44-b96b-ca076c163326: Claiming fa:16:3e:24:6e:f7 10.100.0.5
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.626 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:6e:f7 10.100.0.5'], port_security=['fa:16:3e:24:6e:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a9cfb8a4-5855-4ff2-8afa-3e14094e801e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53cf23fd-52e1-4c44-b96b-ca076c163326) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.627 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53cf23fd-52e1-4c44-b96b-ca076c163326 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.623 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.629 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:50 compute-0 ovn_controller[146846]: 2026-02-28T10:05:50Z|00257|binding|INFO|Setting lport 53cf23fd-52e1-4c44-b96b-ca076c163326 ovn-installed in OVS
Feb 28 10:05:50 compute-0 ovn_controller[146846]: 2026-02-28T10:05:50Z|00258|binding|INFO|Setting lport 53cf23fd-52e1-4c44-b96b-ca076c163326 up in Southbound
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.638 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d851f50-e3eb-4f7c-9515-2d43e9df1246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.638 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.640 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.640 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc179cd-63bc-4506-8d92-997f90983801]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.641 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46c19b73-e08b-4c0d-b188-01828321eab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 systemd-machined[209480]: New machine qemu-41-instance-00000025.
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.648 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf54905-7cae-4f9f-b468-40689f408d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Feb 28 10:05:50 compute-0 systemd-udevd[275590]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.663 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93302b00-e8c0-48d8-ae27-893b3553e56c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.6757] device (tap53cf23fd-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.6763] device (tap53cf23fd-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.683 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00e45a7b-101b-4147-a16c-357c62bc7342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.6875] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.690 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e57c93da-0bd4-497b-9ca1-9615d68c2519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.711 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[50b6c8b2-9a4a-4552-9532-00f4f4d08d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.714 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9b107b7e-eeb6-49f5-8d83-5638efcd3f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ceph-mon[76304]: pgmap v1117: 305 pgs: 305 active+clean; 325 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.3 MiB/s wr, 92 op/s
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.7330] device (tap3a8395bc-d0): carrier: link connected
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[81166a56-7291-4b85-893d-cc7fb60aa543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.749 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84ce5471-b518-4ab1-a341-9b3d6417f0f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465801, 'reachable_time': 37642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275620, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.760 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8dc4cc-1f1b-48c1-8bea-a3d49b14a466]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465801, 'tstamp': 465801}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275621, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[283cca5b-ee3a-4c46-871e-c8ec6f4abad4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465801, 'reachable_time': 37642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275622, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c711699e-d93c-4d33-87bd-c9f7963fb565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.798 243456 DEBUG nova.objects.instance [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.817 243456 DEBUG nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[645154a0-baa2-413b-8ce3-178ed100111f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.846 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.846 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.846 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:50 compute-0 NetworkManager[49805]: <info>  [1772273150.8485] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Feb 28 10:05:50 compute-0 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.854 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:50 compute-0 ovn_controller[146846]: 2026-02-28T10:05:50Z|00259|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.858 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.865 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb7d936-9c77-46d5-9ec5-ee75b594e601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.866 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:05:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:50.866 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:05:50 compute-0 nova_compute[243452]: 2026-02-28 10:05:50.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.093 243456 DEBUG nova.policy [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.100 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273151.0905478, a9cfb8a4-5855-4ff2-8afa-3e14094e801e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.100 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] VM Started (Lifecycle Event)
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.125 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.130 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273151.0922744, a9cfb8a4-5855-4ff2-8afa-3e14094e801e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.131 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] VM Paused (Lifecycle Event)
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.165 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.168 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.199 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:51 compute-0 podman[275696]: 2026-02-28 10:05:51.234198285 +0000 UTC m=+0.055329366 container create 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:05:51 compute-0 systemd[1]: Started libpod-conmon-4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29.scope.
Feb 28 10:05:51 compute-0 podman[275696]: 2026-02-28 10:05:51.203298467 +0000 UTC m=+0.024429598 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:05:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b993b25ac99679bb0f94483344a6fd93e034cb4cbc8ac30386a074979859cc20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:51 compute-0 podman[275696]: 2026-02-28 10:05:51.346878973 +0000 UTC m=+0.168010054 container init 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:05:51 compute-0 podman[275696]: 2026-02-28 10:05:51.354401935 +0000 UTC m=+0.175532976 container start 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:05:51 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [NOTICE]   (275716) : New worker (275719) forked
Feb 28 10:05:51 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [NOTICE]   (275716) : Loading success.
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.773 243456 DEBUG nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Successfully updated port: cd167ca6-85b1-4795-9ec3-1dab9ee468dd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.791 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.791 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.792 243456 DEBUG nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:05:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.909 243456 DEBUG nova.compute.manager [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-changed-cd167ca6-85b1-4795-9ec3-1dab9ee468dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.910 243456 DEBUG nova.compute.manager [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing instance network info cache due to event network-changed-cd167ca6-85b1-4795-9ec3-1dab9ee468dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.910 243456 DEBUG oslo_concurrency.lockutils [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.961 243456 WARNING nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.962 243456 WARNING nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:51 compute-0 nova_compute[243452]: 2026-02-28 10:05:51.962 243456 WARNING nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:05:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.726 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.727 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.728 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.728 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.728 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.730 243456 INFO nova.compute.manager [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Terminating instance
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.732 243456 DEBUG nova.compute.manager [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.797 243456 DEBUG nova.compute.manager [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.797 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.798 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.798 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.799 243456 DEBUG nova.compute.manager [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Processing event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.799 243456 DEBUG nova.compute.manager [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.800 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.800 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.801 243456 DEBUG oslo_concurrency.lockutils [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.801 243456 DEBUG nova.compute.manager [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] No waiting events found dispatching network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.801 243456 WARNING nova.compute.manager [req-8887011f-9e16-4936-a80e-cfb019b72d89 req-7090b8de-ea60-4048-a5f0-d1589a0c81b9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received unexpected event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 for instance with vm_state building and task_state spawning.
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.803 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.807 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273152.8074875, a9cfb8a4-5855-4ff2-8afa-3e14094e801e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.808 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] VM Resumed (Lifecycle Event)
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.813 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.818 243456 INFO nova.virt.libvirt.driver [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Instance spawned successfully.
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.819 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.834 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.843 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.864 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.866 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.866 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.867 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.867 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.868 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.868 243456 DEBUG nova.virt.libvirt.driver [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:05:52 compute-0 kernel: tapc965ae98-28 (unregistering): left promiscuous mode
Feb 28 10:05:52 compute-0 NetworkManager[49805]: <info>  [1772273152.8954] device (tapc965ae98-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:52 compute-0 ovn_controller[146846]: 2026-02-28T10:05:52Z|00260|binding|INFO|Releasing lport c965ae98-2866-43c7-bd75-b717acf060bd from this chassis (sb_readonly=0)
Feb 28 10:05:52 compute-0 ovn_controller[146846]: 2026-02-28T10:05:52Z|00261|binding|INFO|Setting lport c965ae98-2866-43c7-bd75-b717acf060bd down in Southbound
Feb 28 10:05:52 compute-0 ovn_controller[146846]: 2026-02-28T10:05:52Z|00262|binding|INFO|Removing iface tapc965ae98-28 ovn-installed in OVS
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:52.911 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:d6:df 10.100.0.12'], port_security=['fa:16:3e:65:d6:df 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b320ad06-d6fa-470f-8bd8-1ecd6a00b33a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-053a4c12-58de-4356-a494-50a223396715', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fc066bf883d477dab2475efe229ee9f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f898752a-105c-430d-85b9-be3165305b0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec1e47c6-e968-4491-aa87-aafb173f5f52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c965ae98-2866-43c7-bd75-b717acf060bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:52.913 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c965ae98-2866-43c7-bd75-b717acf060bd in datapath 053a4c12-58de-4356-a494-50a223396715 unbound from our chassis
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.917 243456 INFO nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Took 8.71 seconds to spawn the instance on the hypervisor.
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.917 243456 DEBUG nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:52.922 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 053a4c12-58de-4356-a494-50a223396715, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:05:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:52.923 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b65bab53-5ff6-42c2-9c8b-d14bdddac743]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.923 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:52.925 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-053a4c12-58de-4356-a494-50a223396715 namespace which is not needed anymore
Feb 28 10:05:52 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Feb 28 10:05:52 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 3.817s CPU time.
Feb 28 10:05:52 compute-0 systemd-machined[209480]: Machine qemu-40-instance-00000024 terminated.
Feb 28 10:05:52 compute-0 ceph-mon[76304]: pgmap v1118: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Feb 28 10:05:52 compute-0 nova_compute[243452]: 2026-02-28 10:05:52.986 243456 INFO nova.compute.manager [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Took 9.81 seconds to build instance.
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.008 243456 DEBUG oslo_concurrency.lockutils [None req-919e16e0-00ab-4772-87fb-e33ce6eb1720 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:53 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [NOTICE]   (275434) : haproxy version is 2.8.14-c23fe91
Feb 28 10:05:53 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [NOTICE]   (275434) : path to executable is /usr/sbin/haproxy
Feb 28 10:05:53 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [ALERT]    (275434) : Current worker (275436) exited with code 143 (Terminated)
Feb 28 10:05:53 compute-0 neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715[275430]: [WARNING]  (275434) : All workers exited. Exiting... (0)
Feb 28 10:05:53 compute-0 systemd[1]: libpod-8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73.scope: Deactivated successfully.
Feb 28 10:05:53 compute-0 conmon[275430]: conmon 8c8a76e2f700a1585b6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73.scope/container/memory.events
Feb 28 10:05:53 compute-0 podman[275749]: 2026-02-28 10:05:53.035725422 +0000 UTC m=+0.041304702 container died 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73-userdata-shm.mount: Deactivated successfully.
Feb 28 10:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-32b7b906965368f0fff687b13a2b91bd0738b1d8b7111a0abfa54a62225d0694-merged.mount: Deactivated successfully.
Feb 28 10:05:53 compute-0 podman[275749]: 2026-02-28 10:05:53.073897746 +0000 UTC m=+0.079477026 container cleanup 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:05:53 compute-0 systemd[1]: libpod-conmon-8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73.scope: Deactivated successfully.
Feb 28 10:05:53 compute-0 podman[275781]: 2026-02-28 10:05:53.140123077 +0000 UTC m=+0.033167333 container remove 8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.145 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebe79bb-46aa-4713-b143-500c3b40e875]: (4, ('Sat Feb 28 10:05:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715 (8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73)\n8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73\nSat Feb 28 10:05:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-053a4c12-58de-4356-a494-50a223396715 (8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73)\n8c8a76e2f700a1585b6d7b52a048fdbfd2e309c20ad5f8bc3263d43ba9d46b73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.147 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f956edc-3386-4901-8b8e-3a51e1b69439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.148 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap053a4c12-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.159 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 kernel: tap053a4c12-50: left promiscuous mode
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.167 243456 INFO nova.virt.libvirt.driver [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Instance destroyed successfully.
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.167 243456 DEBUG nova.objects.instance [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lazy-loading 'resources' on Instance uuid b320ad06-d6fa-470f-8bd8-1ecd6a00b33a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.171 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eedd3460-e854-4da8-a087-e2876037df7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.184 243456 DEBUG nova.virt.libvirt.vif [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1901392352',display_name='tempest-ServerPasswordTestJSON-server-1901392352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1901392352',id=36,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fc066bf883d477dab2475efe229ee9f',ramdisk_id='',reservation_id='r-iuyzv6qr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-2078094345',owner_user_name='tempest-ServerPasswordTestJSON-2078094345-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:51Z,user_data=None,user_id='af24a6d4c4c246cf80645675cc85b3c6',uuid=b320ad06-d6fa-470f-8bd8-1ecd6a00b33a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.184 243456 DEBUG nova.network.os_vif_util [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converting VIF {"id": "c965ae98-2866-43c7-bd75-b717acf060bd", "address": "fa:16:3e:65:d6:df", "network": {"id": "053a4c12-58de-4356-a494-50a223396715", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1046689882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fc066bf883d477dab2475efe229ee9f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc965ae98-28", "ovs_interfaceid": "c965ae98-2866-43c7-bd75-b717acf060bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.185 243456 DEBUG nova.network.os_vif_util [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.185 243456 DEBUG os_vif [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.187 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc965ae98-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.188 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9d15a3-d8de-42f9-b5f0-aaba16fd1e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.192 243456 INFO os_vif [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:d6:df,bridge_name='br-int',has_traffic_filtering=True,id=c965ae98-2866-43c7-bd75-b717acf060bd,network=Network(053a4c12-58de-4356-a494-50a223396715),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc965ae98-28')
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.193 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c76e9f-57c1-4b6d-8238-1bd25e41ae52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0167af5b-4152-43a6-828c-06b7f012e9cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465479, 'reachable_time': 37843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275809, 'error': None, 'target': 'ovnmeta-053a4c12-58de-4356-a494-50a223396715', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d053a4c12\x2d58de\x2d4356\x2da494\x2d50a223396715.mount: Deactivated successfully.
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.213 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-053a4c12-58de-4356-a494-50a223396715 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:05:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:53.214 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d542245c-b73b-48da-8d9e-d65622ce1c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.450 243456 INFO nova.virt.libvirt.driver [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Deleting instance files /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_del
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.451 243456 INFO nova.virt.libvirt.driver [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Deletion of /var/lib/nova/instances/b320ad06-d6fa-470f-8bd8-1ecd6a00b33a_del complete
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.492 243456 INFO nova.compute.manager [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.492 243456 DEBUG oslo.service.loopingcall [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.492 243456 DEBUG nova.compute.manager [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.493 243456 DEBUG nova.network.neutron [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:05:53 compute-0 nova_compute[243452]: 2026-02-28 10:05:53.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 326 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 96 op/s
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.061 243456 DEBUG nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-unplugged-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.062 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.062 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.063 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.063 243456 DEBUG nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] No waiting events found dispatching network-vif-unplugged-c965ae98-2866-43c7-bd75-b717acf060bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.063 243456 DEBUG nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-unplugged-c965ae98-2866-43c7-bd75-b717acf060bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.064 243456 DEBUG nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.064 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.065 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.065 243456 DEBUG oslo_concurrency.lockutils [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.066 243456 DEBUG nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] No waiting events found dispatching network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.066 243456 WARNING nova.compute.manager [req-9c8a5a57-7eba-400a-8e51-be3cb69c8fb9 req-93db017e-b461-4a50-bfa5-e2e5f0819e1b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received unexpected event network-vif-plugged-c965ae98-2866-43c7-bd75-b717acf060bd for instance with vm_state active and task_state deleting.
Feb 28 10:05:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:54.308 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.550 243456 DEBUG nova.network.neutron [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.557 243456 DEBUG nova.compute.manager [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.565 243456 INFO nova.compute.manager [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Took 1.07 seconds to deallocate network for instance.
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.611 243456 INFO nova.compute.manager [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] instance snapshotting
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.622 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.623 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.783 243456 DEBUG oslo_concurrency.processutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:05:54 compute-0 nova_compute[243452]: 2026-02-28 10:05:54.919 243456 INFO nova.virt.libvirt.driver [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Beginning live snapshot process
Feb 28 10:05:54 compute-0 ceph-mon[76304]: pgmap v1119: 305 pgs: 305 active+clean; 326 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 96 op/s
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.081 243456 DEBUG nova.compute.manager [req-f2dfb183-28db-4d4b-9f1c-d84527a53d54 req-abe292ce-f9e9-43fa-8666-4786320bdc4d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Received event network-vif-deleted-c965ae98-2866-43c7-bd75-b717acf060bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.093 243456 DEBUG nova.virt.libvirt.imagebackend [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:05:55 compute-0 podman[275879]: 2026-02-28 10:05:55.129404172 +0000 UTC m=+0.066836250 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:05:55 compute-0 podman[275871]: 2026-02-28 10:05:55.186775655 +0000 UTC m=+0.122891916 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.270 243456 DEBUG nova.storage.rbd_utils [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(4942be631a9a41bd8b3c56ce8d402979) on rbd image(a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:05:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103322549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.344 243456 DEBUG oslo_concurrency.processutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.352 243456 DEBUG nova.compute.provider_tree [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.369 243456 DEBUG nova.scheduler.client.report [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.556 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.591 243456 INFO nova.scheduler.client.report [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Deleted allocations for instance b320ad06-d6fa-470f-8bd8-1ecd6a00b33a
Feb 28 10:05:55 compute-0 nova_compute[243452]: 2026-02-28 10:05:55.653 243456 DEBUG oslo_concurrency.lockutils [None req-5759762d-9592-4eab-bf4d-112db47f7a4c af24a6d4c4c246cf80645675cc85b3c6 2fc066bf883d477dab2475efe229ee9f - - default default] Lock "b320ad06-d6fa-470f-8bd8-1ecd6a00b33a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 294 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.3 MiB/s wr, 221 op/s
Feb 28 10:05:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Feb 28 10:05:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Feb 28 10:05:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2103322549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:05:56 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.053 243456 DEBUG nova.storage.rbd_utils [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk@4942be631a9a41bd8b3c56ce8d402979 to images/38cb5514-6684-4136-959d-9444907f6bfa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.146 243456 DEBUG nova.storage.rbd_utils [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/38cb5514-6684-4136-959d-9444907f6bfa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.416 243456 DEBUG nova.storage.rbd_utils [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(4942be631a9a41bd8b3c56ce8d402979) on rbd image(a9cfb8a4-5855-4ff2-8afa-3e14094e801e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.649 243456 DEBUG nova.network.neutron [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.669 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.669 243456 DEBUG oslo_concurrency.lockutils [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.670 243456 DEBUG nova.network.neutron [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Refreshing network info cache for port cd167ca6-85b1-4795-9ec3-1dab9ee468dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.673 243456 DEBUG nova.virt.libvirt.vif [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.673 243456 DEBUG nova.network.os_vif_util [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.674 243456 DEBUG nova.network.os_vif_util [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.674 243456 DEBUG os_vif [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.675 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.675 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.679 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd167ca6-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.680 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd167ca6-85, col_values=(('external_ids', {'iface-id': 'cd167ca6-85b1-4795-9ec3-1dab9ee468dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:94:dc', 'vm-uuid': '3a118849-0d0a-4196-9bdd-65333da2e8f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 NetworkManager[49805]: <info>  [1772273156.6829] manager: (tapcd167ca6-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.688 243456 INFO os_vif [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85')
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.688 243456 DEBUG nova.virt.libvirt.vif [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.689 243456 DEBUG nova.network.os_vif_util [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.689 243456 DEBUG nova.network.os_vif_util [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.693 243456 DEBUG nova.virt.libvirt.guest [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6e:94:dc"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <target dev="tapcd167ca6-85"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]: </interface>
Feb 28 10:05:56 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:05:56 compute-0 kernel: tapcd167ca6-85: entered promiscuous mode
Feb 28 10:05:56 compute-0 NetworkManager[49805]: <info>  [1772273156.7083] manager: (tapcd167ca6-85): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 ovn_controller[146846]: 2026-02-28T10:05:56Z|00263|binding|INFO|Claiming lport cd167ca6-85b1-4795-9ec3-1dab9ee468dd for this chassis.
Feb 28 10:05:56 compute-0 ovn_controller[146846]: 2026-02-28T10:05:56Z|00264|binding|INFO|cd167ca6-85b1-4795-9ec3-1dab9ee468dd: Claiming fa:16:3e:6e:94:dc 10.100.0.14
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.724 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:94:dc 10.100.0.14'], port_security=['fa:16:3e:6e:94:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-119104979', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-119104979', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cd167ca6-85b1-4795-9ec3-1dab9ee468dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.726 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cd167ca6-85b1-4795-9ec3-1dab9ee468dd in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.729 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 ovn_controller[146846]: 2026-02-28T10:05:56Z|00265|binding|INFO|Setting lport cd167ca6-85b1-4795-9ec3-1dab9ee468dd ovn-installed in OVS
Feb 28 10:05:56 compute-0 ovn_controller[146846]: 2026-02-28T10:05:56Z|00266|binding|INFO|Setting lport cd167ca6-85b1-4795-9ec3-1dab9ee468dd up in Southbound
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.746 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd8c9ef-9542-43e5-a95e-bd029b62fffc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 systemd-udevd[276028]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.777 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c44aa73-3bb2-48c0-af9e-88068723bbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 NetworkManager[49805]: <info>  [1772273156.7820] device (tapcd167ca6-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:05:56 compute-0 NetworkManager[49805]: <info>  [1772273156.7831] device (tapcd167ca6-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.790 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a65e8ce5-84a5-4ac2-914c-6305cc6ee4b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.805 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.806 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.806 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:07:3c:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.807 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:fc:8b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.807 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:1a:bd:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.808 243456 DEBUG nova.virt.libvirt.driver [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:6e:94:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.809 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[173e53c5-eec6-48f7-bcb9-463b8e683e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.821 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e397202c-656c-4aad-b6ca-88b1375812d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276033, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.832 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5013899d-886b-45bf-b4db-230972d4820f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276034, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276034, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.834 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.837 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.837 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.837 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:56.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.847 243456 DEBUG nova.virt.libvirt.guest [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:56</nova:creationTime>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:56 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 10:05:56 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:05:56 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:05:56 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:05:56 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:56 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:56 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:56 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:05:56 compute-0 nova_compute[243452]: 2026-02-28 10:05:56.882 243456 DEBUG oslo_concurrency.lockutils [None req-b0f62ac5-316b-4fe2-a494-11136ebb3c9b f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-cd167ca6-85b1-4795-9ec3-1dab9ee468dd" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Feb 28 10:05:57 compute-0 ceph-mon[76304]: pgmap v1120: 305 pgs: 305 active+clean; 294 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.3 MiB/s wr, 221 op/s
Feb 28 10:05:57 compute-0 ceph-mon[76304]: osdmap e157: 3 total, 3 up, 3 in
Feb 28 10:05:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Feb 28 10:05:57 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Feb 28 10:05:57 compute-0 sudo[276035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:05:57 compute-0 sudo[276035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:57 compute-0 sudo[276035]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.049 243456 DEBUG nova.storage.rbd_utils [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(38cb5514-6684-4136-959d-9444907f6bfa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:05:57 compute-0 sudo[276060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 10:05:57 compute-0 sudo[276060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:57 compute-0 sudo[276060]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:05:57 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:05:57 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:57 compute-0 sudo[276123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:05:57 compute-0 sudo[276123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:57 compute-0 sudo[276123]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:57 compute-0 sudo[276148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:05:57 compute-0 sudo[276148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.736 243456 DEBUG nova.compute.manager [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.737 243456 DEBUG oslo_concurrency.lockutils [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.738 243456 DEBUG oslo_concurrency.lockutils [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.738 243456 DEBUG oslo_concurrency.lockutils [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.738 243456 DEBUG nova.compute.manager [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:57 compute-0 nova_compute[243452]: 2026-02-28 10:05:57.740 243456 WARNING nova.compute.manager [req-173e6350-da0f-4ecb-951d-ab54392c8b69 req-b21029ec-741d-4427-89ba-fd284fb60b1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd for instance with vm_state active and task_state None.
Feb 28 10:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:57.844 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 290 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 469 KiB/s wr, 273 op/s
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Feb 28 10:05:58 compute-0 ceph-mon[76304]: osdmap e158: 3 total, 3 up, 3 in
Feb 28 10:05:58 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:58 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:58 compute-0 sudo[276148]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:05:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:05:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:05:58 compute-0 sudo[276204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:05:58 compute-0 sudo[276204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:58 compute-0 sudo[276204]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:58 compute-0 sudo[276229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:05:58 compute-0 sudo[276229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:58 compute-0 nova_compute[243452]: 2026-02-28 10:05:58.540 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.60103055 +0000 UTC m=+0.046619182 container create 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:05:58 compute-0 systemd[1]: Started libpod-conmon-52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35.scope.
Feb 28 10:05:58 compute-0 nova_compute[243452]: 2026-02-28 10:05:58.670 243456 DEBUG nova.network.neutron [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updated VIF entry in instance network info cache for port cd167ca6-85b1-4795-9ec3-1dab9ee468dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:05:58 compute-0 nova_compute[243452]: 2026-02-28 10:05:58.671 243456 DEBUG nova.network.neutron [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.579828594 +0000 UTC m=+0.025417316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:05:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:58 compute-0 nova_compute[243452]: 2026-02-28 10:05:58.689 243456 DEBUG oslo_concurrency.lockutils [req-80b97b0c-30c9-4f35-b1bf-65d7e2811059 req-e4857429-e512-4ab0-8f00-48e3285226fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.697819031 +0000 UTC m=+0.143407703 container init 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.705630711 +0000 UTC m=+0.151219363 container start 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.70882593 +0000 UTC m=+0.154414582 container attach 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:05:58 compute-0 distracted_ride[276282]: 167 167
Feb 28 10:05:58 compute-0 systemd[1]: libpod-52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35.scope: Deactivated successfully.
Feb 28 10:05:58 compute-0 conmon[276282]: conmon 52c8e4908c555dc33ce6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35.scope/container/memory.events
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.712950396 +0000 UTC m=+0.158539068 container died 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7421109aa76b3e091fd2da600949b6c333dc3ba5af5811dffa74f19ac04c34e6-merged.mount: Deactivated successfully.
Feb 28 10:05:58 compute-0 podman[276266]: 2026-02-28 10:05:58.752616381 +0000 UTC m=+0.198205013 container remove 52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:05:58 compute-0 systemd[1]: libpod-conmon-52c8e4908c555dc33ce6e75c7afad8cc1341d32f346c2cd39169ce1667489f35.scope: Deactivated successfully.
Feb 28 10:05:58 compute-0 podman[276306]: 2026-02-28 10:05:58.934013071 +0000 UTC m=+0.047207608 container create 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:05:58 compute-0 systemd[1]: Started libpod-conmon-448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f.scope.
Feb 28 10:05:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:58.915892732 +0000 UTC m=+0.029087289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:59.029697311 +0000 UTC m=+0.142891868 container init 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:05:59 compute-0 ceph-mon[76304]: pgmap v1123: 305 pgs: 305 active+clean; 290 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 469 KiB/s wr, 273 op/s
Feb 28 10:05:59 compute-0 ceph-mon[76304]: osdmap e159: 3 total, 3 up, 3 in
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:05:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:59.03785462 +0000 UTC m=+0.151049177 container start 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:59.042157682 +0000 UTC m=+0.155352239 container attach 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:05:59 compute-0 ovn_controller[146846]: 2026-02-28T10:05:59Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:94:dc 10.100.0.14
Feb 28 10:05:59 compute-0 ovn_controller[146846]: 2026-02-28T10:05:59Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:94:dc 10.100.0.14
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.353 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-5de28374-dbe4-4c8d-9f73-047a368cc895" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.354 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-5de28374-dbe4-4c8d-9f73-047a368cc895" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.371 243456 DEBUG nova.objects.instance [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.395 243456 DEBUG nova.virt.libvirt.vif [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.396 243456 DEBUG nova.network.os_vif_util [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.397 243456 DEBUG nova.network.os_vif_util [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.401 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.404 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.407 243456 DEBUG nova.virt.libvirt.driver [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Attempting to detach device tap5de28374-db from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.408 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:fc:8b:a2"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <target dev="tap5de28374-db"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.414 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.418 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:56</nova:creationTime>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='serial'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='uuid'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk' index='2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config' index='1'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:07:3c:f5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tap9f44b9f8-b8'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:fc:8b:a2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tap5de28374-db'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:1a:bd:05'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tap81d09b7b-e0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:6e:94:dc'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tapcd167ca6-85'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </target>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </console>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c173,c529</label>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c173,c529</imagelabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:59 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.420 243456 INFO nova.virt.libvirt.driver [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap5de28374-db from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the persistent domain config.
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.420 243456 DEBUG nova.virt.libvirt.driver [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] (1/8): Attempting to detach device tap5de28374-db with device alias net1 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.420 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:fc:8b:a2"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <target dev="tap5de28374-db"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.484 243456 INFO nova.virt.libvirt.driver [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Snapshot image upload complete
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.484 243456 INFO nova.compute.manager [None req-b7ef84d0-ebbb-4e10-b1e1-c2fe873d56e2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Took 4.87 seconds to snapshot the instance on the hypervisor.
Feb 28 10:05:59 compute-0 elastic_keldysh[276323]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:05:59 compute-0 elastic_keldysh[276323]: --> All data devices are unavailable
Feb 28 10:05:59 compute-0 kernel: tap5de28374-db (unregistering): left promiscuous mode
Feb 28 10:05:59 compute-0 NetworkManager[49805]: <info>  [1772273159.5354] device (tap5de28374-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.542 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 ovn_controller[146846]: 2026-02-28T10:05:59Z|00267|binding|INFO|Releasing lport 5de28374-dbe4-4c8d-9f73-047a368cc895 from this chassis (sb_readonly=0)
Feb 28 10:05:59 compute-0 ovn_controller[146846]: 2026-02-28T10:05:59Z|00268|binding|INFO|Setting lport 5de28374-dbe4-4c8d-9f73-047a368cc895 down in Southbound
Feb 28 10:05:59 compute-0 ovn_controller[146846]: 2026-02-28T10:05:59Z|00269|binding|INFO|Removing iface tap5de28374-db ovn-installed in OVS
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.551 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772273159.5498974, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.552 243456 DEBUG nova.virt.libvirt.driver [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Start waiting for the detach event from libvirt for device tap5de28374-db with device alias net1 for instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.552 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.553 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:a2 10.100.0.6'], port_security=['fa:16:3e:fc:8b:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5de28374-dbe4-4c8d-9f73-047a368cc895) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.558 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:56</nova:creationTime>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="5de28374-dbe4-4c8d-9f73-047a368cc895">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='serial'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='uuid'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk' index='2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config' index='1'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:07:3c:f5'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tap9f44b9f8-b8'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:1a:bd:05'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tap81d09b7b-e0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:6e:94:dc'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target dev='tapcd167ca6-85'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='net3'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       </target>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </console>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </input>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c173,c529</label>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c173,c529</imagelabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:05:59 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.560 243456 INFO nova.virt.libvirt.driver [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap5de28374-db from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the live domain config.
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.560 243456 DEBUG nova.virt.libvirt.vif [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.561 243456 DEBUG nova.network.os_vif_util [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.561 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5de28374-dbe4-4c8d-9f73-047a368cc895 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.562 243456 DEBUG nova.network.os_vif_util [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.562 243456 DEBUG os_vif [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:05:59 compute-0 systemd[1]: libpod-448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f.scope: Deactivated successfully.
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:59.563353224 +0000 UTC m=+0.676547761 container died 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.565 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5de28374-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.566 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.585 243456 INFO os_vif [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db')
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.586 243456 DEBUG nova.virt.libvirt.guest [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:59</nova:creationTime>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:05:59 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:05:59 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:05:59 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:05:59 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:05:59 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b440396-e2b6-49ce-a7a8-e57093dc1fbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f06702cca2adb40fd27dc595e5c1398189ff8bf570ef1e9bf0de153bfd2600-merged.mount: Deactivated successfully.
Feb 28 10:05:59 compute-0 podman[276306]: 2026-02-28 10:05:59.614025139 +0000 UTC m=+0.727219676 container remove 448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.620 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c126f19e-4e1b-4341-9a0b-1f5e4161703a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 systemd[1]: libpod-conmon-448bfcc7419745ca966ddd4e4c355cbefd1e7344ac61741f0f62003b94c8ed5f.scope: Deactivated successfully.
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.643 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd4c487-afe7-4bcb-8828-f73374be5242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 sudo[276229]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.673 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[19a75e6e-3949-4213-beb0-e7ff0e093f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.687 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d55eb63-c871-498a-b109-32fefd02bede]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276383, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 sudo[276365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:05:59 compute-0 sudo[276365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:59 compute-0 sudo[276365]: pam_unix(sudo:session): session closed for user root
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2297e1d-bf33-400f-88ea-1ceacb0c9a20]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276389, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276389, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.716 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.720 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.720 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.721 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:05:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:05:59.722 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:05:59 compute-0 sudo[276392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:05:59 compute-0 sudo[276392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.819 243456 DEBUG nova.compute.manager [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.820 243456 DEBUG oslo_concurrency.lockutils [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.821 243456 DEBUG oslo_concurrency.lockutils [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.821 243456 DEBUG oslo_concurrency.lockutils [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.821 243456 DEBUG nova.compute.manager [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:05:59 compute-0 nova_compute[243452]: 2026-02-28 10:05:59.821 243456 WARNING nova.compute.manager [req-2631cb62-6f63-4a9d-b5f9-1dcd637112fb req-62896f87-6195-484d-8fb4-56c773e30074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-cd167ca6-85b1-4795-9ec3-1dab9ee468dd for instance with vm_state active and task_state None.
Feb 28 10:05:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 305 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 1.8 MiB/s wr, 352 op/s
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.042293269 +0000 UTC m=+0.038525454 container create 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:06:00 compute-0 systemd[1]: Started libpod-conmon-9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e.scope.
Feb 28 10:06:00 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.026761592 +0000 UTC m=+0.022993797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.129225473 +0000 UTC m=+0.125457678 container init 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.138241386 +0000 UTC m=+0.134473571 container start 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.141547509 +0000 UTC m=+0.137779714 container attach 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:06:00 compute-0 exciting_mendel[276445]: 167 167
Feb 28 10:06:00 compute-0 systemd[1]: libpod-9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e.scope: Deactivated successfully.
Feb 28 10:06:00 compute-0 conmon[276445]: conmon 9656be5a07cf74c79dcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e.scope/container/memory.events
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.145795409 +0000 UTC m=+0.142027624 container died 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ac4b3928c07afc4acecc768feaa644b2c19a31654733b09d811b8296a9adef-merged.mount: Deactivated successfully.
Feb 28 10:06:00 compute-0 podman[276429]: 2026-02-28 10:06:00.192562093 +0000 UTC m=+0.188794278 container remove 9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mendel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:06:00 compute-0 systemd[1]: libpod-conmon-9656be5a07cf74c79dcf141b0de2432b95e0c3185390a00e1e0850ec6147002e.scope: Deactivated successfully.
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.346388378 +0000 UTC m=+0.045915372 container create 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:06:00 compute-0 systemd[1]: Started libpod-conmon-648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b.scope.
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.322228899 +0000 UTC m=+0.021755773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:06:00 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933bd62df049bbd899604b7cc46f397c4609e35adb0e02a0f87cbed3b007241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933bd62df049bbd899604b7cc46f397c4609e35adb0e02a0f87cbed3b007241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933bd62df049bbd899604b7cc46f397c4609e35adb0e02a0f87cbed3b007241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933bd62df049bbd899604b7cc46f397c4609e35adb0e02a0f87cbed3b007241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.446364349 +0000 UTC m=+0.145891223 container init 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.454040244 +0000 UTC m=+0.153567148 container start 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.458915911 +0000 UTC m=+0.158442865 container attach 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]: {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     "0": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "devices": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "/dev/loop3"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             ],
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_name": "ceph_lv0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_size": "21470642176",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "name": "ceph_lv0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "tags": {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_name": "ceph",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.crush_device_class": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.encrypted": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.objectstore": "bluestore",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_id": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.vdo": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.with_tpm": "0"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             },
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "vg_name": "ceph_vg0"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         }
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     ],
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     "1": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "devices": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "/dev/loop4"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             ],
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_name": "ceph_lv1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_size": "21470642176",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "name": "ceph_lv1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "tags": {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_name": "ceph",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.crush_device_class": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.encrypted": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.objectstore": "bluestore",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_id": "1",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.vdo": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.with_tpm": "0"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             },
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "vg_name": "ceph_vg1"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         }
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     ],
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     "2": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "devices": [
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "/dev/loop5"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             ],
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_name": "ceph_lv2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_size": "21470642176",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "name": "ceph_lv2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "tags": {
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.cluster_name": "ceph",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.crush_device_class": "",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.encrypted": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.objectstore": "bluestore",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osd_id": "2",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.vdo": "0",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:                 "ceph.with_tpm": "0"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             },
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "type": "block",
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:             "vg_name": "ceph_vg2"
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:         }
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]:     ]
Feb 28 10:06:00 compute-0 xenodochial_bhabha[276483]: }
Feb 28 10:06:00 compute-0 systemd[1]: libpod-648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b.scope: Deactivated successfully.
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.810455364 +0000 UTC m=+0.509982258 container died 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4933bd62df049bbd899604b7cc46f397c4609e35adb0e02a0f87cbed3b007241-merged.mount: Deactivated successfully.
Feb 28 10:06:00 compute-0 podman[276467]: 2026-02-28 10:06:00.854834592 +0000 UTC m=+0.554361496 container remove 648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:06:00 compute-0 systemd[1]: libpod-conmon-648c2ac9dab17e1d9a3afc24e3357b84e9fb8b932098f0fc05864683d10ec13b.scope: Deactivated successfully.
Feb 28 10:06:00 compute-0 sudo[276392]: pam_unix(sudo:session): session closed for user root
Feb 28 10:06:00 compute-0 sudo[276505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:06:00 compute-0 sudo[276505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:06:00 compute-0 sudo[276505]: pam_unix(sudo:session): session closed for user root
Feb 28 10:06:01 compute-0 sudo[276530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:06:01 compute-0 sudo[276530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:06:01 compute-0 ceph-mon[76304]: pgmap v1125: 305 pgs: 305 active+clean; 305 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 1.8 MiB/s wr, 352 op/s
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.283370629 +0000 UTC m=+0.044005448 container create c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:06:01 compute-0 systemd[1]: Started libpod-conmon-c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18.scope.
Feb 28 10:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.266669 +0000 UTC m=+0.027303829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.366407023 +0000 UTC m=+0.127041842 container init c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.371845356 +0000 UTC m=+0.132480185 container start c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.374990494 +0000 UTC m=+0.135625333 container attach c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:06:01 compute-0 vigilant_cohen[276584]: 167 167
Feb 28 10:06:01 compute-0 systemd[1]: libpod-c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18.scope: Deactivated successfully.
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.377268938 +0000 UTC m=+0.137903757 container died c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0866cfae1220eaf63b93a9deac14eeb69837a1c98f0fb3e2a35181e2489f374a-merged.mount: Deactivated successfully.
Feb 28 10:06:01 compute-0 podman[276568]: 2026-02-28 10:06:01.408679391 +0000 UTC m=+0.169314210 container remove c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:06:01 compute-0 systemd[1]: libpod-conmon-c183bf5e9ea16e68bf91e7ee4e611c2c281fa73242c8448e6814f9e971a17b18.scope: Deactivated successfully.
Feb 28 10:06:01 compute-0 podman[276609]: 2026-02-28 10:06:01.577746044 +0000 UTC m=+0.062374524 container create f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.603 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.606 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.606 243456 DEBUG nova.network.neutron [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:01 compute-0 systemd[1]: Started libpod-conmon-f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5.scope.
Feb 28 10:06:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:01 compute-0 podman[276609]: 2026-02-28 10:06:01.543061349 +0000 UTC m=+0.027689889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505dc428faef6907c4de48786935c35a886c1ef183ee33753a7aea38551ae2b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505dc428faef6907c4de48786935c35a886c1ef183ee33753a7aea38551ae2b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505dc428faef6907c4de48786935c35a886c1ef183ee33753a7aea38551ae2b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505dc428faef6907c4de48786935c35a886c1ef183ee33753a7aea38551ae2b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:01 compute-0 podman[276609]: 2026-02-28 10:06:01.652954669 +0000 UTC m=+0.137583149 container init f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:06:01 compute-0 podman[276609]: 2026-02-28 10:06:01.659218965 +0000 UTC m=+0.143847445 container start f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:06:01 compute-0 podman[276609]: 2026-02-28 10:06:01.666362516 +0000 UTC m=+0.150990996 container attach f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.727 243456 DEBUG nova.compute.manager [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-deleted-5de28374-dbe4-4c8d-9f73-047a368cc895 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.728 243456 INFO nova.compute.manager [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Neutron deleted interface 5de28374-dbe4-4c8d-9f73-047a368cc895; detaching it from the instance and deleting it from the info cache
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.730 243456 DEBUG nova.network.neutron [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.756 243456 DEBUG nova.objects.instance [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.780 243456 DEBUG nova.objects.instance [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.797 243456 DEBUG nova.virt.libvirt.vif [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.798 243456 DEBUG nova.network.os_vif_util [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.799 243456 DEBUG nova.network.os_vif_util [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.806 243456 DEBUG nova.virt.libvirt.guest [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.811 243456 DEBUG nova.virt.libvirt.guest [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:59</nova:creationTime>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='serial'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='uuid'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk' index='2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config' index='1'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:07:3c:f5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tap9f44b9f8-b8'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:1a:bd:05'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tap81d09b7b-e0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:6e:94:dc'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tapcd167ca6-85'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </target>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </console>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c173,c529</label>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c173,c529</imagelabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:01 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.825 243456 DEBUG nova.virt.libvirt.guest [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.834 243456 DEBUG nova.virt.libvirt.guest [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:fc:8b:a2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5de28374-db"/></interface>not found in domain: <domain type='kvm' id='36'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:05:59</nova:creationTime>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='serial'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='uuid'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk' index='2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config' index='1'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:07:3c:f5'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tap9f44b9f8-b8'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:1a:bd:05'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tap81d09b7b-e0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:6e:94:dc'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target dev='tapcd167ca6-85'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='net3'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       </target>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </console>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c173,c529</label>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c173,c529</imagelabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:06:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:01 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.835 243456 WARNING nova.virt.libvirt.driver [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Detaching interface fa:16:3e:fc:8b:a2 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap5de28374-db' not found.
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.836 243456 DEBUG nova.virt.libvirt.vif [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.837 243456 DEBUG nova.network.os_vif_util [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "5de28374-dbe4-4c8d-9f73-047a368cc895", "address": "fa:16:3e:fc:8b:a2", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de28374-db", "ovs_interfaceid": "5de28374-dbe4-4c8d-9f73-047a368cc895", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.837 243456 DEBUG nova.network.os_vif_util [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.838 243456 DEBUG os_vif [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.840 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5de28374-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.840 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.855 243456 INFO os_vif [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:a2,bridge_name='br-int',has_traffic_filtering=True,id=5de28374-dbe4-4c8d-9f73-047a368cc895,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de28374-db')
Feb 28 10:06:01 compute-0 nova_compute[243452]: 2026-02-28 10:06:01.856 243456 DEBUG nova.virt.libvirt.guest [req-b9df650d-babb-4b52-af1c-8a3695b63041 req-f1539733-53d3-42a2-9fc2-1e10448e7ace 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:06:01</nova:creationTime>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     <nova:port uuid="cd167ca6-85b1-4795-9ec3-1dab9ee468dd">
Feb 28 10:06:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:06:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:06:01 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:06:01 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:06:01 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:06:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 325 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 28 10:06:02 compute-0 lvm[276703]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:06:02 compute-0 lvm[276704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:06:02 compute-0 lvm[276703]: VG ceph_vg0 finished
Feb 28 10:06:02 compute-0 lvm[276704]: VG ceph_vg1 finished
Feb 28 10:06:02 compute-0 lvm[276706]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:06:02 compute-0 lvm[276706]: VG ceph_vg2 finished
Feb 28 10:06:02 compute-0 condescending_aryabhata[276625]: {}
Feb 28 10:06:02 compute-0 systemd[1]: libpod-f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5.scope: Deactivated successfully.
Feb 28 10:06:02 compute-0 systemd[1]: libpod-f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5.scope: Consumed 1.226s CPU time.
Feb 28 10:06:02 compute-0 podman[276609]: 2026-02-28 10:06:02.542871637 +0000 UTC m=+1.027500127 container died f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-505dc428faef6907c4de48786935c35a886c1ef183ee33753a7aea38551ae2b9-merged.mount: Deactivated successfully.
Feb 28 10:06:02 compute-0 podman[276609]: 2026-02-28 10:06:02.594918851 +0000 UTC m=+1.079547351 container remove f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:06:02 compute-0 systemd[1]: libpod-conmon-f59c305fe4087231fbeef2887fa609e03326ac1f26f5df7d1fe95c3f38358bb5.scope: Deactivated successfully.
Feb 28 10:06:02 compute-0 sudo[276530]: pam_unix(sudo:session): session closed for user root
Feb 28 10:06:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:06:02 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:06:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:06:02 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:06:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Feb 28 10:06:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Feb 28 10:06:02 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Feb 28 10:06:02 compute-0 sudo[276721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:06:02 compute-0 sudo[276721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:06:02 compute-0 sudo[276721]: pam_unix(sudo:session): session closed for user root
Feb 28 10:06:03 compute-0 ceph-mon[76304]: pgmap v1126: 305 pgs: 305 active+clean; 325 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 28 10:06:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:06:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:06:03 compute-0 ceph-mon[76304]: osdmap e160: 3 total, 3 up, 3 in
Feb 28 10:06:03 compute-0 nova_compute[243452]: 2026-02-28 10:06:03.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 325 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 136 op/s
Feb 28 10:06:03 compute-0 ovn_controller[146846]: 2026-02-28T10:06:03Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:6e:f7 10.100.0.5
Feb 28 10:06:03 compute-0 ovn_controller[146846]: 2026-02-28T10:06:03Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:6e:f7 10.100.0.5
Feb 28 10:06:04 compute-0 nova_compute[243452]: 2026-02-28 10:06:04.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:04 compute-0 nova_compute[243452]: 2026-02-28 10:06:04.967 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ceph-mon[76304]: pgmap v1128: 305 pgs: 305 active+clean; 325 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 136 op/s
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.293 156681 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c35dec80-a181-46c0-80c7-b5d9fa3eee55 with type ""
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.295 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:94:dc 10.100.0.14'], port_security=['fa:16:3e:6e:94:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-119104979', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-119104979', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cd167ca6-85b1-4795-9ec3-1dab9ee468dd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.296 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cd167ca6-85b1-4795-9ec3-1dab9ee468dd in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.298 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00270|binding|INFO|Removing iface tapcd167ca6-85 ovn-installed in OVS
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00271|binding|INFO|Removing lport cd167ca6-85b1-4795-9ec3-1dab9ee468dd ovn-installed in OVS
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.304 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.315 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dcce0f34-489c-4dcc-be21-52ab07c67a9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.345 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1584c026-adbf-4016-bd17-d933578088c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.348 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[89bc2138-f3d2-426e-8cc3-c7f08d317035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.376 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcd789c-6a92-46a4-bed9-3a3457a81269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3f5a9a-2792-4661-ae03-66f5cd7681d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276751, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[411ee10f-7393-4c43-99a8-f50fd7589868]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276752, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276752, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.419 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.420 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.421 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.488 243456 DEBUG nova.compute.manager [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-deleted-cd167ca6-85b1-4795-9ec3-1dab9ee468dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.488 243456 INFO nova.compute.manager [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Neutron deleted interface cd167ca6-85b1-4795-9ec3-1dab9ee468dd; detaching it from the instance and deleting it from the info cache
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.488 243456 DEBUG nova.network.neutron [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00272|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00273|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.522 243456 DEBUG nova.objects.instance [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.553 243456 DEBUG nova.objects.instance [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.577 243456 DEBUG nova.virt.libvirt.vif [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.578 243456 DEBUG nova.network.os_vif_util [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.578 243456 DEBUG nova.network.os_vif_util [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.579 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.579 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.579 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.580 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.580 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.581 243456 INFO nova.compute.manager [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Terminating instance
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.581 243456 DEBUG nova.compute.manager [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.583 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6e:94:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcd167ca6-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.586 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6e:94:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcd167ca6-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.587 243456 DEBUG nova.virt.libvirt.driver [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Attempting to detach device tapcd167ca6-85 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.588 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6e:94:dc"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <target dev="tapcd167ca6-85"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]: </interface>
Feb 28 10:06:05 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:06:05 compute-0 kernel: tap9f44b9f8-b8 (unregistering): left promiscuous mode
Feb 28 10:06:05 compute-0 NetworkManager[49805]: <info>  [1772273165.6827] device (tap9f44b9f8-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00274|binding|INFO|Releasing lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 from this chassis (sb_readonly=0)
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00275|binding|INFO|Setting lport 9f44b9f8-b888-40e8-be30-f985e3ca11b9 down in Southbound
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00276|binding|INFO|Removing iface tap9f44b9f8-b8 ovn-installed in OVS
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.696 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 kernel: tap81d09b7b-e0 (unregistering): left promiscuous mode
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.703 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:3c:f5 10.100.0.12'], port_security=['fa:16:3e:07:3c:f5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d9a339b-3aab-4fbe-a87a-c3231e7f58e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9f44b9f8-b888-40e8-be30-f985e3ca11b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.704 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9f44b9f8-b888-40e8-be30-f985e3ca11b9 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:05 compute-0 NetworkManager[49805]: <info>  [1772273165.7075] device (tap81d09b7b-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.708 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00277|binding|INFO|Releasing lport 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 from this chassis (sb_readonly=0)
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00278|binding|INFO|Setting lport 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 down in Southbound
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.716 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_controller[146846]: 2026-02-28T10:06:05Z|00279|binding|INFO|Removing iface tap81d09b7b-e0 ovn-installed in OVS
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 kernel: tapcd167ca6-85 (unregistering): left promiscuous mode
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9e0090-3764-4e6a-8dc5-5ca7b40415e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 NetworkManager[49805]: <info>  [1772273165.7271] device (tapcd167ca6-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.726 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bd:05 10.100.0.13'], port_security=['fa:16:3e:1a:bd:05 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a118849-0d0a-4196-9bdd-65333da2e8f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.756 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a856f2ea-1004-441c-90e6-5675d1e2c96a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.760 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0be259-d1d1-48c6-a924-6552bbf8e085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Feb 28 10:06:05 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 13.819s CPU time.
Feb 28 10:06:05 compute-0 systemd-machined[209480]: Machine qemu-36-instance-00000020 terminated.
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.787 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[50290ccc-0c3f-4a83-ac61-63561fe1c3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4c8db9-b4b2-4707-9da0-e71c81cd8391]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 826, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461763, 'reachable_time': 27957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276773, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 NetworkManager[49805]: <info>  [1772273165.8125] manager: (tap81d09b7b-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Feb 28 10:06:05 compute-0 NetworkManager[49805]: <info>  [1772273165.8228] manager: (tapcd167ca6-85): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.823 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a0230bae-2d16-49e5-bc7a-6f3fee730a0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461774, 'tstamp': 461774}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276785, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461777, 'tstamp': 461777}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276785, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.826 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.841 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.841 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.850 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6e:94:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcd167ca6-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.852 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.852 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.853 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.854 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.856 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Instance destroyed successfully.
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d816b72-f1e2-43d1-8f3c-72a20568bd65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:05.857 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace which is not needed anymore
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.857 243456 DEBUG nova.objects.instance [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'resources' on Instance uuid 3a118849-0d0a-4196-9bdd-65333da2e8f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.858 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6e:94:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcd167ca6-85"/></interface>not found in domain: <domain type='kvm'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <name>instance-00000020</name>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <uuid>3a118849-0d0a-4196-9bdd-65333da2e8f7</uuid>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:name>tempest-AttachInterfacesTestJSON-server-853218932</nova:name>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:05:07</nova:creationTime>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <nova:port uuid="9f44b9f8-b888-40e8-be30-f985e3ca11b9">
Feb 28 10:06:05 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='serial'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='uuid'>3a118849-0d0a-4196-9bdd-65333da2e8f7</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <cpu mode='host-model' check='partial'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/3a118849-0d0a-4196-9bdd-65333da2e8f7_disk.config'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:07:3c:f5'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target dev='tap9f44b9f8-b8'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:1a:bd:05'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target dev='tap81d09b7b-e0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       </target>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <console type='pty'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7/console.log' append='off'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </console>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </input>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:06:05 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:05 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:05 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.858 243456 INFO nova.virt.libvirt.driver [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully detached device tapcd167ca6-85 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the persistent domain config.
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.858 243456 DEBUG nova.virt.libvirt.driver [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] (1/8): Attempting to detach device tapcd167ca6-85 with device alias net3 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.859 243456 DEBUG nova.virt.libvirt.guest [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6e:94:dc"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]:   <target dev="tapcd167ca6-85"/>
Feb 28 10:06:05 compute-0 nova_compute[243452]: </interface>
Feb 28 10:06:05 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.860 243456 DEBUG nova.virt.libvirt.driver [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Libvirt returned error while detaching device tapcd167ca6-85 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7. Libvirt error code: 55, error message: Requested operation is not valid: domain is not running. _detach_sync /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2667
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.860 243456 WARNING nova.virt.libvirt.driver [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unexpected libvirt error while detaching device tapcd167ca6-85 from instance 3a118849-0d0a-4196-9bdd-65333da2e8f7: Requested operation is not valid: domain is not running: libvirt.libvirtError: Requested operation is not valid: domain is not running
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.861 243456 DEBUG nova.virt.libvirt.vif [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.862 243456 DEBUG nova.network.os_vif_util [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.862 243456 DEBUG nova.network.os_vif_util [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.862 243456 DEBUG os_vif [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.864 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd167ca6-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.875 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.879 243456 INFO os_vif [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server [req-81e92aad-c74b-4b0b-9840-6a3abd724240 req-242febe9-0fbb-4949-b6f7-680afbc9785f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Exception during message handling: libvirt.libvirtError: Requested operation is not valid: domain is not running
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self.force_reraise()
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     raise self.value
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11064, in external_instance_event
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._process_instance_vif_deleted_event(context,
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10871, in _process_instance_vif_deleted_event
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self.driver.detach_interface(context, instance, vif)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2943, in detach_interface
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._detach_with_retry(
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2473, in _detach_with_retry
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._detach_from_live_with_retry(
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2529, in _detach_from_live_with_retry
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._detach_from_live_and_wait_for_event(
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2591, in _detach_from_live_and_wait_for_event
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._detach_sync(
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2663, in _detach_sync
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     guest.detach_device(dev, persistent=persistent, live=live)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 466, in detach_device
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     self._domain.detachDeviceFlags(device_xml, flags=flags)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     raise value
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1605, in detachDeviceFlags
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server     raise libvirtError('virDomainDetachDeviceFlags() failed')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Requested operation is not valid: domain is not running
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.882 243456 ERROR oslo_messaging.rpc.server 
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.896 243456 DEBUG nova.virt.libvirt.vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.896 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.897 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.897 243456 DEBUG os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.898 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f44b9f8-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.900 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.908 243456 INFO os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:3c:f5,bridge_name='br-int',has_traffic_filtering=True,id=9f44b9f8-b888-40e8-be30-f985e3ca11b9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f44b9f8-b8')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.909 243456 DEBUG nova.virt.libvirt.vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.909 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.910 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.910 243456 DEBUG os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 342 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 156 op/s
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.911 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81d09b7b-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.916 243456 INFO os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bd:05,bridge_name='br-int',has_traffic_filtering=True,id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81d09b7b-e0')
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.917 243456 DEBUG nova.virt.libvirt.vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-853218932',display_name='tempest-AttachInterfacesTestJSON-server-853218932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-853218932',id=32,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNhRql1zNcgozp7+n+OTZNjDLjB8j28cqviorl1M0I+grkvDu6q5p/JYhTL7OoExHJt9j3CSAkdrK21HWqvkni4EGOra+zdcam7r8KHxbshLgFt9F0GQkeLVE/FGatuEGQ==',key_name='tempest-keypair-1407834863',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-b1a06y2e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=3a118849-0d0a-4196-9bdd-65333da2e8f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.917 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.918 243456 DEBUG nova.network.os_vif_util [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.918 243456 DEBUG os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.919 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd167ca6-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.919 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:05 compute-0 nova_compute[243452]: 2026-02-28 10:06:05.921 243456 INFO os_vif [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=cd167ca6-85b1-4795-9ec3-1dab9ee468dd,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapcd167ca6-85')
Feb 28 10:06:06 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : haproxy version is 2.8.14-c23fe91
Feb 28 10:06:06 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [NOTICE]   (273042) : path to executable is /usr/sbin/haproxy
Feb 28 10:06:06 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [WARNING]  (273042) : Exiting Master process...
Feb 28 10:06:06 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [ALERT]    (273042) : Current worker (273044) exited with code 143 (Terminated)
Feb 28 10:06:06 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[273038]: [WARNING]  (273042) : All workers exited. Exiting... (0)
Feb 28 10:06:06 compute-0 systemd[1]: libpod-016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52.scope: Deactivated successfully.
Feb 28 10:06:06 compute-0 podman[276848]: 2026-02-28 10:06:06.033041437 +0000 UTC m=+0.058693452 container died 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.035 243456 INFO nova.network.neutron [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Port 5de28374-dbe4-4c8d-9f73-047a368cc895 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52-userdata-shm.mount: Deactivated successfully.
Feb 28 10:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5efc450d9fa849624f5c4721cc215a74a11ec38e71507abe88ff27e9bd2546bf-merged.mount: Deactivated successfully.
Feb 28 10:06:06 compute-0 podman[276848]: 2026-02-28 10:06:06.078918956 +0000 UTC m=+0.104570991 container cleanup 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:06:06 compute-0 systemd[1]: libpod-conmon-016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52.scope: Deactivated successfully.
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.125 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.125 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.134 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.134 243456 INFO nova.compute.claims [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:06 compute-0 podman[276881]: 2026-02-28 10:06:06.14734109 +0000 UTC m=+0.047705722 container remove 016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb757df-220c-4fa9-99d7-1c4798dc1da1]: (4, ('Sat Feb 28 10:06:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52)\n016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52\nSat Feb 28 10:06:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52)\n016f261e309610b5c58492c67c432b1c34975f7b62f1ac943c9fc4d7e5a60d52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.153 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[210c5158-efbd-42fb-8165-2956b117d160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.154 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:06 compute-0 kernel: tap60dcefc3-90: left promiscuous mode
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[084a492e-7c0a-4946-b183-4de7a52535ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.180 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b37764d3-b147-4a63-8164-9192e365a7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.182 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d517fb-5640-4363-8ded-dbdb415f0c50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.200 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a976066-8358-4c53-a3e5-d14e4c2e0b18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461757, 'reachable_time': 31298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276896, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.203 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:06:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:06.203 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8682d0-ae1c-45f5-9d0d-442c6c9f9f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d60dcefc3\x2d95e1\x2d437e\x2d9c00\x2de51656c39b8f.mount: Deactivated successfully.
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.236 243456 INFO nova.virt.libvirt.driver [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Deleting instance files /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7_del
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.237 243456 INFO nova.virt.libvirt.driver [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Deletion of /var/lib/nova/instances/3a118849-0d0a-4196-9bdd-65333da2e8f7_del complete
Feb 28 10:06:06 compute-0 ceph-mon[76304]: pgmap v1129: 305 pgs: 305 active+clean; 342 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 156 op/s
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.296 243456 INFO nova.compute.manager [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.296 243456 DEBUG oslo.service.loopingcall [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.297 243456 DEBUG nova.compute.manager [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.297 243456 DEBUG nova.network.neutron [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.474 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.976 243456 DEBUG nova.compute.manager [req-704d109e-1a83-4e5e-b16c-c4a96bfba2e7 req-6c7c2791-2406-40ac-b4ae-52f4ed0beb73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-deleted-81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.977 243456 INFO nova.compute.manager [req-704d109e-1a83-4e5e-b16c-c4a96bfba2e7 req-6c7c2791-2406-40ac-b4ae-52f4ed0beb73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Neutron deleted interface 81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9; detaching it from the instance and deleting it from the info cache
Feb 28 10:06:06 compute-0 nova_compute[243452]: 2026-02-28 10:06:06.977 243456 DEBUG nova.network.neutron [req-704d109e-1a83-4e5e-b16c-c4a96bfba2e7 req-6c7c2791-2406-40ac-b4ae-52f4ed0beb73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.006 243456 DEBUG nova.compute.manager [req-704d109e-1a83-4e5e-b16c-c4a96bfba2e7 req-6c7c2791-2406-40ac-b4ae-52f4ed0beb73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Detach interface failed, port_id=81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9, reason: Instance 3a118849-0d0a-4196-9bdd-65333da2e8f7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:06:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682871824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.071 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.078 243456 DEBUG nova.compute.provider_tree [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.090 243456 DEBUG nova.scheduler.client.report [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.109 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.109 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.156 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.157 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.173 243456 INFO nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.190 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1682871824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.284 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.287 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.288 243456 INFO nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Creating image(s)
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.324 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.361 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.396 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.401 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "2ad22ddc06d5e3840d5725ad61fe29a6d3d5f9da" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.403 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "2ad22ddc06d5e3840d5725ad61fe29a6d3d5f9da" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.409 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.646 243456 DEBUG nova.virt.libvirt.imagebackend [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/38cb5514-6684-4136-959d-9444907f6bfa/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/38cb5514-6684-4136-959d-9444907f6bfa/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:06:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.722 243456 DEBUG nova.virt.libvirt.imagebackend [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/38cb5514-6684-4136-959d-9444907f6bfa/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.723 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning images/38cb5514-6684-4136-959d-9444907f6bfa@snap to None/98ff9b81-a55d-49aa-903f-1f00ff96f985_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.824 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "2ad22ddc06d5e3840d5725ad61fe29a6d3d5f9da" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 338 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.948 243456 DEBUG nova.objects.instance [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 98ff9b81-a55d-49aa-903f-1f00ff96f985 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.964 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.965 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Ensure instance console log exists: /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.965 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.965 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.966 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:07 compute-0 nova_compute[243452]: 2026-02-28 10:06:07.990 243456 DEBUG nova.policy [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.166 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273153.1635494, b320ad06-d6fa-470f-8bd8-1ecd6a00b33a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.167 243456 INFO nova.compute.manager [-] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] VM Stopped (Lifecycle Event)
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.191 243456 DEBUG nova.compute.manager [None req-cfb36ad9-1c3f-4baa-933c-93b25d49b301 - - - - - -] [instance: b320ad06-d6fa-470f-8bd8-1ecd6a00b33a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.192 243456 DEBUG nova.network.neutron [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.210 243456 INFO nova.compute.manager [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Took 1.91 seconds to deallocate network for instance.
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.255 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.256 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:08 compute-0 ceph-mon[76304]: pgmap v1130: 305 pgs: 305 active+clean; 338 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.358 243456 DEBUG oslo_concurrency.processutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.896 243456 DEBUG nova.compute.manager [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.897 243456 DEBUG oslo_concurrency.lockutils [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.897 243456 DEBUG oslo_concurrency.lockutils [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.898 243456 DEBUG oslo_concurrency.lockutils [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.898 243456 DEBUG nova.compute.manager [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] No waiting events found dispatching network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.898 243456 WARNING nova.compute.manager [req-163a16db-ef5f-4a02-9475-3fa8df2d86dd req-9ce117b8-f1d8-470c-bbf9-af61051ac18e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received unexpected event network-vif-plugged-9f44b9f8-b888-40e8-be30-f985e3ca11b9 for instance with vm_state deleted and task_state None.
Feb 28 10:06:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564474198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.944 243456 DEBUG oslo_concurrency.processutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.950 243456 DEBUG nova.compute.provider_tree [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.969 243456 DEBUG nova.scheduler.client.report [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:08 compute-0 nova_compute[243452]: 2026-02-28 10:06:08.997 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:09 compute-0 nova_compute[243452]: 2026-02-28 10:06:09.057 243456 INFO nova.scheduler.client.report [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Deleted allocations for instance 3a118849-0d0a-4196-9bdd-65333da2e8f7
Feb 28 10:06:09 compute-0 nova_compute[243452]: 2026-02-28 10:06:09.123 243456 DEBUG nova.compute.manager [req-d5b8cb69-6b8a-456f-9678-d53fb4c52b9d req-20c5f545-4755-4022-b567-50edd9bc5d74 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Received event network-vif-deleted-9f44b9f8-b888-40e8-be30-f985e3ca11b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:09 compute-0 nova_compute[243452]: 2026-02-28 10:06:09.154 243456 DEBUG oslo_concurrency.lockutils [None req-f498eb7f-5b7e-4f03-a0b2-6fc246a2297e f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "3a118849-0d0a-4196-9bdd-65333da2e8f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3564474198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:09 compute-0 nova_compute[243452]: 2026-02-28 10:06:09.555 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Successfully created port: 1db9b00b-cf35-4019-bd3b-bb6c5e15090b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 311 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.026 243456 DEBUG nova.network.neutron [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Updating instance_info_cache with network_info: [{"id": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "address": "fa:16:3e:07:3c:f5", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f44b9f8-b8", "ovs_interfaceid": "9f44b9f8-b888-40e8-be30-f985e3ca11b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "address": "fa:16:3e:1a:bd:05", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81d09b7b-e0", "ovs_interfaceid": "81d09b7b-e0ac-456f-b80d-7dd72e0c4ae9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "address": "fa:16:3e:6e:94:dc", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd167ca6-85", "ovs_interfaceid": "cd167ca6-85b1-4795-9ec3-1dab9ee468dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.051 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-3a118849-0d0a-4196-9bdd-65333da2e8f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.079 243456 DEBUG oslo_concurrency.lockutils [None req-9e516397-9a46-4a34-a5e6-04f80a1af046 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-3a118849-0d0a-4196-9bdd-65333da2e8f7-5de28374-dbe4-4c8d-9f73-047a368cc895" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 10.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:10 compute-0 ceph-mon[76304]: pgmap v1131: 305 pgs: 305 active+clean; 311 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.622 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Successfully updated port: 1db9b00b-cf35-4019-bd3b-bb6c5e15090b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.639 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.640 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.640 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.857 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:10 compute-0 nova_compute[243452]: 2026-02-28 10:06:10.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.164 243456 DEBUG nova.compute.manager [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Received event network-changed-1db9b00b-cf35-4019-bd3b-bb6c5e15090b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.165 243456 DEBUG nova.compute.manager [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Refreshing instance network info cache due to event network-changed-1db9b00b-cf35-4019-bd3b-bb6c5e15090b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.165 243456 DEBUG oslo_concurrency.lockutils [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.332 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.831 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.833 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.870 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 279 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 2.6 MiB/s wr, 145 op/s
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.934 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.935 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.944 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:11 compute-0 nova_compute[243452]: 2026-02-28 10:06:11.944 243456 INFO nova.compute.claims [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.039 243456 DEBUG nova.network.neutron [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Updating instance_info_cache with network_info: [{"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.062 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.062 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Instance network_info: |[{"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.063 243456 DEBUG oslo_concurrency.lockutils [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.064 243456 DEBUG nova.network.neutron [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Refreshing network info cache for port 1db9b00b-cf35-4019-bd3b-bb6c5e15090b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.067 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Start _get_guest_xml network_info=[{"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:05:54Z,direct_url=<?>,disk_format='raw',id=38cb5514-6684-4136-959d-9444907f6bfa,min_disk=1,min_ram=0,name='tempest-test-snap-350209641',owner='a2ce6ed219d94b3b88c2d2d7001f6c3a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:05:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '38cb5514-6684-4136-959d-9444907f6bfa'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.076 243456 WARNING nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.080 243456 DEBUG nova.virt.libvirt.host [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.081 243456 DEBUG nova.virt.libvirt.host [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.089 243456 DEBUG nova.virt.libvirt.host [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.089 243456 DEBUG nova.virt.libvirt.host [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.090 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.090 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:05:54Z,direct_url=<?>,disk_format='raw',id=38cb5514-6684-4136-959d-9444907f6bfa,min_disk=1,min_ram=0,name='tempest-test-snap-350209641',owner='a2ce6ed219d94b3b88c2d2d7001f6c3a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:05:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.091 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.091 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.091 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.092 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.092 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.092 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.092 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.093 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.093 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.093 243456 DEBUG nova.virt.hardware [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.097 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.131 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.342 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:06:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726396020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.701 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271277404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.730 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.735 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.770 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.772 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.772 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.773 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid a9cfb8a4-5855-4ff2-8afa-3e14094e801e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.777 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.784 243456 DEBUG nova.compute.provider_tree [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.822 243456 DEBUG nova.scheduler.client.report [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.851 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.852 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.903 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.905 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.936 243456 INFO nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:12 compute-0 nova_compute[243452]: 2026-02-28 10:06:12.957 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:12 compute-0 ceph-mon[76304]: pgmap v1132: 305 pgs: 305 active+clean; 279 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 2.6 MiB/s wr, 145 op/s
Feb 28 10:06:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/726396020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1271277404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.071 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.073 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.074 243456 INFO nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Creating image(s)
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.107 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.143 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.177 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.183 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.219 243456 DEBUG nova.policy [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9a7366cce344abcb7310041ed02610a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5df107d99f104138b864f28cf3b749ad', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.263 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.264 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.265 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.265 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.291 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.296 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939366812' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.388 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.391 243456 DEBUG nova.virt.libvirt.vif [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1545434906',display_name='tempest-ImagesTestJSON-server-1545434906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1545434906',id=38,image_ref='38cb5514-6684-4136-959d-9444907f6bfa',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-6urbia03',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a9cfb8a4-5855-4ff2-8afa-3e14094e801e',image_min_disk='1',image_min_ram='0',image_owner_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',image_owner_project_name='tempest-ImagesTestJSON-2059286278',image_owner_user_name='tempest-ImagesTestJSON-2059286278-project-member',image_user_id='163582c3e6a34c87b52f82ac4f189f77',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:07Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=98ff9b81-a55d-49aa-903f-1f00ff96f985,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.392 243456 DEBUG nova.network.os_vif_util [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.393 243456 DEBUG nova.network.os_vif_util [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.394 243456 DEBUG nova.objects.instance [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 98ff9b81-a55d-49aa-903f-1f00ff96f985 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.414 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <uuid>98ff9b81-a55d-49aa-903f-1f00ff96f985</uuid>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <name>instance-00000026</name>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-1545434906</nova:name>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:12</nova:creationTime>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="38cb5514-6684-4136-959d-9444907f6bfa"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <nova:port uuid="1db9b00b-cf35-4019-bd3b-bb6c5e15090b">
Feb 28 10:06:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="serial">98ff9b81-a55d-49aa-903f-1f00ff96f985</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="uuid">98ff9b81-a55d-49aa-903f-1f00ff96f985</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98ff9b81-a55d-49aa-903f-1f00ff96f985_disk">
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config">
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:19:43:01"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <target dev="tap1db9b00b-cf"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/console.log" append="off"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.418 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Preparing to wait for external event network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.418 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.418 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.419 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.420 243456 DEBUG nova.virt.libvirt.vif [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1545434906',display_name='tempest-ImagesTestJSON-server-1545434906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1545434906',id=38,image_ref='38cb5514-6684-4136-959d-9444907f6bfa',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-6urbia03',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a9cfb8a4-5855-4ff2-8afa-3e14094e801e',image_min_disk='1',image_min_ram='0',image_owner_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',image_owner_project_name='tempest-ImagesTestJSON-2059286278',image_owner_user_name='tempest-ImagesTestJSON-2059286278-project-member',image_user_id='163582c3e6a34c87b52f82ac4f189f77',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:07Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=98ff9b81-a55d-49aa-903f-1f00ff96f985,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.420 243456 DEBUG nova.network.os_vif_util [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.421 243456 DEBUG nova.network.os_vif_util [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.421 243456 DEBUG os_vif [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.423 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.423 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.429 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1db9b00b-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.430 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1db9b00b-cf, col_values=(('external_ids', {'iface-id': '1db9b00b-cf35-4019-bd3b-bb6c5e15090b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:43:01', 'vm-uuid': '98ff9b81-a55d-49aa-903f-1f00ff96f985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:13 compute-0 NetworkManager[49805]: <info>  [1772273173.4332] manager: (tap1db9b00b-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.441 243456 INFO os_vif [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf')
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.504 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.505 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.506 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:19:43:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.506 243456 INFO nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Using config drive
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.524 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.596 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.643 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] resizing rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.814 243456 DEBUG nova.objects.instance [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'migration_context' on Instance uuid 60dcb9fa-f7b6-415d-86e5-d423d4613d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.858 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.859 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Ensure instance console log exists: /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.859 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.860 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:13 compute-0 nova_compute[243452]: 2026-02-28 10:06:13.860 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.3 MiB/s wr, 129 op/s
Feb 28 10:06:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1939366812' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:15 compute-0 ceph-mon[76304]: pgmap v1133: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.3 MiB/s wr, 129 op/s
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.785 243456 INFO nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Creating config drive at /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.788 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrfupiuy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 312 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.4 MiB/s wr, 136 op/s
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.918 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyrfupiuy" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.946 243456 DEBUG nova.storage.rbd_utils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:15 compute-0 nova_compute[243452]: 2026-02-28 10:06:15.951 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.057 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Successfully created port: 0690a322-e7c3-413d-8780-d9d6a0f84fd2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.073 243456 DEBUG oslo_concurrency.processutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config 98ff9b81-a55d-49aa-903f-1f00ff96f985_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.074 243456 INFO nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Deleting local config drive /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985/disk.config because it was imported into RBD.
Feb 28 10:06:16 compute-0 kernel: tap1db9b00b-cf: entered promiscuous mode
Feb 28 10:06:16 compute-0 NetworkManager[49805]: <info>  [1772273176.1085] manager: (tap1db9b00b-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:16 compute-0 ovn_controller[146846]: 2026-02-28T10:06:16Z|00280|binding|INFO|Claiming lport 1db9b00b-cf35-4019-bd3b-bb6c5e15090b for this chassis.
Feb 28 10:06:16 compute-0 ovn_controller[146846]: 2026-02-28T10:06:16Z|00281|binding|INFO|1db9b00b-cf35-4019-bd3b-bb6c5e15090b: Claiming fa:16:3e:19:43:01 10.100.0.14
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.118 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:43:01 10.100.0.14'], port_security=['fa:16:3e:19:43:01 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '98ff9b81-a55d-49aa-903f-1f00ff96f985', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1db9b00b-cf35-4019-bd3b-bb6c5e15090b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.119 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1db9b00b-cf35-4019-bd3b-bb6c5e15090b in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.121 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:16 compute-0 ovn_controller[146846]: 2026-02-28T10:06:16Z|00282|binding|INFO|Setting lport 1db9b00b-cf35-4019-bd3b-bb6c5e15090b ovn-installed in OVS
Feb 28 10:06:16 compute-0 ovn_controller[146846]: 2026-02-28T10:06:16Z|00283|binding|INFO|Setting lport 1db9b00b-cf35-4019-bd3b-bb6c5e15090b up in Southbound
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:16 compute-0 systemd-machined[209480]: New machine qemu-42-instance-00000026.
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.140 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c93bcd3c-9127-45c7-89a9-476b560cc834]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.159 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc9165e-1ee6-4b18-999c-5e6aed96d047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.163 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f78d1bc7-962e-4887-bb1e-7f4399c99bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 systemd-udevd[277444]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.180 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e939e93-2754-478d-85a6-da98a6de11dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 NetworkManager[49805]: <info>  [1772273176.1838] device (tap1db9b00b-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:16 compute-0 NetworkManager[49805]: <info>  [1772273176.1842] device (tap1db9b00b-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[44386e22-ae00-4612-bbd6-e4558e32d9f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465801, 'reachable_time': 41180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277451, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b071a2c-b65f-46b3-9607-3147b24644fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3a8395bc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465810, 'tstamp': 465810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277455, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3a8395bc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465812, 'tstamp': 465812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277455, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.210 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:16.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:16 compute-0 ceph-mon[76304]: pgmap v1134: 305 pgs: 305 active+clean; 312 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.4 MiB/s wr, 136 op/s
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.403 243456 DEBUG nova.network.neutron [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Updated VIF entry in instance network info cache for port 1db9b00b-cf35-4019-bd3b-bb6c5e15090b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.404 243456 DEBUG nova.network.neutron [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Updating instance_info_cache with network_info: [{"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.429 243456 DEBUG oslo_concurrency.lockutils [req-68259c55-3a14-419a-928f-07b4ad35fcd5 req-402dbcc5-cf50-4f7d-ab98-9cfcbced2022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-98ff9b81-a55d-49aa-903f-1f00ff96f985" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.487 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273176.4863248, 98ff9b81-a55d-49aa-903f-1f00ff96f985 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.488 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] VM Started (Lifecycle Event)
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.522 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.526 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273176.4871244, 98ff9b81-a55d-49aa-903f-1f00ff96f985 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.527 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] VM Paused (Lifecycle Event)
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.590 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.594 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:16 compute-0 nova_compute[243452]: 2026-02-28 10:06:16.642 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.401 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updating instance_info_cache with network_info: [{"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.424 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-a9cfb8a4-5855-4ff2-8afa-3e14094e801e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.425 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.426 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.426 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.427 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.427 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.461 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.462 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.463 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.463 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.464 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 2.3 MiB/s wr, 98 op/s
Feb 28 10:06:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930477386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:17 compute-0 nova_compute[243452]: 2026-02-28 10:06:17.976 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3930477386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.060 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.061 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.064 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.064 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.185 243456 DEBUG nova.compute.manager [req-20d8b296-b089-46db-b842-83fa054a2dd5 req-a6d1c9c9-3b96-48cc-8c8c-b16ebdac993c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Received event network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.185 243456 DEBUG oslo_concurrency.lockutils [req-20d8b296-b089-46db-b842-83fa054a2dd5 req-a6d1c9c9-3b96-48cc-8c8c-b16ebdac993c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.186 243456 DEBUG oslo_concurrency.lockutils [req-20d8b296-b089-46db-b842-83fa054a2dd5 req-a6d1c9c9-3b96-48cc-8c8c-b16ebdac993c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.187 243456 DEBUG oslo_concurrency.lockutils [req-20d8b296-b089-46db-b842-83fa054a2dd5 req-a6d1c9c9-3b96-48cc-8c8c-b16ebdac993c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.187 243456 DEBUG nova.compute.manager [req-20d8b296-b089-46db-b842-83fa054a2dd5 req-a6d1c9c9-3b96-48cc-8c8c-b16ebdac993c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Processing event network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.189 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.196 243456 DEBUG nova.virt.libvirt.driver [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.198 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273178.1958985, 98ff9b81-a55d-49aa-903f-1f00ff96f985 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.198 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] VM Resumed (Lifecycle Event)
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.207 243456 INFO nova.virt.libvirt.driver [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Instance spawned successfully.
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.208 243456 INFO nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Took 10.92 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.209 243456 DEBUG nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.237 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.248 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.276 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.277 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4034MB free_disk=59.92818937357515GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.295 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.360 243456 INFO nova.compute.manager [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Took 12.42 seconds to build instance.
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.393 243456 DEBUG oslo_concurrency.lockutils [None req-9b9bb6c1-e1fd-4573-98fc-19af958af8d2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.477 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a9cfb8a4-5855-4ff2-8afa-3e14094e801e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.477 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 98ff9b81-a55d-49aa-903f-1f00ff96f985 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.478 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 60dcb9fa-f7b6-415d-86e5-d423d4613d6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.478 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.479 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.675 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Successfully updated port: 0690a322-e7c3-413d-8780-d9d6a0f84fd2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.703 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.704 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquired lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.704 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.712 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:18 compute-0 nova_compute[243452]: 2026-02-28 10:06:18.979 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:19 compute-0 ceph-mon[76304]: pgmap v1135: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 2.3 MiB/s wr, 98 op/s
Feb 28 10:06:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472480213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.343 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.348 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.363 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.383 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.384 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.385 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.397 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.398 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:19 compute-0 nova_compute[243452]: 2026-02-28 10:06:19.398 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:06:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 28 10:06:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2472480213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:20 compute-0 nova_compute[243452]: 2026-02-28 10:06:20.409 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:06:20 compute-0 nova_compute[243452]: 2026-02-28 10:06:20.841 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273165.8343232, 3a118849-0d0a-4196-9bdd-65333da2e8f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:20 compute-0 nova_compute[243452]: 2026-02-28 10:06:20.842 243456 INFO nova.compute.manager [-] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] VM Stopped (Lifecycle Event)
Feb 28 10:06:20 compute-0 nova_compute[243452]: 2026-02-28 10:06:20.872 243456 DEBUG nova.compute.manager [None req-c790c9bd-ce20-49d5-b612-24f3a8133798 - - - - - -] [instance: 3a118849-0d0a-4196-9bdd-65333da2e8f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:21 compute-0 ceph-mon[76304]: pgmap v1136: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.309 243456 DEBUG nova.compute.manager [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Received event network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.310 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.310 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.311 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.311 243456 DEBUG nova.compute.manager [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] No waiting events found dispatching network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.311 243456 WARNING nova.compute.manager [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Received unexpected event network-vif-plugged-1db9b00b-cf35-4019-bd3b-bb6c5e15090b for instance with vm_state active and task_state None.
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.311 243456 DEBUG nova.compute.manager [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.312 243456 DEBUG nova.compute.manager [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing instance network info cache due to event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.312 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.393 243456 DEBUG nova.network.neutron [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.426 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Releasing lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.427 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Instance network_info: |[{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.428 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.428 243456 DEBUG nova.network.neutron [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.432 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Start _get_guest_xml network_info=[{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.438 243456 WARNING nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.443 243456 DEBUG nova.virt.libvirt.host [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.444 243456 DEBUG nova.virt.libvirt.host [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.451 243456 DEBUG nova.virt.libvirt.host [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.451 243456 DEBUG nova.virt.libvirt.host [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.452 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.452 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.453 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.453 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.454 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.454 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.454 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.454 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.455 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.455 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.456 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.456 243456 DEBUG nova.virt.hardware [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.460 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.698 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.699 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.700 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.700 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.701 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.703 243456 INFO nova.compute.manager [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Terminating instance
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.704 243456 DEBUG nova.compute.manager [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:06:21 compute-0 kernel: tap1db9b00b-cf (unregistering): left promiscuous mode
Feb 28 10:06:21 compute-0 NetworkManager[49805]: <info>  [1772273181.8334] device (tap1db9b00b-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:21 compute-0 ovn_controller[146846]: 2026-02-28T10:06:21Z|00284|binding|INFO|Releasing lport 1db9b00b-cf35-4019-bd3b-bb6c5e15090b from this chassis (sb_readonly=0)
Feb 28 10:06:21 compute-0 ovn_controller[146846]: 2026-02-28T10:06:21Z|00285|binding|INFO|Setting lport 1db9b00b-cf35-4019-bd3b-bb6c5e15090b down in Southbound
Feb 28 10:06:21 compute-0 ovn_controller[146846]: 2026-02-28T10:06:21Z|00286|binding|INFO|Removing iface tap1db9b00b-cf ovn-installed in OVS
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.850 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:43:01 10.100.0.14'], port_security=['fa:16:3e:19:43:01 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '98ff9b81-a55d-49aa-903f-1f00ff96f985', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=1db9b00b-cf35-4019-bd3b-bb6c5e15090b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.853 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 1db9b00b-cf35-4019-bd3b-bb6c5e15090b in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.856 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2a059f-7fb2-43b9-8b79-beba4fe3cc61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Feb 28 10:06:21 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 4.054s CPU time.
Feb 28 10:06:21 compute-0 systemd-machined[209480]: Machine qemu-42-instance-00000026 terminated.
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.900 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[235bd098-02d1-47ac-a401-f112f8d09e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.903 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eddccd41-0463-4e63-abb3-bbdeacc548b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.932 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[634ef66d-d05d-409d-9503-3b7a51d1925f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.948 243456 INFO nova.virt.libvirt.driver [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Instance destroyed successfully.
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.949 243456 DEBUG nova.objects.instance [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid 98ff9b81-a55d-49aa-903f-1f00ff96f985 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.949 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06de086d-7f4b-4415-bd8c-31ece3ef36be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465801, 'reachable_time': 41180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277588, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.962 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07af188e-eff0-4680-8483-cd4f9d3cddbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3a8395bc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465810, 'tstamp': 465810}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277590, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3a8395bc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465812, 'tstamp': 465812}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277590, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.965 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.967 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.970 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.971 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.972 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.973 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:21.973 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.977 243456 DEBUG nova.virt.libvirt.vif [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1545434906',display_name='tempest-ImagesTestJSON-server-1545434906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1545434906',id=38,image_ref='38cb5514-6684-4136-959d-9444907f6bfa',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-6urbia03',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a9cfb8a4-5855-4ff2-8afa-3e14094e801e',image_min_disk='1',image_min_ram='0',image_owner_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',image_owner_project_name='tempest-ImagesTestJSON-2059286278',image_owner_user_name='tempest-ImagesTestJSON-2059286278-project-member',image_user_id='163582c3e6a34c87b52f82ac4f189f77',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:18Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=98ff9b81-a55d-49aa-903f-1f00ff96f985,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.977 243456 DEBUG nova.network.os_vif_util [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "address": "fa:16:3e:19:43:01", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1db9b00b-cf", "ovs_interfaceid": "1db9b00b-cf35-4019-bd3b-bb6c5e15090b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.978 243456 DEBUG nova.network.os_vif_util [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.979 243456 DEBUG os_vif [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.981 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1db9b00b-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:21 compute-0 nova_compute[243452]: 2026-02-28 10:06:21.987 243456 INFO os_vif [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:43:01,bridge_name='br-int',has_traffic_filtering=True,id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1db9b00b-cf')
Feb 28 10:06:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694167747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3694167747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.069 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.098 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.104 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.277 243456 INFO nova.virt.libvirt.driver [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Deleting instance files /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985_del
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.278 243456 INFO nova.virt.libvirt.driver [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Deletion of /var/lib/nova/instances/98ff9b81-a55d-49aa-903f-1f00ff96f985_del complete
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.345 243456 INFO nova.compute.manager [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.345 243456 DEBUG oslo.service.loopingcall [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.346 243456 DEBUG nova.compute.manager [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.346 243456 DEBUG nova.network.neutron [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.685 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.685 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658771638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.709 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.724 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.725 243456 DEBUG nova.virt.libvirt.vif [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1882743683',display_name='tempest-SecurityGroupsTestJSON-server-1882743683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1882743683',id=39,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-qqsbvlmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJ
SON-392060184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:12Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=60dcb9fa-f7b6-415d-86e5-d423d4613d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.726 243456 DEBUG nova.network.os_vif_util [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.727 243456 DEBUG nova.network.os_vif_util [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.728 243456 DEBUG nova.objects.instance [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'pci_devices' on Instance uuid 60dcb9fa-f7b6-415d-86e5-d423d4613d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.745 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <uuid>60dcb9fa-f7b6-415d-86e5-d423d4613d6c</uuid>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <name>instance-00000027</name>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1882743683</nova:name>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:21</nova:creationTime>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:user uuid="c9a7366cce344abcb7310041ed02610a">tempest-SecurityGroupsTestJSON-392060184-project-member</nova:user>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:project uuid="5df107d99f104138b864f28cf3b749ad">tempest-SecurityGroupsTestJSON-392060184</nova:project>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <nova:port uuid="0690a322-e7c3-413d-8780-d9d6a0f84fd2">
Feb 28 10:06:22 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="serial">60dcb9fa-f7b6-415d-86e5-d423d4613d6c</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="uuid">60dcb9fa-f7b6-415d-86e5-d423d4613d6c</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk">
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config">
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b1:c6:3c"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <target dev="tap0690a322-e7"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/console.log" append="off"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:22 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:22 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:22 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:22 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:22 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.747 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Preparing to wait for external event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.747 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.747 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.748 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.749 243456 DEBUG nova.virt.libvirt.vif [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1882743683',display_name='tempest-SecurityGroupsTestJSON-server-1882743683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1882743683',id=39,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-qqsbvlmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:12Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=60dcb9fa-f7b6-415d-86e5-d423d4613d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.749 243456 DEBUG nova.network.os_vif_util [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.750 243456 DEBUG nova.network.os_vif_util [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.750 243456 DEBUG os_vif [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.751 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.752 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.759 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0690a322-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.760 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0690a322-e7, col_values=(('external_ids', {'iface-id': '0690a322-e7c3-413d-8780-d9d6a0f84fd2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:c6:3c', 'vm-uuid': '60dcb9fa-f7b6-415d-86e5-d423d4613d6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:22 compute-0 NetworkManager[49805]: <info>  [1772273182.7626] manager: (tap0690a322-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.763 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.770 243456 INFO os_vif [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7')
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.774 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.774 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.785 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.786 243456 INFO nova.compute.claims [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.849 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.850 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.850 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No VIF found with MAC fa:16:3e:b1:c6:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.851 243456 INFO nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Using config drive
Feb 28 10:06:22 compute-0 nova_compute[243452]: 2026-02-28 10:06:22.879 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.013 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:23 compute-0 ceph-mon[76304]: pgmap v1137: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Feb 28 10:06:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2658771638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.261 243456 INFO nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Creating config drive at /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.269 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4u82xxlo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.409 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4u82xxlo" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.459 243456 DEBUG nova.storage.rbd_utils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.466 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469675487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.552 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.576 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.586 243456 DEBUG nova.compute.provider_tree [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.609 243456 DEBUG nova.scheduler.client.report [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.640 243456 DEBUG oslo_concurrency.processutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config 60dcb9fa-f7b6-415d-86e5-d423d4613d6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.641 243456 INFO nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Deleting local config drive /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c/disk.config because it was imported into RBD.
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.643 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.643 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.688 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.688 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:23 compute-0 kernel: tap0690a322-e7: entered promiscuous mode
Feb 28 10:06:23 compute-0 systemd-udevd[277570]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:23 compute-0 NetworkManager[49805]: <info>  [1772273183.7004] manager: (tap0690a322-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Feb 28 10:06:23 compute-0 ovn_controller[146846]: 2026-02-28T10:06:23Z|00287|binding|INFO|Claiming lport 0690a322-e7c3-413d-8780-d9d6a0f84fd2 for this chassis.
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.700 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:23 compute-0 ovn_controller[146846]: 2026-02-28T10:06:23Z|00288|binding|INFO|0690a322-e7c3-413d-8780-d9d6a0f84fd2: Claiming fa:16:3e:b1:c6:3c 10.100.0.12
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.708 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:c6:3c 10.100.0.12'], port_security=['fa:16:3e:b1:c6:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '60dcb9fa-f7b6-415d-86e5-d423d4613d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '2', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0690a322-e7c3-413d-8780-d9d6a0f84fd2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.710 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0690a322-e7c3-413d-8780-d9d6a0f84fd2 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 bound to our chassis
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.711 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:06:23 compute-0 NetworkManager[49805]: <info>  [1772273183.7127] device (tap0690a322-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:23 compute-0 NetworkManager[49805]: <info>  [1772273183.7134] device (tap0690a322-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:23 compute-0 ovn_controller[146846]: 2026-02-28T10:06:23Z|00289|binding|INFO|Setting lport 0690a322-e7c3-413d-8780-d9d6a0f84fd2 ovn-installed in OVS
Feb 28 10:06:23 compute-0 ovn_controller[146846]: 2026-02-28T10:06:23Z|00290|binding|INFO|Setting lport 0690a322-e7c3-413d-8780-d9d6a0f84fd2 up in Southbound
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.713 243456 INFO nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.727 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e52fffc5-bdeb-463f-bac9-3fff05d99c60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.728 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf973a3f2-c1 in ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.730 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf973a3f2-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.730 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ad3b07-6293-42eb-a7af-ac8afd6a9225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.731 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff190ff-08e8-4933-a9ac-f0ce8c2e4c9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 systemd-machined[209480]: New machine qemu-43-instance-00000027.
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.748 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe5ea92-e325-4acd-81ac-41a0e6b7b2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 nova_compute[243452]: 2026-02-28 10:06:23.750 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:23 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000027.
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9cfa01-dcc5-4f38-b397-6fded5979e8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.793 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a890baac-0967-44c2-8336-85709f2a300b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.800 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3cecdf37-91a2-4a0b-8aff-564f9f782e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 NetworkManager[49805]: <info>  [1772273183.8020] manager: (tapf973a3f2-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.838 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbabccd-7efa-4bde-b346-7f556bb94f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.843 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ad8331-6e04-4d8e-bd17-a833e199704f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 NetworkManager[49805]: <info>  [1772273183.8693] device (tapf973a3f2-c0): carrier: link connected
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.876 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[327ceee0-020e-49fa-8cb8-960110af0dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.898 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb44629a-ff53-4484-8d7b-dbabcd079e9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277777, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.919 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[64c197bf-18b2-4d40-9d1a-505dd5c612b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b724'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469115, 'tstamp': 469115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277778, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ba2544-37d6-4255-a5ca-308c83b5f7bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277779, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:23.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c45b660-af7d-44e4-a950-e4c03df9a231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.016 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7800c476-987a-44e2-bc30-58e9d5bcebaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.017 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.018 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.018 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf973a3f2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.020 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 NetworkManager[49805]: <info>  [1772273184.0219] manager: (tapf973a3f2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Feb 28 10:06:24 compute-0 kernel: tapf973a3f2-c0: entered promiscuous mode
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.024 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf973a3f2-c0, col_values=(('external_ids', {'iface-id': '610498de-6d7e-49bb-b4f4-0bb4f081afde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 ovn_controller[146846]: 2026-02-28T10:06:24Z|00291|binding|INFO|Releasing lport 610498de-6d7e-49bb-b4f4-0bb4f081afde from this chassis (sb_readonly=0)
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.026 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f973a3f2-c3d9-4311-9c7b-ab6ca02111d3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f973a3f2-c3d9-4311-9c7b-ab6ca02111d3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a411f776-2f33-4858-9d2d-edccbfae4354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.028 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/f973a3f2-c3d9-4311-9c7b-ab6ca02111d3.pid.haproxy
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:06:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:24.029 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'env', 'PROCESS_TAG=haproxy-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f973a3f2-c3d9-4311-9c7b-ab6ca02111d3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.033 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3469675487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.118 243456 DEBUG nova.policy [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.135 243456 DEBUG nova.network.neutron [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updated VIF entry in instance network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.135 243456 DEBUG nova.network.neutron [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.168 243456 DEBUG nova.network.neutron [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.182 243456 DEBUG oslo_concurrency.lockutils [req-880e177f-058a-4570-820e-447b7a6c0038 req-55934ab7-4a52-4e5b-9cc1-3929962ea4b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.195 243456 DEBUG nova.compute.manager [req-9370d52a-9049-469b-9c66-847bda7ff24e req-117ce4c4-1265-494a-a891-e583512b3884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Received event network-vif-deleted-1db9b00b-cf35-4019-bd3b-bb6c5e15090b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.195 243456 INFO nova.compute.manager [req-9370d52a-9049-469b-9c66-847bda7ff24e req-117ce4c4-1265-494a-a891-e583512b3884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Neutron deleted interface 1db9b00b-cf35-4019-bd3b-bb6c5e15090b; detaching it from the instance and deleting it from the info cache
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.195 243456 DEBUG nova.network.neutron [req-9370d52a-9049-469b-9c66-847bda7ff24e req-117ce4c4-1265-494a-a891-e583512b3884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.198 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.199 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.199 243456 INFO nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Creating image(s)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.264 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.287 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.317 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.323 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.355 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273184.3391125, 60dcb9fa-f7b6-415d-86e5-d423d4613d6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.355 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] VM Started (Lifecycle Event)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.362 243456 INFO nova.compute.manager [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Took 2.02 seconds to deallocate network for instance.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.369 243456 DEBUG nova.compute.manager [req-9370d52a-9049-469b-9c66-847bda7ff24e req-117ce4c4-1265-494a-a891-e583512b3884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Detach interface failed, port_id=1db9b00b-cf35-4019-bd3b-bb6c5e15090b, reason: Instance 98ff9b81-a55d-49aa-903f-1f00ff96f985 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.384 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.387 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.390 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.410 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.413 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4dce6af-958c-4c5a-890b-469443cee915_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:24 compute-0 podman[277906]: 2026-02-28 10:06:24.414403864 +0000 UTC m=+0.047247289 container create c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.452 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.453 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.455 243456 DEBUG nova.compute.manager [req-dc942c4b-1ef0-4256-b716-7e433746fec8 req-c8738c4d-c86d-4586-939f-313b4e2b98ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.455 243456 DEBUG oslo_concurrency.lockutils [req-dc942c4b-1ef0-4256-b716-7e433746fec8 req-c8738c4d-c86d-4586-939f-313b4e2b98ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.455 243456 DEBUG oslo_concurrency.lockutils [req-dc942c4b-1ef0-4256-b716-7e433746fec8 req-c8738c4d-c86d-4586-939f-313b4e2b98ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.456 243456 DEBUG oslo_concurrency.lockutils [req-dc942c4b-1ef0-4256-b716-7e433746fec8 req-c8738c4d-c86d-4586-939f-313b4e2b98ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.456 243456 DEBUG nova.compute.manager [req-dc942c4b-1ef0-4256-b716-7e433746fec8 req-c8738c4d-c86d-4586-939f-313b4e2b98ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Processing event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.456 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.458 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273184.3396378, 60dcb9fa-f7b6-415d-86e5-d423d4613d6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:24 compute-0 systemd[1]: Started libpod-conmon-c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f.scope.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.460 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] VM Paused (Lifecycle Event)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.462 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.469 243456 INFO nova.virt.libvirt.driver [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Instance spawned successfully.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.470 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:06:24 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:24 compute-0 podman[277906]: 2026-02-28 10:06:24.393186067 +0000 UTC m=+0.026029512 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbd14d3e7b115698d53bd6fefdc6f0cca3938be7119c8399bc73dd722379f62e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.494 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.499 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273184.4619305, 60dcb9fa-f7b6-415d-86e5-d423d4613d6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.499 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] VM Resumed (Lifecycle Event)
Feb 28 10:06:24 compute-0 podman[277906]: 2026-02-28 10:06:24.50426426 +0000 UTC m=+0.137107765 container init c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:06:24 compute-0 podman[277906]: 2026-02-28 10:06:24.509704623 +0000 UTC m=+0.142548088 container start c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.512 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.513 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.513 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.513 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.514 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.514 243456 DEBUG nova.virt.libvirt.driver [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.524 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.528 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:24 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [NOTICE]   (277961) : New worker (277966) forked
Feb 28 10:06:24 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [NOTICE]   (277961) : Loading success.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.559 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.576 243456 INFO nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Took 11.50 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.577 243456 DEBUG nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.584 243456 DEBUG oslo_concurrency.processutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.697 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4dce6af-958c-4c5a-890b-469443cee915_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.741 243456 INFO nova.compute.manager [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Took 12.83 seconds to build instance.
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.780 243456 DEBUG oslo_concurrency.lockutils [None req-4ac6ed79-069b-4fd8-8b00-994db39cd005 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.785 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.861 243456 DEBUG nova.objects.instance [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.877 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.877 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Ensure instance console log exists: /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.878 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.878 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:24 compute-0 nova_compute[243452]: 2026-02-28 10:06:24.879 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:25 compute-0 ceph-mon[76304]: pgmap v1138: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.126 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Successfully created port: bd50336f-b10b-46c9-91bd-81e086b2e80e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1679524590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.165 243456 DEBUG oslo_concurrency.processutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.170 243456 DEBUG nova.compute.provider_tree [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.185 243456 DEBUG nova.scheduler.client.report [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.206 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.233 243456 INFO nova.scheduler.client.report [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance 98ff9b81-a55d-49aa-903f-1f00ff96f985
Feb 28 10:06:25 compute-0 nova_compute[243452]: 2026-02-28 10:06:25.312 243456 DEBUG oslo_concurrency.lockutils [None req-0a69b2cd-b036-4f58-939a-fe7bb57bad39 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "98ff9b81-a55d-49aa-903f-1f00ff96f985" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 338 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 162 op/s
Feb 28 10:06:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1679524590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:26 compute-0 podman[278070]: 2026-02-28 10:06:26.131462465 +0000 UTC m=+0.067206370 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:06:26 compute-0 podman[278069]: 2026-02-28 10:06:26.149511793 +0000 UTC m=+0.088535470 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:06:26 compute-0 nova_compute[243452]: 2026-02-28 10:06:26.328 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Successfully updated port: bd50336f-b10b-46c9-91bd-81e086b2e80e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:26 compute-0 nova_compute[243452]: 2026-02-28 10:06:26.355 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:26 compute-0 nova_compute[243452]: 2026-02-28 10:06:26.356 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:26 compute-0 nova_compute[243452]: 2026-02-28 10:06:26.356 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:26 compute-0 nova_compute[243452]: 2026-02-28 10:06:26.562 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:27 compute-0 ceph-mon[76304]: pgmap v1139: 305 pgs: 305 active+clean; 338 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 162 op/s
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.663 243456 DEBUG nova.network.neutron [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.681 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.682 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.777 243456 DEBUG nova.compute.manager [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.778 243456 DEBUG nova.compute.manager [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.778 243456 DEBUG oslo_concurrency.lockutils [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.779 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.780 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Instance network_info: |[{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.780 243456 DEBUG oslo_concurrency.lockutils [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.781 243456 DEBUG nova.network.neutron [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.785 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Start _get_guest_xml network_info=[{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.791 243456 WARNING nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.796 243456 DEBUG nova.virt.libvirt.host [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.797 243456 DEBUG nova.virt.libvirt.host [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.801 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.804 243456 DEBUG nova.virt.libvirt.host [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.805 243456 DEBUG nova.virt.libvirt.host [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.805 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.806 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.806 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.807 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.807 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.807 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.808 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.808 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.809 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.809 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.810 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.810 243456 DEBUG nova.virt.hardware [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.814 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.895 243456 DEBUG nova.compute.manager [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.896 243456 DEBUG oslo_concurrency.lockutils [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.897 243456 DEBUG oslo_concurrency.lockutils [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.897 243456 DEBUG oslo_concurrency.lockutils [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.897 243456 DEBUG nova.compute.manager [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] No waiting events found dispatching network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.898 243456 WARNING nova.compute.manager [req-fd784687-015c-4d44-a988-3ea7bf75c134 req-8159e813-42b9-4c97-853f-2a4051c34b0d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received unexpected event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 for instance with vm_state active and task_state None.
Feb 28 10:06:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 343 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.952 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.953 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.960 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:27 compute-0 nova_compute[243452]: 2026-02-28 10:06:27.960 243456 INFO nova.compute.claims [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Feb 28 10:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Feb 28 10:06:28 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.229 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040072430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.456 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.491 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.500 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857037324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.842 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.854 243456 DEBUG nova.compute.provider_tree [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.883 243456 DEBUG nova.scheduler.client.report [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.887 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.888 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.888 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.888 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.889 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.890 243456 INFO nova.compute.manager [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Terminating instance
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.891 243456 DEBUG nova.compute.manager [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.917 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.919 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:28 compute-0 kernel: tap53cf23fd-52 (unregistering): left promiscuous mode
Feb 28 10:06:28 compute-0 NetworkManager[49805]: <info>  [1772273188.9372] device (tap53cf23fd-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:28 compute-0 ovn_controller[146846]: 2026-02-28T10:06:28Z|00292|binding|INFO|Releasing lport 53cf23fd-52e1-4c44-b96b-ca076c163326 from this chassis (sb_readonly=0)
Feb 28 10:06:28 compute-0 ovn_controller[146846]: 2026-02-28T10:06:28Z|00293|binding|INFO|Setting lport 53cf23fd-52e1-4c44-b96b-ca076c163326 down in Southbound
Feb 28 10:06:28 compute-0 ovn_controller[146846]: 2026-02-28T10:06:28Z|00294|binding|INFO|Removing iface tap53cf23fd-52 ovn-installed in OVS
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:28.968 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:6e:f7 10.100.0.5'], port_security=['fa:16:3e:24:6e:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a9cfb8a4-5855-4ff2-8afa-3e14094e801e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53cf23fd-52e1-4c44-b96b-ca076c163326) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:28.969 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53cf23fd-52e1-4c44-b96b-ca076c163326 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:06:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:28.971 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:06:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:28.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e480b4c-a16e-468d-956f-5e15a6c16014]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:28.972 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.988 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:28 compute-0 nova_compute[243452]: 2026-02-28 10:06:28.988 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:28 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Feb 28 10:06:28 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 12.507s CPU time.
Feb 28 10:06:28 compute-0 systemd-machined[209480]: Machine qemu-41-instance-00000025 terminated.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.025 243456 INFO nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:06:29
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control']
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.060 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602581170' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.087 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.089 243456 DEBUG nova.virt.libvirt.vif [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.089 243456 DEBUG nova.network.os_vif_util [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.090 243456 DEBUG nova.network.os_vif_util [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.091 243456 DEBUG nova.objects.instance [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:29 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [NOTICE]   (275716) : haproxy version is 2.8.14-c23fe91
Feb 28 10:06:29 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [NOTICE]   (275716) : path to executable is /usr/sbin/haproxy
Feb 28 10:06:29 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [WARNING]  (275716) : Exiting Master process...
Feb 28 10:06:29 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [ALERT]    (275716) : Current worker (275719) exited with code 143 (Terminated)
Feb 28 10:06:29 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[275712]: [WARNING]  (275716) : All workers exited. Exiting... (0)
Feb 28 10:06:29 compute-0 systemd[1]: libpod-4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29.scope: Deactivated successfully.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.108 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <uuid>c4dce6af-958c-4c5a-890b-469443cee915</uuid>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <name>instance-00000028</name>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-1836797536</nova:name>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:27</nova:creationTime>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <nova:port uuid="bd50336f-b10b-46c9-91bd-81e086b2e80e">
Feb 28 10:06:29 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="serial">c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="uuid">c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4dce6af-958c-4c5a-890b-469443cee915_disk">
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4dce6af-958c-4c5a-890b-469443cee915_disk.config">
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:9f:3f:66"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <target dev="tapbd50336f-b1"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log" append="off"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:29 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:29 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:29 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:29 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:29 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.109 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Preparing to wait for external event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.110 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.110 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.110 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.111 243456 DEBUG nova.virt.libvirt.vif [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.111 243456 DEBUG nova.network.os_vif_util [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:29 compute-0 podman[278215]: 2026-02-28 10:06:29.112054409 +0000 UTC m=+0.056558581 container died 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.112 243456 DEBUG nova.network.os_vif_util [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.112 243456 DEBUG os_vif [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.114 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.115 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.122 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd50336f-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.122 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd50336f-b1, col_values=(('external_ids', {'iface-id': 'bd50336f-b10b-46c9-91bd-81e086b2e80e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:3f:66', 'vm-uuid': 'c4dce6af-958c-4c5a-890b-469443cee915'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:29 compute-0 NetworkManager[49805]: <info>  [1772273189.1253] manager: (tapbd50336f-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.127 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.134 243456 INFO os_vif [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1')
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.137 243456 INFO nova.virt.libvirt.driver [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Instance destroyed successfully.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.138 243456 DEBUG nova.objects.instance [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid a9cfb8a4-5855-4ff2-8afa-3e14094e801e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.148 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.151 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.152 243456 INFO nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Creating image(s)
Feb 28 10:06:29 compute-0 ceph-mon[76304]: pgmap v1140: 305 pgs: 305 active+clean; 343 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Feb 28 10:06:29 compute-0 ceph-mon[76304]: osdmap e161: 3 total, 3 up, 3 in
Feb 28 10:06:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1040072430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/857037324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1602581170' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29-userdata-shm.mount: Deactivated successfully.
Feb 28 10:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b993b25ac99679bb0f94483344a6fd93e034cb4cbc8ac30386a074979859cc20-merged.mount: Deactivated successfully.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.188 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:29 compute-0 podman[278215]: 2026-02-28 10:06:29.195234877 +0000 UTC m=+0.139739029 container cleanup 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:06:29 compute-0 systemd[1]: libpod-conmon-4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29.scope: Deactivated successfully.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.214 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.239 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.245 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.267 243456 DEBUG nova.virt.libvirt.vif [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:05:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-47808848',display_name='tempest-ImagesTestJSON-server-47808848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-47808848',id=37,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:05:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-9gc4dzuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:05:59Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=a9cfb8a4-5855-4ff2-8afa-3e14094e801e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.268 243456 DEBUG nova.network.os_vif_util [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "53cf23fd-52e1-4c44-b96b-ca076c163326", "address": "fa:16:3e:24:6e:f7", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53cf23fd-52", "ovs_interfaceid": "53cf23fd-52e1-4c44-b96b-ca076c163326", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.268 243456 DEBUG nova.network.os_vif_util [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.269 243456 DEBUG os_vif [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.271 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53cf23fd-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.275 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.284 243456 INFO os_vif [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:6e:f7,bridge_name='br-int',has_traffic_filtering=True,id=53cf23fd-52e1-4c44-b96b-ca076c163326,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53cf23fd-52')
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.317 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.318 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.319 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.320 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.341 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.346 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:29 compute-0 podman[278282]: 2026-02-28 10:06:29.362199031 +0000 UTC m=+0.149221176 container remove 4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.366 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[932d40ad-32c5-4582-821e-60e4aa7ec423]: (4, ('Sat Feb 28 10:06:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29)\n4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29\nSat Feb 28 10:06:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29)\n4caffb9cb864e194ced3d9b809cc6994a8252640f47ed7051c1b920e38e41c29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.367 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db2cc28c-09a8-45ab-b2c2-57d4429cc932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:29 compute-0 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.378 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.379 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.379 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:9f:3f:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.380 243456 INFO nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Using config drive
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.385 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f22855-04f5-4c3f-8b25-8cf8174d7e43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9db0c-a188-43ca-a310-c7be494250bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed169df5-a902-43a5-9406-434d176b0a7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.410 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0985bd88-bd0a-4bc8-8149-a1b1488b2356]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465796, 'reachable_time': 40680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278385, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.414 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:06:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:29.414 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0f771103-f85c-4ee6-9c47-7c4effb66052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.437 243456 DEBUG nova.policy [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd83ef2b77047458db9060496f444a384', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b2155277af74424e955b1904a947ab64', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.460 243456 DEBUG nova.network.neutron [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.461 243456 DEBUG nova.network.neutron [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.487 243456 DEBUG oslo_concurrency.lockutils [req-83a2db14-f1ce-4ded-a9c5-12cbfc33ac40 req-7490553f-8124-42fc-86bb-049f85205e6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.652 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.750 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] resizing rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.787 243456 INFO nova.virt.libvirt.driver [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Deleting instance files /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e_del
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.788 243456 INFO nova.virt.libvirt.driver [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Deletion of /var/lib/nova/instances/a9cfb8a4-5855-4ff2-8afa-3e14094e801e_del complete
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.837 243456 DEBUG nova.objects.instance [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b863bc5-8018-491d-82ee-dbc8f40d5aff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.845 243456 INFO nova.compute.manager [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Took 0.95 seconds to destroy the instance on the hypervisor.
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.845 243456 DEBUG oslo.service.loopingcall [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.846 243456 DEBUG nova.compute.manager [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.846 243456 DEBUG nova.network.neutron [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.851 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.851 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Ensure instance console log exists: /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.852 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.852 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.852 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 325 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 220 op/s
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.961 243456 INFO nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Creating config drive at /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config
Feb 28 10:06:29 compute-0 nova_compute[243452]: 2026-02-28 10:06:29.964 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpz192o2ek execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.094 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpz192o2ek" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.133 243456 DEBUG nova.storage.rbd_utils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image c4dce6af-958c-4c5a-890b-469443cee915_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.137 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config c4dce6af-958c-4c5a-890b-469443cee915_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:30 compute-0 ceph-mon[76304]: pgmap v1142: 305 pgs: 305 active+clean; 325 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 220 op/s
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.291 243456 DEBUG oslo_concurrency.processutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config c4dce6af-958c-4c5a-890b-469443cee915_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.293 243456 INFO nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Deleting local config drive /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/disk.config because it was imported into RBD.
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:06:30 compute-0 kernel: tapbd50336f-b1: entered promiscuous mode
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.3446] manager: (tapbd50336f-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Feb 28 10:06:30 compute-0 systemd-udevd[278196]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.347 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 ovn_controller[146846]: 2026-02-28T10:06:30Z|00295|binding|INFO|Claiming lport bd50336f-b10b-46c9-91bd-81e086b2e80e for this chassis.
Feb 28 10:06:30 compute-0 ovn_controller[146846]: 2026-02-28T10:06:30Z|00296|binding|INFO|bd50336f-b10b-46c9-91bd-81e086b2e80e: Claiming fa:16:3e:9f:3f:66 10.100.0.14
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 ovn_controller[146846]: 2026-02-28T10:06:30Z|00297|binding|INFO|Setting lport bd50336f-b10b-46c9-91bd-81e086b2e80e ovn-installed in OVS
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.3690] device (tapbd50336f-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.368 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3f:66 10.100.0.14'], port_security=['fa:16:3e:9f:3f:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4dce6af-958c-4c5a-890b-469443cee915', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b06e2ec4-e889-49ea-aafd-6900649d681f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bd50336f-b10b-46c9-91bd-81e086b2e80e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.3699] device (tapbd50336f-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:30 compute-0 ovn_controller[146846]: 2026-02-28T10:06:30Z|00298|binding|INFO|Setting lport bd50336f-b10b-46c9-91bd-81e086b2e80e up in Southbound
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.375 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bd50336f-b10b-46c9-91bd-81e086b2e80e in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.379 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:30 compute-0 systemd-machined[209480]: New machine qemu-44-instance-00000028.
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[96f728b3-c0ca-4480-ab31-061c3f504cac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.393 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60dcefc3-91 in ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:06:30 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.395 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60dcefc3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.395 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2881276f-10a1-4807-8c15-28dd43ec921f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a83b220-570f-40bf-bf70-8bc291124275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.417 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3d1cce-d53e-4126-988f-e31437b05685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.428 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b891be13-87e9-48ba-84bd-cd56d11391d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.462 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffce0a8a-9665-4cbc-a954-a0df5c861799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.4727] manager: (tap60dcefc3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74c6d7fe-2601-40ce-834a-8e17ccefcd04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.501 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f41160ff-5b08-4558-8a16-d8030f037c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.505 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb21adf-e895-4df4-9d63-61939ed2cea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.5337] device (tap60dcefc3-90): carrier: link connected
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.540 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6e04b0-ea0d-4e8f-99ff-c4f287f73e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c34c205-a0bd-423a-8065-ee4f8bd5f347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278562, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:06:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.582 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84dafd9c-ce50-4ec0-9850-7e283cef0ada]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:227a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469781, 'tstamp': 469781}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278563, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.598 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1789ac-eea6-4a9d-a936-94779aa069b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278564, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.618 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Successfully created port: 912a1604-2c79-46e1-8e58-77c169237654 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.625 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0e5af5-b4d6-4e7b-8ada-8a284ec4af6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.686 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[734a4c74-f585-4fe4-9e90-a1dd325eb6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.688 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.688 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.689 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 kernel: tap60dcefc3-90: entered promiscuous mode
Feb 28 10:06:30 compute-0 NetworkManager[49805]: <info>  [1772273190.6929] manager: (tap60dcefc3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.696 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.698 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:30 compute-0 ovn_controller[146846]: 2026-02-28T10:06:30Z|00299|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.713 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a54dad0-a2cf-4feb-8862-3ec449a4d42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.716 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/60dcefc3-95e1-437e-9c00-e51656c39b8f.pid.haproxy
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:06:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:30.716 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'env', 'PROCESS_TAG=haproxy-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60dcefc3-95e1-437e-9c00-e51656c39b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.831 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273190.8305783, c4dce6af-958c-4c5a-890b-469443cee915 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.831 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] VM Started (Lifecycle Event)
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.860 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.864 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273190.8331196, c4dce6af-958c-4c5a-890b-469443cee915 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.864 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] VM Paused (Lifecycle Event)
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.887 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.890 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:30 compute-0 nova_compute[243452]: 2026-02-28 10:06:30.913 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:31 compute-0 podman[278638]: 2026-02-28 10:06:31.124046602 +0000 UTC m=+0.058850666 container create b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:06:31 compute-0 systemd[1]: Started libpod-conmon-b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9.scope.
Feb 28 10:06:31 compute-0 podman[278638]: 2026-02-28 10:06:31.093419821 +0000 UTC m=+0.028223935 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:06:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbdceedde6d3f00a5c2e93b8e09e5525730c070b00811021d95e9188936c5e1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:31 compute-0 podman[278638]: 2026-02-28 10:06:31.227280404 +0000 UTC m=+0.162084518 container init b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:06:31 compute-0 podman[278638]: 2026-02-28 10:06:31.233721875 +0000 UTC m=+0.168525929 container start b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:06:31 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [NOTICE]   (278657) : New worker (278659) forked
Feb 28 10:06:31 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [NOTICE]   (278657) : Loading success.
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.365 243456 DEBUG nova.network.neutron [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.544 243456 INFO nova.compute.manager [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Took 1.70 seconds to deallocate network for instance.
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.735 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.736 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.769 243456 DEBUG nova.compute.manager [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.770 243456 DEBUG nova.compute.manager [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing instance network info cache due to event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.771 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.775 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.775 243456 DEBUG nova.network.neutron [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.789 243456 DEBUG nova.compute.manager [req-e2864731-7aea-41fd-875e-e1d241f52bed req-dea33259-b4cd-4a0d-b970-28b33bcda293 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.790 243456 DEBUG oslo_concurrency.lockutils [req-e2864731-7aea-41fd-875e-e1d241f52bed req-dea33259-b4cd-4a0d-b970-28b33bcda293 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.790 243456 DEBUG oslo_concurrency.lockutils [req-e2864731-7aea-41fd-875e-e1d241f52bed req-dea33259-b4cd-4a0d-b970-28b33bcda293 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.791 243456 DEBUG oslo_concurrency.lockutils [req-e2864731-7aea-41fd-875e-e1d241f52bed req-dea33259-b4cd-4a0d-b970-28b33bcda293 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.792 243456 DEBUG nova.compute.manager [req-e2864731-7aea-41fd-875e-e1d241f52bed req-dea33259-b4cd-4a0d-b970-28b33bcda293 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Processing event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.794 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.803 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273191.8024397, c4dce6af-958c-4c5a-890b-469443cee915 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.804 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] VM Resumed (Lifecycle Event)
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.806 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.814 243456 INFO nova.virt.libvirt.driver [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Instance spawned successfully.
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.815 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.923 243456 DEBUG oslo_concurrency.processutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 285 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 219 op/s
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.973 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.979 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Successfully updated port: 912a1604-2c79-46e1-8e58-77c169237654 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.986 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.990 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.991 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.991 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.992 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.993 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:31 compute-0 nova_compute[243452]: 2026-02-28 10:06:31.993 243456 DEBUG nova.virt.libvirt.driver [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.031 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.031 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquired lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.032 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.106 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.186 243456 INFO nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Took 7.99 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.188 243456 DEBUG nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.379 243456 INFO nova.compute.manager [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Took 9.62 seconds to build instance.
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.451 243456 DEBUG oslo_concurrency.lockutils [None req-09317cbb-67ca-40cb-af9a-1b2afdd7bfbb f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061973895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.528 243456 DEBUG oslo_concurrency.processutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.534 243456 DEBUG nova.compute.provider_tree [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.618 243456 DEBUG nova.scheduler.client.report [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.795 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.875 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:32 compute-0 nova_compute[243452]: 2026-02-28 10:06:32.878 243456 INFO nova.scheduler.client.report [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance a9cfb8a4-5855-4ff2-8afa-3e14094e801e
Feb 28 10:06:32 compute-0 ceph-mon[76304]: pgmap v1143: 305 pgs: 305 active+clean; 285 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 219 op/s
Feb 28 10:06:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4061973895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.336 243456 DEBUG oslo_concurrency.lockutils [None req-2b88e69f-628a-4f0f-b0bd-8d4c6ee6bba2 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.700 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.700 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.723 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.800 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.801 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.809 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:33 compute-0 nova_compute[243452]: 2026-02-28 10:06:33.810 243456 INFO nova.compute.claims [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 274 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 218 op/s
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.032 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589562054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.580 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.587 243456 DEBUG nova.compute.provider_tree [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.606 243456 DEBUG nova.scheduler.client.report [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.636 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.637 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.690 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.691 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.712 243456 INFO nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.732 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.826 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.828 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.828 243456 INFO nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Creating image(s)
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.860 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.898 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.937 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.947 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:34 compute-0 nova_compute[243452]: 2026-02-28 10:06:34.978 243456 DEBUG nova.policy [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '163582c3e6a34c87b52f82ac4f189f77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:34 compute-0 ceph-mon[76304]: pgmap v1144: 305 pgs: 305 active+clean; 274 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 218 op/s
Feb 28 10:06:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1589562054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.004 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.004 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.005 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.005 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.005 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] No waiting events found dispatching network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.005 243456 WARNING nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received unexpected event network-vif-plugged-53cf23fd-52e1-4c44-b96b-ca076c163326 for instance with vm_state deleted and task_state None.
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.006 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.006 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing instance network info cache due to event network-changed-0690a322-e7c3-413d-8780-d9d6a0f84fd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.006 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.014 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.015 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.015 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.017 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.043 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.048 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 07582da6-e482-439d-b147-937e74817014_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.092 243456 DEBUG nova.compute.manager [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.092 243456 DEBUG oslo_concurrency.lockutils [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.092 243456 DEBUG oslo_concurrency.lockutils [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.093 243456 DEBUG oslo_concurrency.lockutils [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.093 243456 DEBUG nova.compute.manager [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.093 243456 WARNING nova.compute.manager [req-59e02494-5254-4ce3-a175-965da2e86458 req-5503d1e7-dbc9-4d5b-8e42-74ed74a69c77 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e for instance with vm_state active and task_state None.
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.195 243456 DEBUG nova.network.neutron [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Updating instance_info_cache with network_info: [{"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.227 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Releasing lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.229 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Instance network_info: |[{"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.234 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Start _get_guest_xml network_info=[{"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.245 243456 WARNING nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.251 243456 DEBUG nova.virt.libvirt.host [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.251 243456 DEBUG nova.virt.libvirt.host [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.255 243456 DEBUG nova.virt.libvirt.host [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.256 243456 DEBUG nova.virt.libvirt.host [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.256 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.257 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.257 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.257 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.257 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.257 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.258 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.258 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.259 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.259 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.259 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.260 243456 DEBUG nova.virt.hardware [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.264 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.301 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 07582da6-e482-439d-b147-937e74817014_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.366 243456 DEBUG nova.network.neutron [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updated VIF entry in instance network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.367 243456 DEBUG nova.network.neutron [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.371 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] resizing rbd image 07582da6-e482-439d-b147-937e74817014_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.397 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.398 243456 DEBUG nova.compute.manager [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-vif-unplugged-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.398 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.398 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.399 243456 DEBUG oslo_concurrency.lockutils [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a9cfb8a4-5855-4ff2-8afa-3e14094e801e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.399 243456 DEBUG nova.compute.manager [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] No waiting events found dispatching network-vif-unplugged-53cf23fd-52e1-4c44-b96b-ca076c163326 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.399 243456 WARNING nova.compute.manager [req-0824183c-0d3d-43e0-a91a-fc3f127389e2 req-97148b22-0eb3-492e-8658-c4d572935d30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received unexpected event network-vif-unplugged-53cf23fd-52e1-4c44-b96b-ca076c163326 for instance with vm_state deleted and task_state None.
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.399 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.400 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Refreshing network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.599 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Successfully created port: a4078728-674b-40cc-afbf-b4fc763283e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.610 243456 DEBUG nova.objects.instance [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 07582da6-e482-439d-b147-937e74817014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.629 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.630 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Ensure instance console log exists: /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.630 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.630 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.630 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624466232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.854 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.879 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:35 compute-0 nova_compute[243452]: 2026-02-28 10:06:35.885 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 320 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.3 MiB/s wr, 294 op/s
Feb 28 10:06:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/624466232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:36 compute-0 ovn_controller[146846]: 2026-02-28T10:06:36Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:c6:3c 10.100.0.12
Feb 28 10:06:36 compute-0 ovn_controller[146846]: 2026-02-28T10:06:36Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:c6:3c 10.100.0.12
Feb 28 10:06:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828299724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.527 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.530 243456 DEBUG nova.virt.libvirt.vif [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-63330761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-63330761',id=41,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2155277af74424e955b1904a947ab64',ramdisk_id='',reservation_id='r-lakmz1rt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1945846185',owner_user_name='tempest-ServerTagsTestJSON-1945846185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:29Z,user_data=None,user_id='d83ef2b77047458db9060496f444a384',uuid=2b863bc5-8018-491d-82ee-dbc8f40d5aff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.530 243456 DEBUG nova.network.os_vif_util [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converting VIF {"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.531 243456 DEBUG nova.network.os_vif_util [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.533 243456 DEBUG nova.objects.instance [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b863bc5-8018-491d-82ee-dbc8f40d5aff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.553 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <uuid>2b863bc5-8018-491d-82ee-dbc8f40d5aff</uuid>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <name>instance-00000029</name>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerTagsTestJSON-server-63330761</nova:name>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:35</nova:creationTime>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:user uuid="d83ef2b77047458db9060496f444a384">tempest-ServerTagsTestJSON-1945846185-project-member</nova:user>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:project uuid="b2155277af74424e955b1904a947ab64">tempest-ServerTagsTestJSON-1945846185</nova:project>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <nova:port uuid="912a1604-2c79-46e1-8e58-77c169237654">
Feb 28 10:06:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="serial">2b863bc5-8018-491d-82ee-dbc8f40d5aff</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="uuid">2b863bc5-8018-491d-82ee-dbc8f40d5aff</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk">
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config">
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:47:59:9b"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <target dev="tap912a1604-2c"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/console.log" append="off"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.559 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Preparing to wait for external event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.560 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.560 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.560 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.561 243456 DEBUG nova.virt.libvirt.vif [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-63330761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-63330761',id=41,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2155277af74424e955b1904a947ab64',ramdisk_id='',reservation_id='r-lakmz1rt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1945846185',owner_user_name='tempest-ServerTagsTestJSON-1945846185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:29Z,user_data=None,user_id='d83ef2b77047458db9060496f444a384',uuid=2b863bc5-8018-491d-82ee-dbc8f40d5aff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.561 243456 DEBUG nova.network.os_vif_util [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converting VIF {"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.562 243456 DEBUG nova.network.os_vif_util [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.562 243456 DEBUG os_vif [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.564 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.565 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.571 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap912a1604-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.572 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap912a1604-2c, col_values=(('external_ids', {'iface-id': '912a1604-2c79-46e1-8e58-77c169237654', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:59:9b', 'vm-uuid': '2b863bc5-8018-491d-82ee-dbc8f40d5aff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:36 compute-0 NetworkManager[49805]: <info>  [1772273196.5745] manager: (tap912a1604-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.582 243456 INFO os_vif [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c')
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.637 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.639 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.639 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] No VIF found with MAC fa:16:3e:47:59:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.640 243456 INFO nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Using config drive
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.666 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.944 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273181.9429233, 98ff9b81-a55d-49aa-903f-1f00ff96f985 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:36 compute-0 nova_compute[243452]: 2026-02-28 10:06:36.945 243456 INFO nova.compute.manager [-] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] VM Stopped (Lifecycle Event)
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.002 243456 DEBUG nova.compute.manager [None req-8fad9daa-b7ba-4f3a-9ba1-04095919659b - - - - - -] [instance: 98ff9b81-a55d-49aa-903f-1f00ff96f985] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:37 compute-0 ceph-mon[76304]: pgmap v1145: 305 pgs: 305 active+clean; 320 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.3 MiB/s wr, 294 op/s
Feb 28 10:06:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1828299724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Feb 28 10:06:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Feb 28 10:06:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.819 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Successfully updated port: a4078728-674b-40cc-afbf-b4fc763283e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.836 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.837 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquired lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.837 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.915 243456 INFO nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Creating config drive at /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.921 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp057ygt1c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 353 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.5 MiB/s wr, 302 op/s
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.958 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updated VIF entry in instance network info cache for port 0690a322-e7c3-413d-8780-d9d6a0f84fd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.959 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.979 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.979 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Received event network-vif-deleted-53cf23fd-52e1-4c44-b96b-ca076c163326 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.980 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-changed-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.980 243456 DEBUG nova.compute.manager [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Refreshing instance network info cache due to event network-changed-912a1604-2c79-46e1-8e58-77c169237654. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.980 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.981 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:37 compute-0 nova_compute[243452]: 2026-02-28 10:06:37.981 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Refreshing network info cache for port 912a1604-2c79-46e1-8e58-77c169237654 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.027 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.066 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp057ygt1c" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.096 243456 DEBUG nova.storage.rbd_utils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] rbd image 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.102 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.250 243456 DEBUG oslo_concurrency.processutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config 2b863bc5-8018-491d-82ee-dbc8f40d5aff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.253 243456 INFO nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Deleting local config drive /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff/disk.config because it was imported into RBD.
Feb 28 10:06:38 compute-0 kernel: tap912a1604-2c: entered promiscuous mode
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.3283] manager: (tap912a1604-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Feb 28 10:06:38 compute-0 ovn_controller[146846]: 2026-02-28T10:06:38Z|00300|binding|INFO|Claiming lport 912a1604-2c79-46e1-8e58-77c169237654 for this chassis.
Feb 28 10:06:38 compute-0 ovn_controller[146846]: 2026-02-28T10:06:38Z|00301|binding|INFO|912a1604-2c79-46e1-8e58-77c169237654: Claiming fa:16:3e:47:59:9b 10.100.0.13
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.339 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:59:9b 10.100.0.13'], port_security=['fa:16:3e:47:59:9b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b863bc5-8018-491d-82ee-dbc8f40d5aff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3064b28-4cc5-4292-a411-e47f76307d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2155277af74424e955b1904a947ab64', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f66cfa-9d0f-49b8-9566-0ccda1e69bc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df65cb92-cb3c-452d-9654-0318a819dc9b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=912a1604-2c79-46e1-8e58-77c169237654) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.340 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 912a1604-2c79-46e1-8e58-77c169237654 in datapath c3064b28-4cc5-4292-a411-e47f76307d19 bound to our chassis
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.344 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3064b28-4cc5-4292-a411-e47f76307d19
Feb 28 10:06:38 compute-0 ovn_controller[146846]: 2026-02-28T10:06:38Z|00302|binding|INFO|Setting lport 912a1604-2c79-46e1-8e58-77c169237654 ovn-installed in OVS
Feb 28 10:06:38 compute-0 ovn_controller[146846]: 2026-02-28T10:06:38Z|00303|binding|INFO|Setting lport 912a1604-2c79-46e1-8e58-77c169237654 up in Southbound
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.361 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4115edcb-aa86-4a17-bae7-05c72e0426b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.362 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc3064b28-41 in ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.366 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc3064b28-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.366 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c863ee76-d2c5-4d8f-b5e0-5f7853dac01d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 systemd-machined[209480]: New machine qemu-45-instance-00000029.
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.367 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18f8c84c-3dce-4ada-b762-98e58e92ee64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.386 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5b25c26e-4aad-4de2-8136-83b765aeda4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Feb 28 10:06:38 compute-0 systemd-udevd[279017]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.406 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9136d311-9571-4f78-8ab7-9125854536c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.4202] device (tap912a1604-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.4212] device (tap912a1604-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.445 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7a3ea6-f251-4434-aa36-7e329c8bd02a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.4586] manager: (tapc3064b28-40): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.457 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb1c9d0-34c5-4bb8-8caa-0732248c6141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.494 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a61ee4-1ad8-4e36-9c74-44de436c8657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.497 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0d622bfd-34dc-4a24-aef0-70f60e07820d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.5209] device (tapc3064b28-40): carrier: link connected
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.527 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[73fa970e-5807-40ff-b438-d8868babdd5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.548 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee9e8c9-8b98-4aeb-a072-b89728493721]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3064b28-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:16:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 32023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279047, 'error': None, 'target': 'ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.560 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.572 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb40a45e-3a8a-4d25-9f8f-26e7fd52151e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:16a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470580, 'tstamp': 470580}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279048, 'error': None, 'target': 'ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5904ac48-f2c5-4ba9-9c6c-105603be0fce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3064b28-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:16:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470580, 'reachable_time': 32023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279049, 'error': None, 'target': 'ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.638 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef52c71-a823-4ee4-a67f-ab43cd70d80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a4369ade-a5d3-4a46-b83a-0588aff2d8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.710 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3064b28-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.710 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.710 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3064b28-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 NetworkManager[49805]: <info>  [1772273198.7140] manager: (tapc3064b28-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Feb 28 10:06:38 compute-0 kernel: tapc3064b28-40: entered promiscuous mode
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.720 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3064b28-40, col_values=(('external_ids', {'iface-id': 'ab3b1171-3cb3-45ae-ad2c-84e5980d2759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_controller[146846]: 2026-02-28T10:06:38Z|00304|binding|INFO|Releasing lport ab3b1171-3cb3-45ae-ad2c-84e5980d2759 from this chassis (sb_readonly=0)
Feb 28 10:06:38 compute-0 ceph-mon[76304]: osdmap e162: 3 total, 3 up, 3 in
Feb 28 10:06:38 compute-0 ceph-mon[76304]: pgmap v1147: 305 pgs: 305 active+clean; 353 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.5 MiB/s wr, 302 op/s
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.736 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c3064b28-4cc5-4292-a411-e47f76307d19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c3064b28-4cc5-4292-a411-e47f76307d19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.738 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[517cc490-156f-49af-b188-6b39cfe8148d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.739 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c3064b28-4cc5-4292-a411-e47f76307d19
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c3064b28-4cc5-4292-a411-e47f76307d19.pid.haproxy
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c3064b28-4cc5-4292-a411-e47f76307d19
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:06:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:38.740 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19', 'env', 'PROCESS_TAG=haproxy-c3064b28-4cc5-4292-a411-e47f76307d19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c3064b28-4cc5-4292-a411-e47f76307d19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.768 243456 DEBUG nova.compute.manager [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.768 243456 DEBUG nova.compute.manager [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.769 243456 DEBUG oslo_concurrency.lockutils [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.769 243456 DEBUG oslo_concurrency.lockutils [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:38 compute-0 nova_compute[243452]: 2026-02-28 10:06:38.769 243456 DEBUG nova.network.neutron [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.005 243456 DEBUG nova.network.neutron [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Updating instance_info_cache with network_info: [{"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.033 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Releasing lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.033 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Instance network_info: |[{"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.036 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Start _get_guest_xml network_info=[{"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.042 243456 WARNING nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.048 243456 DEBUG nova.virt.libvirt.host [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.048 243456 DEBUG nova.virt.libvirt.host [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.052 243456 DEBUG nova.virt.libvirt.host [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.053 243456 DEBUG nova.virt.libvirt.host [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.053 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.053 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.054 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.054 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.055 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.055 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.055 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.055 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.056 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.056 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.056 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.056 243456 DEBUG nova.virt.hardware [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.060 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:39 compute-0 podman[279081]: 2026-02-28 10:06:39.165554254 +0000 UTC m=+0.077650984 container create 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:06:39 compute-0 podman[279081]: 2026-02-28 10:06:39.118183102 +0000 UTC m=+0.030279842 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:06:39 compute-0 systemd[1]: Started libpod-conmon-6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c.scope.
Feb 28 10:06:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5391d28ebdc6627030e5170cab093dc9322b6d8357983b907233eb7406ba7c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:39 compute-0 podman[279081]: 2026-02-28 10:06:39.272019627 +0000 UTC m=+0.184116367 container init 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:06:39 compute-0 podman[279081]: 2026-02-28 10:06:39.27711769 +0000 UTC m=+0.189214420 container start 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 28 10:06:39 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [NOTICE]   (279161) : New worker (279163) forked
Feb 28 10:06:39 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [NOTICE]   (279161) : Loading success.
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.365 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273199.3637064, 2b863bc5-8018-491d-82ee-dbc8f40d5aff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.367 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] VM Started (Lifecycle Event)
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.392 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.398 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273199.3638322, 2b863bc5-8018-491d-82ee-dbc8f40d5aff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.399 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] VM Paused (Lifecycle Event)
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.432 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.459 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131927006' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.680 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.709 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:39 compute-0 nova_compute[243452]: 2026-02-28 10:06:39.716 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3131927006' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 370 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 265 op/s
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.075 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Updated VIF entry in instance network info cache for port 912a1604-2c79-46e1-8e58-77c169237654. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.076 243456 DEBUG nova.network.neutron [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Updating instance_info_cache with network_info: [{"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.101 243456 DEBUG oslo_concurrency.lockutils [req-61966931-9139-47c1-9ba4-7274647103d9 req-49164ca5-446a-4415-bae7-f5cc407f0b44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-2b863bc5-8018-491d-82ee-dbc8f40d5aff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101873098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.269 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.271 243456 DEBUG nova.virt.libvirt.vif [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1018606413',display_name='tempest-ImagesTestJSON-server-1018606413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1018606413',id=42,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-xv5jol0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:34Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=07582da6-e482-439d-b147-937e74817014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.271 243456 DEBUG nova.network.os_vif_util [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.273 243456 DEBUG nova.network.os_vif_util [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.275 243456 DEBUG nova.objects.instance [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 07582da6-e482-439d-b147-937e74817014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.299 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <uuid>07582da6-e482-439d-b147-937e74817014</uuid>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <name>instance-0000002a</name>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesTestJSON-server-1018606413</nova:name>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:39</nova:creationTime>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:user uuid="163582c3e6a34c87b52f82ac4f189f77">tempest-ImagesTestJSON-2059286278-project-member</nova:user>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:project uuid="a2ce6ed219d94b3b88c2d2d7001f6c3a">tempest-ImagesTestJSON-2059286278</nova:project>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <nova:port uuid="a4078728-674b-40cc-afbf-b4fc763283e1">
Feb 28 10:06:40 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="serial">07582da6-e482-439d-b147-937e74817014</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="uuid">07582da6-e482-439d-b147-937e74817014</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/07582da6-e482-439d-b147-937e74817014_disk">
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/07582da6-e482-439d-b147-937e74817014_disk.config">
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:40 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:fc:62:fa"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <target dev="tapa4078728-67"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/console.log" append="off"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:40 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:40 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:40 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:40 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:40 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.300 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Preparing to wait for external event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.301 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.302 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.303 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.304 243456 DEBUG nova.virt.libvirt.vif [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1018606413',display_name='tempest-ImagesTestJSON-server-1018606413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1018606413',id=42,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-xv5jol0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:34Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=07582da6-e482-439d-b147-937e74817014,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.304 243456 DEBUG nova.network.os_vif_util [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.305 243456 DEBUG nova.network.os_vif_util [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.305 243456 DEBUG os_vif [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.306 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.307 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.310 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4078728-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.310 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4078728-67, col_values=(('external_ids', {'iface-id': 'a4078728-674b-40cc-afbf-b4fc763283e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:62:fa', 'vm-uuid': '07582da6-e482-439d-b147-937e74817014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:40 compute-0 NetworkManager[49805]: <info>  [1772273200.3140] manager: (tapa4078728-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.322 243456 INFO os_vif [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67')
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.382 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.382 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.383 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No VIF found with MAC fa:16:3e:fc:62:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.383 243456 INFO nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Using config drive
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.408 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.681 243456 DEBUG nova.network.neutron [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.681 243456 DEBUG nova.network.neutron [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018021200305123716 of space, bias 1.0, pg target 0.5406360091537115 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024919258847230113 of space, bias 1.0, pg target 0.7475777654169034 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.794984982815883e-07 of space, bias 4.0, pg target 0.000935398197937906 quantized to 16 (current 16)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:06:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.726 243456 DEBUG oslo_concurrency.lockutils [req-71547e36-b2e1-434c-b415-93ca7755aa84 req-f92c72b9-d6c5-4418-ac33-92389db1afff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:40 compute-0 ceph-mon[76304]: pgmap v1148: 305 pgs: 305 active+clean; 370 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 265 op/s
Feb 28 10:06:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2101873098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.870 243456 INFO nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Creating config drive at /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config
Feb 28 10:06:40 compute-0 nova_compute[243452]: 2026-02-28 10:06:40.877 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpla2eqt64 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.005 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-changed-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.006 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Refreshing instance network info cache due to event network-changed-a4078728-674b-40cc-afbf-b4fc763283e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.007 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.008 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.008 243456 DEBUG nova.network.neutron [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Refreshing network info cache for port a4078728-674b-40cc-afbf-b4fc763283e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.011 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.012 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.028 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpla2eqt64" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.073 243456 DEBUG nova.storage.rbd_utils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] rbd image 07582da6-e482-439d-b147-937e74817014_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.082 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config 07582da6-e482-439d-b147-937e74817014_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.129 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.224 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.225 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.233 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.234 243456 INFO nova.compute.claims [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.275 243456 DEBUG oslo_concurrency.processutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config 07582da6-e482-439d-b147-937e74817014_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.276 243456 INFO nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Deleting local config drive /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014/disk.config because it was imported into RBD.
Feb 28 10:06:41 compute-0 kernel: tapa4078728-67: entered promiscuous mode
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 ovn_controller[146846]: 2026-02-28T10:06:41Z|00305|binding|INFO|Claiming lport a4078728-674b-40cc-afbf-b4fc763283e1 for this chassis.
Feb 28 10:06:41 compute-0 ovn_controller[146846]: 2026-02-28T10:06:41Z|00306|binding|INFO|a4078728-674b-40cc-afbf-b4fc763283e1: Claiming fa:16:3e:fc:62:fa 10.100.0.5
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.3354] manager: (tapa4078728-67): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Feb 28 10:06:41 compute-0 ovn_controller[146846]: 2026-02-28T10:06:41Z|00307|binding|INFO|Setting lport a4078728-674b-40cc-afbf-b4fc763283e1 ovn-installed in OVS
Feb 28 10:06:41 compute-0 ovn_controller[146846]: 2026-02-28T10:06:41Z|00308|binding|INFO|Setting lport a4078728-674b-40cc-afbf-b4fc763283e1 up in Southbound
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.339 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:62:fa 10.100.0.5'], port_security=['fa:16:3e:fc:62:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '07582da6-e482-439d-b147-937e74817014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4078728-674b-40cc-afbf-b4fc763283e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.340 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4078728-674b-40cc-afbf-b4fc763283e1 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 bound to our chassis
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.342 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.361 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a6e98b-78ef-4e49-8e31-861b270d8b1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.363 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3a8395bc-d1 in ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.365 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3a8395bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd9038f-3c4a-4575-81c6-a1c7a2b990de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.366 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c937795-dee4-437d-9533-ee67f335df1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 systemd-machined[209480]: New machine qemu-46-instance-0000002a.
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.379 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[18b4601f-241c-4c73-9123-22f95df63864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Feb 28 10:06:41 compute-0 systemd-udevd[279292]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.409 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[395d047b-1897-43d7-8dc9-5a4a1f89ade8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.4152] device (tapa4078728-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.4159] device (tapa4078728-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.428 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.444 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cc032776-21d5-4dfe-b998-38da2ff9fc68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.4540] manager: (tap3a8395bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.456 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07748c53-b86d-46c8-ba71-7d73defd71b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.489 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f79aae71-5d4f-49a8-8095-9cac7902e1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.493 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5396941e-20d9-4892-8361-c1e97aedafea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.5112] device (tap3a8395bc-d0): carrier: link connected
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.515 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d19834-1f3d-433c-b1b3-906ed748e253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.531 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[47e91d65-17aa-48f3-8fec-49ac1c79a16f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470879, 'reachable_time': 29026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279323, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.547 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a972092-3f16-4dfb-b3f1-66f6e3824894]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:6b8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 470879, 'tstamp': 470879}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279324, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[715e32bc-5b03-43de-834d-0c5ce9b4c5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3a8395bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:6b:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470879, 'reachable_time': 29026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279327, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.592 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8263c6d8-f502-438a-b537-b3e375b38af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.648 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d91df8f-f043-4be6-8f54-733c30dbf526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.650 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.650 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.651 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8395bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:41 compute-0 kernel: tap3a8395bc-d0: entered promiscuous mode
Feb 28 10:06:41 compute-0 NetworkManager[49805]: <info>  [1772273201.6539] manager: (tap3a8395bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.658 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3a8395bc-d0, col_values=(('external_ids', {'iface-id': '5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.660 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:06:41 compute-0 ovn_controller[146846]: 2026-02-28T10:06:41Z|00309|binding|INFO|Releasing lport 5ebe3a2c-2a7c-485c-a1b1-d51cfb0beba3 from this chassis (sb_readonly=0)
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b23bbb6a-08f6-43ef-b494-f7a9085ff007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.663 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.pid.haproxy
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3a8395bc-d7fc-4457-8cb4-52e2b9305b61
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:06:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:41.664 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'env', 'PROCESS_TAG=haproxy-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3a8395bc-d7fc-4457-8cb4-52e2b9305b61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.750 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273201.75001, 07582da6-e482-439d-b147-937e74817014 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.751 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] VM Started (Lifecycle Event)
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.784 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.791 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273201.7540486, 07582da6-e482-439d-b147-937e74817014 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.792 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] VM Paused (Lifecycle Event)
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.818 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.822 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.852 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.893 243456 DEBUG nova.compute.manager [req-4b3fd3ec-79c0-49ba-8128-e84145192578 req-2923832d-f7ca-4c95-8563-f3e92caa7f73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.894 243456 DEBUG oslo_concurrency.lockutils [req-4b3fd3ec-79c0-49ba-8128-e84145192578 req-2923832d-f7ca-4c95-8563-f3e92caa7f73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.894 243456 DEBUG oslo_concurrency.lockutils [req-4b3fd3ec-79c0-49ba-8128-e84145192578 req-2923832d-f7ca-4c95-8563-f3e92caa7f73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.894 243456 DEBUG oslo_concurrency.lockutils [req-4b3fd3ec-79c0-49ba-8128-e84145192578 req-2923832d-f7ca-4c95-8563-f3e92caa7f73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.894 243456 DEBUG nova.compute.manager [req-4b3fd3ec-79c0-49ba-8128-e84145192578 req-2923832d-f7ca-4c95-8563-f3e92caa7f73 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Processing event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.895 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.901 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273201.899806, 07582da6-e482-439d-b147-937e74817014 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.901 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] VM Resumed (Lifecycle Event)
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.904 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.908 243456 INFO nova.virt.libvirt.driver [-] [instance: 07582da6-e482-439d-b147-937e74817014] Instance spawned successfully.
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.908 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:06:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.5 MiB/s wr, 235 op/s
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.947 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.955 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.956 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.957 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.957 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.958 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.959 243456 DEBUG nova.virt.libvirt.driver [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:41 compute-0 nova_compute[243452]: 2026-02-28 10:06:41.967 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682387485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.012 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.018 243456 DEBUG nova.compute.provider_tree [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:42 compute-0 podman[279418]: 2026-02-28 10:06:42.04557735 +0000 UTC m=+0.065456941 container create 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:06:42 compute-0 systemd[1]: Started libpod-conmon-3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de.scope.
Feb 28 10:06:42 compute-0 podman[279418]: 2026-02-28 10:06:42.014761384 +0000 UTC m=+0.034641025 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:06:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead57a8a8f6ad15e0f56df06d0bc6e6ef8a8568bf8f95b75a1fb2f55af30a5d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.131 243456 DEBUG nova.scheduler.client.report [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:42 compute-0 podman[279418]: 2026-02-28 10:06:42.137831534 +0000 UTC m=+0.157711105 container init 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.139 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:42 compute-0 podman[279418]: 2026-02-28 10:06:42.142736702 +0000 UTC m=+0.162616283 container start 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.161 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.162 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:42 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [NOTICE]   (279440) : New worker (279442) forked
Feb 28 10:06:42 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [NOTICE]   (279440) : Loading success.
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.170 243456 INFO nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Took 7.34 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.171 243456 DEBUG nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.238 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.238 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.250 243456 INFO nova.compute.manager [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Took 8.47 seconds to build instance.
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.261 243456 INFO nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.278 243456 DEBUG oslo_concurrency.lockutils [None req-1a0dcb21-07e9-4036-a49b-c06881808747 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.284 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.390 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.393 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.394 243456 INFO nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Creating image(s)
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.445 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.486 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.521 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.527 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.623 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.625 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.626 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.654 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.659 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.938 243456 DEBUG nova.policy [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:42 compute-0 nova_compute[243452]: 2026-02-28 10:06:42.944 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:42 compute-0 ceph-mon[76304]: pgmap v1149: 305 pgs: 305 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.5 MiB/s wr, 235 op/s
Feb 28 10:06:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2682387485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.013 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] resizing rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.082 243456 DEBUG nova.objects.instance [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'migration_context' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.097 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.097 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Ensure instance console log exists: /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.098 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.098 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.098 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:43 compute-0 ovn_controller[146846]: 2026-02-28T10:06:43Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:3f:66 10.100.0.14
Feb 28 10:06:43 compute-0 ovn_controller[146846]: 2026-02-28T10:06:43Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:3f:66 10.100.0.14
Feb 28 10:06:43 compute-0 nova_compute[243452]: 2026-02-28 10:06:43.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 380 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 240 op/s
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.129 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273189.1289365, a9cfb8a4-5855-4ff2-8afa-3e14094e801e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.130 243456 INFO nova.compute.manager [-] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] VM Stopped (Lifecycle Event)
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.194 243456 DEBUG nova.compute.manager [None req-db12bae4-32a6-4060-9d70-4a262be8f2a7 - - - - - -] [instance: a9cfb8a4-5855-4ff2-8afa-3e14094e801e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 DEBUG nova.compute.manager [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 DEBUG oslo_concurrency.lockutils [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 DEBUG oslo_concurrency.lockutils [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 DEBUG oslo_concurrency.lockutils [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 DEBUG nova.compute.manager [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] No waiting events found dispatching network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.205 243456 WARNING nova.compute.manager [req-b086c203-c89c-4021-a620-f7a7d5b106fd req-8ece43e7-b327-4dca-bef3-b54f949b26e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received unexpected event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 for instance with vm_state active and task_state None.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.221 243456 DEBUG nova.network.neutron [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Updated VIF entry in instance network info cache for port a4078728-674b-40cc-afbf-b4fc763283e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.221 243456 DEBUG nova.network.neutron [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Updating instance_info_cache with network_info: [{"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.243 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-07582da6-e482-439d-b147-937e74817014" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Processing event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.244 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.245 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.245 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.245 243456 DEBUG oslo_concurrency.lockutils [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.245 243456 DEBUG nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] No waiting events found dispatching network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.245 243456 WARNING nova.compute.manager [req-ea83963e-8a78-46da-9c8f-03f7309b9c87 req-98711882-d8a8-4948-8935-8c1faebbd4e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received unexpected event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 for instance with vm_state building and task_state spawning.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.246 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.250 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273204.2502446, 2b863bc5-8018-491d-82ee-dbc8f40d5aff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.250 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] VM Resumed (Lifecycle Event)
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.256 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.260 243456 INFO nova.virt.libvirt.driver [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Instance spawned successfully.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.260 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.284 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.293 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.299 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.299 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.299 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.300 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.300 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.300 243456 DEBUG nova.virt.libvirt.driver [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.336 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.363 243456 INFO nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Took 15.21 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.364 243456 DEBUG nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.433 243456 INFO nova.compute.manager [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Took 16.50 seconds to build instance.
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.459 243456 DEBUG oslo_concurrency.lockutils [None req-64a2ada9-b067-42a9-86c8-0a048f593bf1 d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:44 compute-0 nova_compute[243452]: 2026-02-28 10:06:44.499 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Successfully created port: f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:44 compute-0 ceph-mon[76304]: pgmap v1150: 305 pgs: 305 active+clean; 380 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 240 op/s
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.345 243456 DEBUG nova.compute.manager [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.394 243456 INFO nova.compute.manager [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] instance snapshotting
Feb 28 10:06:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:06:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2508372817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:06:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:06:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2508372817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.656 243456 INFO nova.virt.libvirt.driver [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Beginning live snapshot process
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.828 243456 DEBUG nova.virt.libvirt.imagebackend [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:06:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 421 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.2 MiB/s wr, 233 op/s
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.932 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Successfully updated port: f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.955 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.955 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:45 compute-0 nova_compute[243452]: 2026-02-28 10:06:45.955 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2508372817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:06:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2508372817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:06:46 compute-0 nova_compute[243452]: 2026-02-28 10:06:46.170 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(b31c8e6466464c99ae568218ac61c3e2) on rbd image(07582da6-e482-439d-b147-937e74817014_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:06:46 compute-0 nova_compute[243452]: 2026-02-28 10:06:46.216 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:46 compute-0 nova_compute[243452]: 2026-02-28 10:06:46.476 243456 DEBUG nova.compute.manager [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:46 compute-0 nova_compute[243452]: 2026-02-28 10:06:46.476 243456 DEBUG nova.compute.manager [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing instance network info cache due to event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:46 compute-0 nova_compute[243452]: 2026-02-28 10:06:46.477 243456 DEBUG oslo_concurrency.lockutils [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Feb 28 10:06:47 compute-0 ceph-mon[76304]: pgmap v1151: 305 pgs: 305 active+clean; 421 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.2 MiB/s wr, 233 op/s
Feb 28 10:06:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Feb 28 10:06:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.083 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] cloning vms/07582da6-e482-439d-b147-937e74817014_disk@b31c8e6466464c99ae568218ac61c3e2 to images/04d681c2-17b7-46f1-b55e-50765e3f3f4c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.198 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] flattening images/04d681c2-17b7-46f1-b55e-50765e3f3f4c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.454 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(b31c8e6466464c99ae568218ac61c3e2) on rbd image(07582da6-e482-439d-b147-937e74817014_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.514 243456 DEBUG nova.network.neutron [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.529 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.530 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Instance network_info: |[{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.530 243456 DEBUG oslo_concurrency.lockutils [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.531 243456 DEBUG nova.network.neutron [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.535 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Start _get_guest_xml network_info=[{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.539 243456 WARNING nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.544 243456 DEBUG nova.virt.libvirt.host [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.545 243456 DEBUG nova.virt.libvirt.host [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.547 243456 DEBUG nova.virt.libvirt.host [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.548 243456 DEBUG nova.virt.libvirt.host [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.548 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.548 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.549 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.549 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.549 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.549 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.549 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.550 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.550 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.550 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.551 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.552 243456 DEBUG nova.virt.hardware [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:47 compute-0 nova_compute[243452]: 2026-02-28 10:06:47.554 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 465 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.4 MiB/s wr, 323 op/s
Feb 28 10:06:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Feb 28 10:06:48 compute-0 ceph-mon[76304]: osdmap e163: 3 total, 3 up, 3 in
Feb 28 10:06:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Feb 28 10:06:48 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Feb 28 10:06:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832252638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.118 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] creating snapshot(snap) on rbd image(04d681c2-17b7-46f1-b55e-50765e3f3f4c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.200 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.242 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.249 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503147355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.799 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.802 243456 DEBUG nova.virt.libvirt.vif [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.803 243456 DEBUG nova.network.os_vif_util [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.804 243456 DEBUG nova.network.os_vif_util [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.807 243456 DEBUG nova.objects.instance [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.837 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <uuid>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</uuid>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <name>instance-0000002b</name>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-1814385188</nova:name>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:47</nova:creationTime>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <nova:port uuid="f2eab801-3c6f-481b-98bf-9751a9a7c6d6">
Feb 28 10:06:48 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="serial">9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="uuid">9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk">
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config">
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:75:95:ff"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <target dev="tapf2eab801-3c"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log" append="off"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:48 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:48 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:48 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:48 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:48 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.839 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Preparing to wait for external event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.839 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.839 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.840 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.840 243456 DEBUG nova.virt.libvirt.vif [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.841 243456 DEBUG nova.network.os_vif_util [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.841 243456 DEBUG nova.network.os_vif_util [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.842 243456 DEBUG os_vif [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.843 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.853 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2eab801-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.853 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2eab801-3c, col_values=(('external_ids', {'iface-id': 'f2eab801-3c6f-481b-98bf-9751a9a7c6d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:95:ff', 'vm-uuid': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:48 compute-0 NetworkManager[49805]: <info>  [1772273208.8568] manager: (tapf2eab801-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.865 243456 INFO os_vif [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c')
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.942 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.944 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.944 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:75:95:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.945 243456 INFO nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Using config drive
Feb 28 10:06:48 compute-0 nova_compute[243452]: 2026-02-28 10:06:48.969 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Feb 28 10:06:49 compute-0 ceph-mon[76304]: pgmap v1153: 305 pgs: 305 active+clean; 465 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.4 MiB/s wr, 323 op/s
Feb 28 10:06:49 compute-0 ceph-mon[76304]: osdmap e164: 3 total, 3 up, 3 in
Feb 28 10:06:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/832252638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2503147355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Feb 28 10:06:49 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 04d681c2-17b7-46f1-b55e-50765e3f3f4c could not be found.
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 04d681c2-17b7-46f1-b55e-50765e3f3f4c
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 04d681c2-17b7-46f1-b55e-50765e3f3f4c could not be found.
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.260 243456 ERROR nova.virt.libvirt.driver 
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.302 243456 DEBUG nova.storage.rbd_utils [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] removing snapshot(snap) on rbd image(04d681c2-17b7-46f1-b55e-50765e3f3f4c) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.369 243456 INFO nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Creating config drive at /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.375 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf1ykicbw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.407 243456 DEBUG nova.network.neutron [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updated VIF entry in instance network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.408 243456 DEBUG nova.network.neutron [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.424 243456 DEBUG oslo_concurrency.lockutils [req-0baccf69-a112-48a4-9a99-76eced348c0d req-00f06476-5e7e-4363-a591-f5b87dbfeeff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.514 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf1ykicbw" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.545 243456 DEBUG nova.storage.rbd_utils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] rbd image 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.549 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.702 243456 DEBUG oslo_concurrency.processutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config 9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.703 243456 INFO nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Deleting local config drive /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/disk.config because it was imported into RBD.
Feb 28 10:06:49 compute-0 kernel: tapf2eab801-3c: entered promiscuous mode
Feb 28 10:06:49 compute-0 NetworkManager[49805]: <info>  [1772273209.7760] manager: (tapf2eab801-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:49 compute-0 ovn_controller[146846]: 2026-02-28T10:06:49Z|00310|binding|INFO|Claiming lport f2eab801-3c6f-481b-98bf-9751a9a7c6d6 for this chassis.
Feb 28 10:06:49 compute-0 ovn_controller[146846]: 2026-02-28T10:06:49Z|00311|binding|INFO|f2eab801-3c6f-481b-98bf-9751a9a7c6d6: Claiming fa:16:3e:75:95:ff 10.100.0.8
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.789 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:95:ff 10.100.0.8'], port_security=['fa:16:3e:75:95:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b06e2ec4-e889-49ea-aafd-6900649d681f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f2eab801-3c6f-481b-98bf-9751a9a7c6d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.791 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.793 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:06:49 compute-0 ovn_controller[146846]: 2026-02-28T10:06:49Z|00312|binding|INFO|Setting lport f2eab801-3c6f-481b-98bf-9751a9a7c6d6 ovn-installed in OVS
Feb 28 10:06:49 compute-0 ovn_controller[146846]: 2026-02-28T10:06:49Z|00313|binding|INFO|Setting lport f2eab801-3c6f-481b-98bf-9751a9a7c6d6 up in Southbound
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:49 compute-0 systemd-udevd[279910]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:06:49 compute-0 NetworkManager[49805]: <info>  [1772273209.8161] device (tapf2eab801-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:06:49 compute-0 NetworkManager[49805]: <info>  [1772273209.8169] device (tapf2eab801-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.815 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e03af7-bd8f-441c-8543-e29dd90593f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 systemd-machined[209480]: New machine qemu-47-instance-0000002b.
Feb 28 10:06:49 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002b.
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.844 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ba586f72-4b35-41a0-9dcd-d8c42129d994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.848 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[88340ca2-dae4-4e75-aa51-1e90246ab337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.869 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4087cb05-e58e-46d2-9c0a-d3006540f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.885 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e30a094c-270c-4456-809e-10356617d85f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279924, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.896 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78340e39-782c-43c3-b3c4-3f27ec1872d8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279926, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279926, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.897 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:49 compute-0 nova_compute[243452]: 2026-02-28 10:06:49.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.901 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.901 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.901 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:49.901 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 493 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 9.9 MiB/s wr, 484 op/s
Feb 28 10:06:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Feb 28 10:06:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Feb 28 10:06:50 compute-0 ceph-mon[76304]: osdmap e165: 3 total, 3 up, 3 in
Feb 28 10:06:50 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.198 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273210.1968577, 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.199 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] VM Started (Lifecycle Event)
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.222 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.228 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273210.1970057, 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.228 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] VM Paused (Lifecycle Event)
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.261 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.268 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.289 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.379 243456 WARNING nova.compute.manager [None req-7b78008d-d5be-4028-8f79-6429331fbc21 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Image not found during snapshot: nova.exception.ImageNotFound: Image 04d681c2-17b7-46f1-b55e-50765e3f3f4c could not be found.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.535 243456 DEBUG nova.compute.manager [req-a06d78f1-d436-46dc-9ce8-79e8e6bd0852 req-bfdf4af4-fdc7-47c7-9ca5-d7e0a90cd3c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.535 243456 DEBUG oslo_concurrency.lockutils [req-a06d78f1-d436-46dc-9ce8-79e8e6bd0852 req-bfdf4af4-fdc7-47c7-9ca5-d7e0a90cd3c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.536 243456 DEBUG oslo_concurrency.lockutils [req-a06d78f1-d436-46dc-9ce8-79e8e6bd0852 req-bfdf4af4-fdc7-47c7-9ca5-d7e0a90cd3c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.536 243456 DEBUG oslo_concurrency.lockutils [req-a06d78f1-d436-46dc-9ce8-79e8e6bd0852 req-bfdf4af4-fdc7-47c7-9ca5-d7e0a90cd3c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.536 243456 DEBUG nova.compute.manager [req-a06d78f1-d436-46dc-9ce8-79e8e6bd0852 req-bfdf4af4-fdc7-47c7-9ca5-d7e0a90cd3c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Processing event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.537 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.541 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273210.5408804, 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.542 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] VM Resumed (Lifecycle Event)
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.544 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.547 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Instance spawned successfully.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.548 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.575 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.584 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.588 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.588 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.589 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.589 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.590 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.591 243456 DEBUG nova.virt.libvirt.driver [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.601 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.643 243456 INFO nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Took 8.25 seconds to spawn the instance on the hypervisor.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.643 243456 DEBUG nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.711 243456 INFO nova.compute.manager [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Took 9.52 seconds to build instance.
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.726 243456 DEBUG oslo_concurrency.lockutils [None req-e319bd4a-7e51-4848-b3d3-ddae61d8f6a3 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:50 compute-0 nova_compute[243452]: 2026-02-28 10:06:50.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:50.860 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:50.861 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:06:51 compute-0 ceph-mon[76304]: pgmap v1156: 305 pgs: 305 active+clean; 493 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 9.9 MiB/s wr, 484 op/s
Feb 28 10:06:51 compute-0 ceph-mon[76304]: osdmap e166: 3 total, 3 up, 3 in
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.390 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.391 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.391 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.391 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.392 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.393 243456 INFO nova.compute.manager [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Terminating instance
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.394 243456 DEBUG nova.compute.manager [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:06:51 compute-0 kernel: tap912a1604-2c (unregistering): left promiscuous mode
Feb 28 10:06:51 compute-0 NetworkManager[49805]: <info>  [1772273211.4287] device (tap912a1604-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:51 compute-0 ovn_controller[146846]: 2026-02-28T10:06:51Z|00314|binding|INFO|Releasing lport 912a1604-2c79-46e1-8e58-77c169237654 from this chassis (sb_readonly=0)
Feb 28 10:06:51 compute-0 ovn_controller[146846]: 2026-02-28T10:06:51Z|00315|binding|INFO|Setting lport 912a1604-2c79-46e1-8e58-77c169237654 down in Southbound
Feb 28 10:06:51 compute-0 ovn_controller[146846]: 2026-02-28T10:06:51Z|00316|binding|INFO|Removing iface tap912a1604-2c ovn-installed in OVS
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.447 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:59:9b 10.100.0.13'], port_security=['fa:16:3e:47:59:9b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b863bc5-8018-491d-82ee-dbc8f40d5aff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3064b28-4cc5-4292-a411-e47f76307d19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2155277af74424e955b1904a947ab64', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f66cfa-9d0f-49b8-9566-0ccda1e69bc8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df65cb92-cb3c-452d-9654-0318a819dc9b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=912a1604-2c79-46e1-8e58-77c169237654) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.449 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 912a1604-2c79-46e1-8e58-77c169237654 in datapath c3064b28-4cc5-4292-a411-e47f76307d19 unbound from our chassis
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.450 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3064b28-4cc5-4292-a411-e47f76307d19, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.451 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bded39ec-fecf-4952-a4df-8700223e40ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.452 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19 namespace which is not needed anymore
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Feb 28 10:06:51 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 8.118s CPU time.
Feb 28 10:06:51 compute-0 systemd-machined[209480]: Machine qemu-45-instance-00000029 terminated.
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [NOTICE]   (279161) : haproxy version is 2.8.14-c23fe91
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [NOTICE]   (279161) : path to executable is /usr/sbin/haproxy
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [WARNING]  (279161) : Exiting Master process...
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [WARNING]  (279161) : Exiting Master process...
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [ALERT]    (279161) : Current worker (279163) exited with code 143 (Terminated)
Feb 28 10:06:51 compute-0 neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19[279146]: [WARNING]  (279161) : All workers exited. Exiting... (0)
Feb 28 10:06:51 compute-0 systemd[1]: libpod-6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c.scope: Deactivated successfully.
Feb 28 10:06:51 compute-0 NetworkManager[49805]: <info>  [1772273211.6136] manager: (tap912a1604-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Feb 28 10:06:51 compute-0 podman[280011]: 2026-02-28 10:06:51.61942913 +0000 UTC m=+0.051331824 container died 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.631 243456 INFO nova.virt.libvirt.driver [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Instance destroyed successfully.
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.632 243456 DEBUG nova.objects.instance [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lazy-loading 'resources' on Instance uuid 2b863bc5-8018-491d-82ee-dbc8f40d5aff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c-userdata-shm.mount: Deactivated successfully.
Feb 28 10:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5391d28ebdc6627030e5170cab093dc9322b6d8357983b907233eb7406ba7c8-merged.mount: Deactivated successfully.
Feb 28 10:06:51 compute-0 podman[280011]: 2026-02-28 10:06:51.66955625 +0000 UTC m=+0.101458934 container cleanup 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.674 243456 DEBUG nova.virt.libvirt.vif [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-63330761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-63330761',id=41,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b2155277af74424e955b1904a947ab64',ramdisk_id='',reservation_id='r-lakmz1rt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest
-ServerTagsTestJSON-1945846185',owner_user_name='tempest-ServerTagsTestJSON-1945846185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:44Z,user_data=None,user_id='d83ef2b77047458db9060496f444a384',uuid=2b863bc5-8018-491d-82ee-dbc8f40d5aff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.675 243456 DEBUG nova.network.os_vif_util [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converting VIF {"id": "912a1604-2c79-46e1-8e58-77c169237654", "address": "fa:16:3e:47:59:9b", "network": {"id": "c3064b28-4cc5-4292-a411-e47f76307d19", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1936405321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2155277af74424e955b1904a947ab64", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912a1604-2c", "ovs_interfaceid": "912a1604-2c79-46e1-8e58-77c169237654", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.675 243456 DEBUG nova.network.os_vif_util [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.676 243456 DEBUG os_vif [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.678 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap912a1604-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.684 243456 INFO os_vif [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:59:9b,bridge_name='br-int',has_traffic_filtering=True,id=912a1604-2c79-46e1-8e58-77c169237654,network=Network(c3064b28-4cc5-4292-a411-e47f76307d19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912a1604-2c')
Feb 28 10:06:51 compute-0 systemd[1]: libpod-conmon-6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c.scope: Deactivated successfully.
Feb 28 10:06:51 compute-0 podman[280048]: 2026-02-28 10:06:51.74640197 +0000 UTC m=+0.044741239 container remove 6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[80899839-7861-4637-80b0-8d26a94b5928]: (4, ('Sat Feb 28 10:06:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19 (6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c)\n6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c\nSat Feb 28 10:06:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19 (6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c)\n6b26203d38d4071d1fc289a4fbc78361d9b4dcbe312513650f9fbee5b9994f9c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c50b38b8-40f8-4f06-9892-fc046fe174ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.755 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3064b28-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 kernel: tapc3064b28-40: left promiscuous mode
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.762 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c207a90-b2e4-40e3-bb56-822a72643d77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.780 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db96fd9d-a1e0-489e-ac5a-ee70154bf97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[224e46be-cb59-4177-8a93-03f120153849]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.801 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6446a95-22ea-4dad-88f1-512f911b6b83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470572, 'reachable_time': 42582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280081, 'error': None, 'target': 'ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dc3064b28\x2d4cc5\x2d4292\x2da411\x2de47f76307d19.mount: Deactivated successfully.
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.803 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c3064b28-4cc5-4292-a411-e47f76307d19 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:06:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:51.804 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5dce1ddf-c356-4e91-abe4-350d4e55bae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.838 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.838 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.869 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.932 243456 INFO nova.virt.libvirt.driver [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Deleting instance files /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff_del
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.934 243456 INFO nova.virt.libvirt.driver [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Deletion of /var/lib/nova/instances/2b863bc5-8018-491d-82ee-dbc8f40d5aff_del complete
Feb 28 10:06:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 493 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 395 op/s
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.972 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.972 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.979 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:06:51 compute-0 nova_compute[243452]: 2026-02-28 10:06:51.979 243456 INFO nova.compute.claims [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.032 243456 INFO nova.compute.manager [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.033 243456 DEBUG oslo.service.loopingcall [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.033 243456 DEBUG nova.compute.manager [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.034 243456 DEBUG nova.network.neutron [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.148 243456 DEBUG nova.compute.manager [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-unplugged-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.148 243456 DEBUG oslo_concurrency.lockutils [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.148 243456 DEBUG oslo_concurrency.lockutils [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.149 243456 DEBUG oslo_concurrency.lockutils [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.149 243456 DEBUG nova.compute.manager [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] No waiting events found dispatching network-vif-unplugged-912a1604-2c79-46e1-8e58-77c169237654 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.149 243456 DEBUG nova.compute.manager [req-9be69263-87f6-4219-8e98-3dddce866557 req-71e324b3-83d0-4204-b043-e072529068a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-unplugged-912a1604-2c79-46e1-8e58-77c169237654 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.193 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.301 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.302 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.302 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.302 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.303 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.305 243456 INFO nova.compute.manager [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Terminating instance
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.306 243456 DEBUG nova.compute.manager [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:06:52 compute-0 kernel: tapa4078728-67 (unregistering): left promiscuous mode
Feb 28 10:06:52 compute-0 NetworkManager[49805]: <info>  [1772273212.3669] device (tapa4078728-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:06:52 compute-0 ovn_controller[146846]: 2026-02-28T10:06:52Z|00317|binding|INFO|Releasing lport a4078728-674b-40cc-afbf-b4fc763283e1 from this chassis (sb_readonly=0)
Feb 28 10:06:52 compute-0 ovn_controller[146846]: 2026-02-28T10:06:52Z|00318|binding|INFO|Setting lport a4078728-674b-40cc-afbf-b4fc763283e1 down in Southbound
Feb 28 10:06:52 compute-0 ovn_controller[146846]: 2026-02-28T10:06:52Z|00319|binding|INFO|Removing iface tapa4078728-67 ovn-installed in OVS
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.381 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:62:fa 10.100.0.5'], port_security=['fa:16:3e:fc:62:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '07582da6-e482-439d-b147-937e74817014', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2ce6ed219d94b3b88c2d2d7001f6c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4baa283-ae4f-4852-9324-45f4fb1de1ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d63e728b-5dfe-4a66-867f-fa0957507bc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4078728-674b-40cc-afbf-b4fc763283e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.383 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4078728-674b-40cc-afbf-b4fc763283e1 in datapath 3a8395bc-d7fc-4457-8cb4-52e2b9305b61 unbound from our chassis
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.385 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.386 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e201cdeb-9cd7-4ee4-b0a4-9a2713452fd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.387 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 namespace which is not needed anymore
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Feb 28 10:06:52 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 10.452s CPU time.
Feb 28 10:06:52 compute-0 systemd-machined[209480]: Machine qemu-46-instance-0000002a terminated.
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [NOTICE]   (279440) : haproxy version is 2.8.14-c23fe91
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [NOTICE]   (279440) : path to executable is /usr/sbin/haproxy
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [WARNING]  (279440) : Exiting Master process...
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [WARNING]  (279440) : Exiting Master process...
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [ALERT]    (279440) : Current worker (279442) exited with code 143 (Terminated)
Feb 28 10:06:52 compute-0 neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61[279434]: [WARNING]  (279440) : All workers exited. Exiting... (0)
Feb 28 10:06:52 compute-0 systemd[1]: libpod-3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de.scope: Deactivated successfully.
Feb 28 10:06:52 compute-0 conmon[279434]: conmon 3fb3061fe19278d07084 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de.scope/container/memory.events
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.531 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 podman[280123]: 2026-02-28 10:06:52.53428884 +0000 UTC m=+0.052929429 container died 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.544 243456 INFO nova.virt.libvirt.driver [-] [instance: 07582da6-e482-439d-b147-937e74817014] Instance destroyed successfully.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.545 243456 DEBUG nova.objects.instance [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lazy-loading 'resources' on Instance uuid 07582da6-e482-439d-b147-937e74817014 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.569 243456 DEBUG nova.virt.libvirt.vif [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1018606413',display_name='tempest-ImagesTestJSON-server-1018606413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1018606413',id=42,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a2ce6ed219d94b3b88c2d2d7001f6c3a',ramdisk_id='',reservation_id='r-xv5jol0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-2059286278',owner_user_name='tempest-ImagesTestJSON-2059286278-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data=None,user_id='163582c3e6a34c87b52f82ac4f189f77',uuid=07582da6-e482-439d-b147-937e74817014,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.570 243456 DEBUG nova.network.os_vif_util [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converting VIF {"id": "a4078728-674b-40cc-afbf-b4fc763283e1", "address": "fa:16:3e:fc:62:fa", "network": {"id": "3a8395bc-d7fc-4457-8cb4-52e2b9305b61", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1238669346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a2ce6ed219d94b3b88c2d2d7001f6c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4078728-67", "ovs_interfaceid": "a4078728-674b-40cc-afbf-b4fc763283e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.571 243456 DEBUG nova.network.os_vif_util [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.572 243456 DEBUG os_vif [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.574 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.574 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4078728-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de-userdata-shm.mount: Deactivated successfully.
Feb 28 10:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ead57a8a8f6ad15e0f56df06d0bc6e6ef8a8568bf8f95b75a1fb2f55af30a5d0-merged.mount: Deactivated successfully.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.583 243456 INFO os_vif [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:62:fa,bridge_name='br-int',has_traffic_filtering=True,id=a4078728-674b-40cc-afbf-b4fc763283e1,network=Network(3a8395bc-d7fc-4457-8cb4-52e2b9305b61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4078728-67')
Feb 28 10:06:52 compute-0 podman[280123]: 2026-02-28 10:06:52.587009742 +0000 UTC m=+0.105650311 container cleanup 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:06:52 compute-0 systemd[1]: libpod-conmon-3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de.scope: Deactivated successfully.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.635 243456 DEBUG nova.network.neutron [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.644 243456 DEBUG nova.compute.manager [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.646 243456 DEBUG oslo_concurrency.lockutils [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.646 243456 DEBUG oslo_concurrency.lockutils [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.646 243456 DEBUG oslo_concurrency.lockutils [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.647 243456 DEBUG nova.compute.manager [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.647 243456 WARNING nova.compute.manager [req-9e4dc2dc-be3f-4d6a-8611-e3b082680f70 req-7c5ab436-40ba-4f91-9273-19e6aada1caf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 for instance with vm_state active and task_state None.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.652 243456 INFO nova.compute.manager [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Took 0.62 seconds to deallocate network for instance.
Feb 28 10:06:52 compute-0 podman[280169]: 2026-02-28 10:06:52.655465347 +0000 UTC m=+0.050810920 container remove 3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.660 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd116ed-61cf-407c-be5f-13a9f29042be]: (4, ('Sat Feb 28 10:06:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de)\n3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de\nSat Feb 28 10:06:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 (3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de)\n3fb3061fe19278d07084825c09410d4313d938630f28b0871147a3023794c0de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af215605-a6d5-4715-a29a-19ce20daa458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.664 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8395bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 kernel: tap3a8395bc-d0: left promiscuous mode
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.669 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e07e104-bc60-4115-b4d6-ffadd974271c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.679 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da07c74a-e8d9-454e-b1a8-77dc45c76488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc389ef-3edc-443c-8e2e-02ef27d87d33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.698 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24164655-d466-47be-b5d4-3c130d242946]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 470872, 'reachable_time': 32563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280195, 'error': None, 'target': 'ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d3a8395bc\x2dd7fc\x2d4457\x2d8cb4\x2d52e2b9305b61.mount: Deactivated successfully.
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.700 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3a8395bc-d7fc-4457-8cb4-52e2b9305b61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:06:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:52.700 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[42d5d70c-56a4-4b34-bb7b-3b6cdc2e9518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.708 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778663867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.763 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.769 243456 DEBUG nova.compute.provider_tree [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.787 243456 DEBUG nova.scheduler.client.report [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.812 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.813 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.816 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.865 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.867 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.887 243456 INFO nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.894 243456 INFO nova.virt.libvirt.driver [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Deleting instance files /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014_del
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.895 243456 INFO nova.virt.libvirt.driver [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Deletion of /var/lib/nova/instances/07582da6-e482-439d-b147-937e74817014_del complete
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.920 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.974 243456 INFO nova.compute.manager [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.975 243456 DEBUG oslo.service.loopingcall [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.976 243456 DEBUG oslo_concurrency.processutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.995 243456 DEBUG nova.compute.manager [-] [instance: 07582da6-e482-439d-b147-937e74817014] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:06:52 compute-0 nova_compute[243452]: 2026-02-28 10:06:52.996 243456 DEBUG nova.network.neutron [-] [instance: 07582da6-e482-439d-b147-937e74817014] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.060 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.065 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.066 243456 INFO nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Creating image(s)
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.095 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:53 compute-0 ceph-mon[76304]: pgmap v1158: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 493 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 395 op/s
Feb 28 10:06:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2778663867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.135 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.169 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.175 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.206 243456 DEBUG nova.policy [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9a7366cce344abcb7310041ed02610a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5df107d99f104138b864f28cf3b749ad', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.262 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.263 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.263 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.264 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.295 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.300 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af024638-459f-45c8-b52b-7d9ec937745a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295385666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.527 243456 DEBUG oslo_concurrency.processutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.533 243456 DEBUG nova.compute.provider_tree [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.553 243456 DEBUG nova.scheduler.client.report [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.587 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af024638-459f-45c8-b52b-7d9ec937745a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.620 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.657 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] resizing rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.735 243456 DEBUG nova.objects.instance [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'migration_context' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.768 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.769 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Ensure instance console log exists: /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.769 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.769 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.769 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.788 243456 INFO nova.scheduler.client.report [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Deleted allocations for instance 2b863bc5-8018-491d-82ee-dbc8f40d5aff
Feb 28 10:06:53 compute-0 nova_compute[243452]: 2026-02-28 10:06:53.847 243456 DEBUG oslo_concurrency.lockutils [None req-2038fa46-b384-41b2-bb48-80882cf7085f d83ef2b77047458db9060496f444a384 b2155277af74424e955b1904a947ab64 - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 466 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 229 op/s
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.850 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Successfully created port: 747bc967-2869-43ae-bc69-d818504d5496 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.854 243456 DEBUG nova.network.neutron [-] [instance: 07582da6-e482-439d-b147-937e74817014] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.856 243456 DEBUG nova.compute.manager [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.856 243456 DEBUG oslo_concurrency.lockutils [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.857 243456 DEBUG oslo_concurrency.lockutils [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.857 243456 DEBUG oslo_concurrency.lockutils [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "2b863bc5-8018-491d-82ee-dbc8f40d5aff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.857 243456 DEBUG nova.compute.manager [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] No waiting events found dispatching network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.857 243456 WARNING nova.compute.manager [req-2b686d29-50e4-4c28-b361-20587cc12442 req-90b27f14-460d-4893-b941-ff45dcc36489 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received unexpected event network-vif-plugged-912a1604-2c79-46e1-8e58-77c169237654 for instance with vm_state deleted and task_state None.
Feb 28 10:06:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/295385666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.860 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Received event network-vif-deleted-912a1604-2c79-46e1-8e58-77c169237654 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.860 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-unplugged-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.860 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.860 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.861 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.861 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] No waiting events found dispatching network-vif-unplugged-a4078728-674b-40cc-afbf-b4fc763283e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.861 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-unplugged-a4078728-674b-40cc-afbf-b4fc763283e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.861 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.861 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.862 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.862 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.862 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.889 243456 INFO nova.compute.manager [-] [instance: 07582da6-e482-439d-b147-937e74817014] Took 1.89 seconds to deallocate network for instance.
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.938 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:54 compute-0 nova_compute[243452]: 2026-02-28 10:06:54.938 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.092 243456 DEBUG oslo_concurrency.processutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:06:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016411674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.689 243456 DEBUG oslo_concurrency.processutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.697 243456 DEBUG nova.compute.provider_tree [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.717 243456 DEBUG nova.scheduler.client.report [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.753 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.777 243456 INFO nova.scheduler.client.report [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Deleted allocations for instance 07582da6-e482-439d-b147-937e74817014
Feb 28 10:06:55 compute-0 nova_compute[243452]: 2026-02-28 10:06:55.842 243456 DEBUG oslo_concurrency.lockutils [None req-13c1f45c-3e9b-40e1-8f50-a32c09242cf6 163582c3e6a34c87b52f82ac4f189f77 a2ce6ed219d94b3b88c2d2d7001f6c3a - - default default] Lock "07582da6-e482-439d-b147-937e74817014" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:55 compute-0 ceph-mon[76304]: pgmap v1159: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 466 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 229 op/s
Feb 28 10:06:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2016411674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:06:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:55.864 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 412 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.7 MiB/s wr, 369 op/s
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.033 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.034 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.036 243456 DEBUG nova.objects.instance [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.126 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Successfully updated port: 747bc967-2869-43ae-bc69-d818504d5496 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.140 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.140 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquired lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.141 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.340 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.598 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.599 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.617 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.618 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.618 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing instance network info cache due to event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.619 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.619 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.619 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:56 compute-0 ceph-mon[76304]: pgmap v1160: 305 pgs: 305 active+clean; 412 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.7 MiB/s wr, 369 op/s
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.908 243456 DEBUG nova.objects.instance [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:56 compute-0 nova_compute[243452]: 2026-02-28 10:06:56.927 243456 DEBUG nova.network.neutron [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:06:57 compute-0 podman[280410]: 2026-02-28 10:06:57.193949688 +0000 UTC m=+0.121751604 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:06:57 compute-0 podman[280409]: 2026-02-28 10:06:57.218055486 +0000 UTC m=+0.145813050 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.315 243456 DEBUG nova.policy [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.549 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.549 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing instance network info cache due to event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.550 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:06:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Feb 28 10:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:06:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.878 243456 DEBUG nova.network.neutron [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:57 compute-0 ovn_controller[146846]: 2026-02-28T10:06:57Z|00320|binding|INFO|Releasing lport 610498de-6d7e-49bb-b4f4-0bb4f081afde from this chassis (sb_readonly=0)
Feb 28 10:06:57 compute-0 ovn_controller[146846]: 2026-02-28T10:06:57Z|00321|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.902 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Releasing lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.903 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance network_info: |[{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.906 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start _get_guest_xml network_info=[{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.921 243456 WARNING nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.926 243456 DEBUG nova.virt.libvirt.host [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.927 243456 DEBUG nova.virt.libvirt.host [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.930 243456 DEBUG nova.virt.libvirt.host [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.931 243456 DEBUG nova.virt.libvirt.host [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.931 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.932 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.932 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.932 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.933 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.933 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.933 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.933 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.934 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.934 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.934 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.934 243456 DEBUG nova.virt.hardware [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.938 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 349 op/s
Feb 28 10:06:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Feb 28 10:06:57 compute-0 nova_compute[243452]: 2026-02-28 10:06:57.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:57 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.131 243456 DEBUG nova.network.neutron [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Successfully updated port: 97c93122-26c8-464e-b452-aaa22188a591 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.150 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.151 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.151 243456 DEBUG nova.network.neutron [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.244 243456 DEBUG nova.compute.manager [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.244 243456 DEBUG nova.compute.manager [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-97c93122-26c8-464e-b452-aaa22188a591. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.245 243456 DEBUG oslo_concurrency.lockutils [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.350 243456 WARNING nova.network.neutron [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.430 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updated VIF entry in instance network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.431 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494144773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.489 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.514 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.518 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.641 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.642 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.643 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07582da6-e482-439d-b147-937e74817014-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.643 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.643 243456 DEBUG oslo_concurrency.lockutils [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07582da6-e482-439d-b147-937e74817014-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.643 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] No waiting events found dispatching network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.643 243456 WARNING nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received unexpected event network-vif-plugged-a4078728-674b-40cc-afbf-b4fc763283e1 for instance with vm_state active and task_state deleting.
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.644 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Received event network-vif-deleted-a4078728-674b-40cc-afbf-b4fc763283e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.644 243456 INFO nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Neutron deleted interface a4078728-674b-40cc-afbf-b4fc763283e1; detaching it from the instance and deleting it from the info cache
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.644 243456 DEBUG nova.network.neutron [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.646 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.646 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:06:58 compute-0 nova_compute[243452]: 2026-02-28 10:06:58.728 243456 DEBUG nova.compute.manager [req-187ed2e3-a0e7-4f88-b18d-5e7910a8299e req-fac7f428-b851-41e4-8bec-151fbc10bacc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07582da6-e482-439d-b147-937e74817014] Detach interface failed, port_id=a4078728-674b-40cc-afbf-b4fc763283e1, reason: Instance 07582da6-e482-439d-b147-937e74817014 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:06:58 compute-0 ceph-mon[76304]: pgmap v1161: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 349 op/s
Feb 28 10:06:58 compute-0 ceph-mon[76304]: osdmap e167: 3 total, 3 up, 3 in
Feb 28 10:06:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2494144773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:06:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432601067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.042 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.044 243456 DEBUG nova.virt.libvirt.vif [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJ
SON-392060184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:52Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.044 243456 DEBUG nova.network.os_vif_util [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.045 243456 DEBUG nova.network.os_vif_util [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.047 243456 DEBUG nova.objects.instance [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'pci_devices' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.080 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <uuid>af024638-459f-45c8-b52b-7d9ec937745a</uuid>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <name>instance-0000002c</name>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1579937397</nova:name>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:06:57</nova:creationTime>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:user uuid="c9a7366cce344abcb7310041ed02610a">tempest-SecurityGroupsTestJSON-392060184-project-member</nova:user>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:project uuid="5df107d99f104138b864f28cf3b749ad">tempest-SecurityGroupsTestJSON-392060184</nova:project>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <nova:port uuid="747bc967-2869-43ae-bc69-d818504d5496">
Feb 28 10:06:59 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="serial">af024638-459f-45c8-b52b-7d9ec937745a</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="uuid">af024638-459f-45c8-b52b-7d9ec937745a</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af024638-459f-45c8-b52b-7d9ec937745a_disk">
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af024638-459f-45c8-b52b-7d9ec937745a_disk.config">
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:06:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:1e:b6:8f"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <target dev="tap747bc967-28"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/console.log" append="off"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:06:59 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:06:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:06:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:06:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:06:59 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.081 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Preparing to wait for external event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.082 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.082 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.083 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.084 243456 DEBUG nova.virt.libvirt.vif [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityG
roupsTestJSON-392060184-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:06:52Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.084 243456 DEBUG nova.network.os_vif_util [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.085 243456 DEBUG nova.network.os_vif_util [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.085 243456 DEBUG os_vif [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.092 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap747bc967-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.093 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap747bc967-28, col_values=(('external_ids', {'iface-id': '747bc967-2869-43ae-bc69-d818504d5496', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:b6:8f', 'vm-uuid': 'af024638-459f-45c8-b52b-7d9ec937745a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:59 compute-0 NetworkManager[49805]: <info>  [1772273219.0965] manager: (tap747bc967-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.103 243456 INFO os_vif [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28')
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.250 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.251 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.251 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] No VIF found with MAC fa:16:3e:1e:b6:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.252 243456 INFO nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Using config drive
Feb 28 10:06:59 compute-0 nova_compute[243452]: 2026-02-28 10:06:59.282 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:06:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 282 op/s
Feb 28 10:07:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2432601067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.118 243456 INFO nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Creating config drive at /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.122 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpopijcldq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.276 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpopijcldq" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.304 243456 DEBUG nova.storage.rbd_utils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] rbd image af024638-459f-45c8-b52b-7d9ec937745a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.311 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config af024638-459f-45c8-b52b-7d9ec937745a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:00 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.987 243456 DEBUG oslo_concurrency.processutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config af024638-459f-45c8-b52b-7d9ec937745a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:00 compute-0 nova_compute[243452]: 2026-02-28 10:07:00.989 243456 INFO nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Deleting local config drive /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/disk.config because it was imported into RBD.
Feb 28 10:07:01 compute-0 kernel: tap747bc967-28: entered promiscuous mode
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.0402] manager: (tap747bc967-28): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00322|binding|INFO|Claiming lport 747bc967-2869-43ae-bc69-d818504d5496 for this chassis.
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00323|binding|INFO|747bc967-2869-43ae-bc69-d818504d5496: Claiming fa:16:3e:1e:b6:8f 10.100.0.6
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.049 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:b6:8f 10.100.0.6'], port_security=['fa:16:3e:1e:b6:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af024638-459f-45c8-b52b-7d9ec937745a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '2', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=747bc967-2869-43ae-bc69-d818504d5496) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 747bc967-2869-43ae-bc69-d818504d5496 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 bound to our chassis
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00324|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 ovn-installed in OVS
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00325|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 up in Southbound
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.052 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d82b432-4c1b-4e48-8ba9-fb3faa3002a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ceph-mon[76304]: pgmap v1163: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 282 op/s
Feb 28 10:07:01 compute-0 systemd-machined[209480]: New machine qemu-48-instance-0000002c.
Feb 28 10:07:01 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002c.
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.093 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d517fb-a7fd-4805-a341-7951fed54348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 systemd-udevd[280597]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.097 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e158ef9-8a31-4de1-a9df-1ce1d7c2ceb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.1088] device (tap747bc967-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.1098] device (tap747bc967-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.120 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ce108ade-f86b-4b98-8253-ff09d1c93318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.134 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7395e2-869c-4a0e-a1a1-12a95ab2b70a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280607, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3399c6da-d9fc-4b08-8fb4-402aa21bd526]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469127, 'tstamp': 469127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280608, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469130, 'tstamp': 469130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280608, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.154 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf973a3f2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf973a3f2-c0, col_values=(('external_ids', {'iface-id': '610498de-6d7e-49bb-b4f4-0bb4f081afde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.159 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updated VIF entry in instance network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.159 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.175 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.175 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.176 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.176 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.384 243456 DEBUG nova.network.neutron [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.411 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.413 243456 DEBUG oslo_concurrency.lockutils [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.413 243456 DEBUG nova.network.neutron [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port 97c93122-26c8-464e-b452-aaa22188a591 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.416 243456 DEBUG nova.virt.libvirt.vif [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.416 243456 DEBUG nova.network.os_vif_util [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.417 243456 DEBUG nova.network.os_vif_util [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.417 243456 DEBUG os_vif [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.418 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.419 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.421 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97c93122-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.422 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97c93122-26, col_values=(('external_ids', {'iface-id': '97c93122-26c8-464e-b452-aaa22188a591', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:61:f1', 'vm-uuid': 'c4dce6af-958c-4c5a-890b-469443cee915'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.4247] manager: (tap97c93122-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.428 243456 INFO os_vif [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26')
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.429 243456 DEBUG nova.virt.libvirt.vif [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.430 243456 DEBUG nova.network.os_vif_util [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.430 243456 DEBUG nova.network.os_vif_util [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.437 243456 DEBUG nova.virt.libvirt.guest [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:01 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:07:01 compute-0 kernel: tap97c93122-26: entered promiscuous mode
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.4469] manager: (tap97c93122-26): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00326|binding|INFO|Claiming lport 97c93122-26c8-464e-b452-aaa22188a591 for this chassis.
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00327|binding|INFO|97c93122-26c8-464e-b452-aaa22188a591: Claiming fa:16:3e:81:61:f1 10.100.0.3
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.456 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:61:f1 10.100.0.3'], port_security=['fa:16:3e:81:61:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4dce6af-958c-4c5a-890b-469443cee915', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=97c93122-26c8-464e-b452-aaa22188a591) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00328|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 ovn-installed in OVS
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00329|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 up in Southbound
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.457 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 97c93122-26c8-464e-b452-aaa22188a591 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.459 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.4622] device (tap97c93122-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:07:01 compute-0 NetworkManager[49805]: <info>  [1772273221.4628] device (tap97c93122-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b877402-e032-4ad4-a3d8-22650d3e88bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.507 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d04a3af7-4d9c-4efa-b18d-8bb3e0dc2a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e308aaf-1b0f-48f1-bb13-5817dd1e11a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.539 243456 DEBUG nova.virt.libvirt.driver [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.539 243456 DEBUG nova.virt.libvirt.driver [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.540 243456 DEBUG nova.virt.libvirt.driver [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:9f:3f:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.540 243456 DEBUG nova.virt.libvirt.driver [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:81:61:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.541 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[522e69f8-a04d-47c9-a812-96cdea118b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.553 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac566735-496b-4229-8093-c5c5cab03c2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280621, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.562 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77405c7f-eb14-4504-a408-64ab4ee02874]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280622, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280622, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.564 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.567 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.567 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.567 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:01.568 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.572 243456 DEBUG nova.virt.libvirt.guest [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1836797536</nova:name>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:01</nova:creationTime>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:port uuid="bd50336f-b10b-46c9-91bd-81e086b2e80e">
Feb 28 10:07:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:01 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:01 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:01 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:01 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:01 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.602 243456 DEBUG oslo_concurrency.lockutils [None req-c8b9b273-8014-4194-ae68-798f0f058614 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:95:ff 10.100.0.8
Feb 28 10:07:01 compute-0 ovn_controller[146846]: 2026-02-28T10:07:01Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:95:ff 10.100.0.8
Feb 28 10:07:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 238 op/s
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.951 243456 DEBUG nova.compute.manager [req-1b14b175-f06f-4a6b-a8c9-672fc4be50b4 req-245edf01-3075-4fb8-b954-d13d2b8c1584 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.951 243456 DEBUG oslo_concurrency.lockutils [req-1b14b175-f06f-4a6b-a8c9-672fc4be50b4 req-245edf01-3075-4fb8-b954-d13d2b8c1584 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.952 243456 DEBUG oslo_concurrency.lockutils [req-1b14b175-f06f-4a6b-a8c9-672fc4be50b4 req-245edf01-3075-4fb8-b954-d13d2b8c1584 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.952 243456 DEBUG oslo_concurrency.lockutils [req-1b14b175-f06f-4a6b-a8c9-672fc4be50b4 req-245edf01-3075-4fb8-b954-d13d2b8c1584 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:01 compute-0 nova_compute[243452]: 2026-02-28 10:07:01.953 243456 DEBUG nova.compute.manager [req-1b14b175-f06f-4a6b-a8c9-672fc4be50b4 req-245edf01-3075-4fb8-b954-d13d2b8c1584 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Processing event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.458 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.459 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.495 243456 DEBUG nova.objects.instance [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.700 243456 DEBUG nova.network.neutron [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port 97c93122-26c8-464e-b452-aaa22188a591. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.700 243456 DEBUG nova.network.neutron [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.810 243456 DEBUG nova.virt.libvirt.vif [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.811 243456 DEBUG nova.network.os_vif_util [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.812 243456 DEBUG nova.network.os_vif_util [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.815 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.820 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.824 243456 DEBUG nova.virt.libvirt.driver [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Attempting to detach device tap97c93122-26 from instance c4dce6af-958c-4c5a-890b-469443cee915 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.825 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:02 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:07:02 compute-0 sudo[280623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.831 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:02 compute-0 sudo[280623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.835 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface>not found in domain: <domain type='kvm' id='44'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <name>instance-00000028</name>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <uuid>c4dce6af-958c-4c5a-890b-469443cee915</uuid>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1836797536</nova:name>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:01</nova:creationTime>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:02 compute-0 sudo[280623]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:port uuid="bd50336f-b10b-46c9-91bd-81e086b2e80e">
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='serial'>c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='uuid'>c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/c4dce6af-958c-4c5a-890b-469443cee915_disk' index='2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/c4dce6af-958c-4c5a-890b-469443cee915_disk.config' index='1'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:9f:3f:66'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='tapbd50336f-b1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:81:61:f1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='tap97c93122-26'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log' append='off'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </target>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log' append='off'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </console>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c353,c926</label>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c353,c926</imagelabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:02 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.837 243456 INFO nova.virt.libvirt.driver [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap97c93122-26 from instance c4dce6af-958c-4c5a-890b-469443cee915 from the persistent domain config.
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.837 243456 DEBUG nova.virt.libvirt.driver [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] (1/8): Attempting to detach device tap97c93122-26 with device alias net1 from instance c4dce6af-958c-4c5a-890b-469443cee915 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.837 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:02 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:07:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:02 compute-0 kernel: tap97c93122-26 (unregistering): left promiscuous mode
Feb 28 10:07:02 compute-0 NetworkManager[49805]: <info>  [1772273222.9143] device (tap97c93122-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.914 243456 DEBUG oslo_concurrency.lockutils [req-fad2cb0d-1e25-42fc-aa19-676dac11b28a req-c9e773b6-ed0c-4323-a4c3-43da92e3883c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.916 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.916 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.922 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772273222.9217808, c4dce6af-958c-4c5a-890b-469443cee915 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:02 compute-0 ovn_controller[146846]: 2026-02-28T10:07:02Z|00330|binding|INFO|Releasing lport 97c93122-26c8-464e-b452-aaa22188a591 from this chassis (sb_readonly=0)
Feb 28 10:07:02 compute-0 ovn_controller[146846]: 2026-02-28T10:07:02Z|00331|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 down in Southbound
Feb 28 10:07:02 compute-0 ovn_controller[146846]: 2026-02-28T10:07:02Z|00332|binding|INFO|Removing iface tap97c93122-26 ovn-installed in OVS
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.929 243456 DEBUG nova.virt.libvirt.driver [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Start waiting for the detach event from libvirt for device tap97c93122-26 with device alias net1 for instance c4dce6af-958c-4c5a-890b-469443cee915 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.929 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:02 compute-0 sudo[280648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.936 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface>not found in domain: <domain type='kvm' id='44'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <name>instance-00000028</name>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <uuid>c4dce6af-958c-4c5a-890b-469443cee915</uuid>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1836797536</nova:name>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:01</nova:creationTime>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:port uuid="bd50336f-b10b-46c9-91bd-81e086b2e80e">
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='serial'>c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='uuid'>c4dce6af-958c-4c5a-890b-469443cee915</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/c4dce6af-958c-4c5a-890b-469443cee915_disk' index='2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/c4dce6af-958c-4c5a-890b-469443cee915_disk.config' index='1'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:07:02 compute-0 sudo[280648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:9f:3f:66'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target dev='tapbd50336f-b1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log' append='off'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       </target>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915/console.log' append='off'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </console>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c353,c926</label>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c353,c926</imagelabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:02 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.938 243456 INFO nova.virt.libvirt.driver [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap97c93122-26 from instance c4dce6af-958c-4c5a-890b-469443cee915 from the live domain config.
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.939 243456 DEBUG nova.virt.libvirt.vif [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.939 243456 DEBUG nova.network.os_vif_util [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.940 243456 DEBUG nova.network.os_vif_util [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.938 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:61:f1 10.100.0.3'], port_security=['fa:16:3e:81:61:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4dce6af-958c-4c5a-890b-469443cee915', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=97c93122-26c8-464e-b452-aaa22188a591) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.940 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 97c93122-26c8-464e-b452-aaa22188a591 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.940 243456 DEBUG os_vif [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.943 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.943 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.944 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97c93122-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.951 243456 INFO os_vif [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26')
Feb 28 10:07:02 compute-0 nova_compute[243452]: 2026-02-28 10:07:02.951 243456 DEBUG nova.virt.libvirt.guest [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1836797536</nova:name>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:02</nova:creationTime>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     <nova:port uuid="bd50336f-b10b-46c9-91bd-81e086b2e80e">
Feb 28 10:07:02 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:07:02 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:02 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:02 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:02 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.962 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5402a19-70b4-440e-9ac4-ee7efa7c1646]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.986 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a70998-56ff-477d-b3ec-a45753566c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:02.990 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0d28509d-6bdf-489d-8d77-13129e3d0674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2794e822-d096-44ec-abb2-aecccf8fd796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[864547d4-de9a-4625-bcf9-979c20957800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280682, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7252e94-23ba-4841-b4b0-c5a2f6c965c6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280683, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280683, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:03.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:03 compute-0 ceph-mon[76304]: pgmap v1164: 305 pgs: 305 active+clean; 405 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 238 op/s
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.396 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.396 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273223.3954442, af024638-459f-45c8-b52b-7d9ec937745a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.397 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Started (Lifecycle Event)
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.399 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.403 243456 INFO nova.virt.libvirt.driver [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance spawned successfully.
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.403 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.421 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.423 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.431 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.431 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.431 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.432 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.432 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.432 243456 DEBUG nova.virt.libvirt.driver [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:03 compute-0 sudo[280648]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.455 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.456 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273223.3973017, af024638-459f-45c8-b52b-7d9ec937745a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.456 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Paused (Lifecycle Event)
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.486 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.489 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273223.3993013, af024638-459f-45c8-b52b-7d9ec937745a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.490 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Resumed (Lifecycle Event)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.499 243456 INFO nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Took 10.44 seconds to spawn the instance on the hypervisor.
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.499 243456 DEBUG nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.514 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:07:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:07:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.518 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.551 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:03 compute-0 sudo[280754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:07:03 compute-0 sudo[280754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:03 compute-0 sudo[280754]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.575 243456 INFO nova.compute.manager [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Took 11.64 seconds to build instance.
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:03 compute-0 nova_compute[243452]: 2026-02-28 10:07:03.598 243456 DEBUG oslo_concurrency.lockutils [None req-65ab2605-7ed3-4773-ba87-7dd31cdccb3b c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:03 compute-0 sudo[280779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:07:03 compute-0 sudo[280779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:03 compute-0 podman[280818]: 2026-02-28 10:07:03.921118249 +0000 UTC m=+0.060869272 container create 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:07:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 419 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 195 op/s
Feb 28 10:07:03 compute-0 systemd[1]: Started libpod-conmon-7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c.scope.
Feb 28 10:07:03 compute-0 podman[280818]: 2026-02-28 10:07:03.894761348 +0000 UTC m=+0.034512461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:04 compute-0 podman[280818]: 2026-02-28 10:07:04.020431401 +0000 UTC m=+0.160182444 container init 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:07:04 compute-0 podman[280818]: 2026-02-28 10:07:04.025882005 +0000 UTC m=+0.165633028 container start 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:07:04 compute-0 podman[280818]: 2026-02-28 10:07:04.030648519 +0000 UTC m=+0.170399552 container attach 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:07:04 compute-0 silly_almeida[280834]: 167 167
Feb 28 10:07:04 compute-0 systemd[1]: libpod-7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c.scope: Deactivated successfully.
Feb 28 10:07:04 compute-0 conmon[280834]: conmon 7d5f0e8edb78e982a1fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c.scope/container/memory.events
Feb 28 10:07:04 compute-0 podman[280818]: 2026-02-28 10:07:04.034336792 +0000 UTC m=+0.174087845 container died 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.063 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.065 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.066 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.066 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.066 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.066 243456 WARNING nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state active and task_state None.
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.066 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.067 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.067 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.067 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.067 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.068 243456 WARNING nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.068 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.068 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7771426b24da1462b2b80fbe582c5cfc0efa14d08b50c00dff94174246a720-merged.mount: Deactivated successfully.
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.068 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.071 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.071 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.071 243456 WARNING nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.072 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.072 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.072 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.072 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.072 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.073 243456 WARNING nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.073 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.073 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.073 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.073 243456 DEBUG oslo_concurrency.lockutils [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.074 243456 DEBUG nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.074 243456 WARNING nova.compute.manager [req-bb61a73c-f808-4724-984d-1f44bfa1a99b req-136d8ced-ebda-4447-9eed-af44e6d8eccf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:07:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:07:04 compute-0 podman[280818]: 2026-02-28 10:07:04.088843355 +0000 UTC m=+0.228594368 container remove 7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:07:04 compute-0 systemd[1]: libpod-conmon-7d5f0e8edb78e982a1fed097586a8c581829f11e35630d4169be4ac56414b09c.scope: Deactivated successfully.
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.286542153 +0000 UTC m=+0.048399702 container create 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:07:04 compute-0 systemd[1]: Started libpod-conmon-3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b.scope.
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.260510801 +0000 UTC m=+0.022368360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.412422552 +0000 UTC m=+0.174280091 container init 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.418112202 +0000 UTC m=+0.179969741 container start 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.422850935 +0000 UTC m=+0.184708494 container attach 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:07:04 compute-0 nova_compute[243452]: 2026-02-28 10:07:04.832 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:04 compute-0 clever_shtern[280873]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:07:04 compute-0 clever_shtern[280873]: --> All data devices are unavailable
Feb 28 10:07:04 compute-0 systemd[1]: libpod-3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b.scope: Deactivated successfully.
Feb 28 10:07:04 compute-0 conmon[280873]: conmon 3cf512f854b3a0dd1011 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b.scope/container/memory.events
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.923126609 +0000 UTC m=+0.684984148 container died 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a9220ea7ce9aa68f2fbdd44fc41afc585f56c81739ddfc2ced977b952279ee-merged.mount: Deactivated successfully.
Feb 28 10:07:04 compute-0 podman[280857]: 2026-02-28 10:07:04.98325648 +0000 UTC m=+0.745114029 container remove 3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shtern, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:07:04 compute-0 systemd[1]: libpod-conmon-3cf512f854b3a0dd1011df38ffadfcc6365d9406daee72d6938f8ebbc1d4d53b.scope: Deactivated successfully.
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.043 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.045 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:05 compute-0 sudo[280779]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.066 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:07:05 compute-0 ceph-mon[76304]: pgmap v1165: 305 pgs: 305 active+clean; 419 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.9 MiB/s wr, 195 op/s
Feb 28 10:07:05 compute-0 sudo[280906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:07:05 compute-0 sudo[280906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:05 compute-0 sudo[280906]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.155 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.155 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.166 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.167 243456 INFO nova.compute.claims [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.170 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.171 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:05 compute-0 sudo[280931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:07:05 compute-0 sudo[280931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.200 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.201 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-changed-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.201 243456 DEBUG nova.compute.manager [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing instance network info cache due to event network-changed-747bc967-2869-43ae-bc69-d818504d5496. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.202 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.202 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.202 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing network info cache for port 747bc967-2869-43ae-bc69-d818504d5496 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.204 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.204 243456 DEBUG nova.network.neutron [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.394 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.543170411 +0000 UTC m=+0.062701414 container create bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.515441851 +0000 UTC m=+0.034972624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:05 compute-0 systemd[1]: Started libpod-conmon-bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98.scope.
Feb 28 10:07:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:05 compute-0 ovn_controller[146846]: 2026-02-28T10:07:05Z|00333|binding|INFO|Releasing lport 610498de-6d7e-49bb-b4f4-0bb4f081afde from this chassis (sb_readonly=0)
Feb 28 10:07:05 compute-0 ovn_controller[146846]: 2026-02-28T10:07:05Z|00334|binding|INFO|Releasing lport 76417fb5-53fd-4986-92c0-fafddb45b2f5 from this chassis (sb_readonly=0)
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.66443622 +0000 UTC m=+0.183967003 container init bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.67083443 +0000 UTC m=+0.190365203 container start bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.674614576 +0000 UTC m=+0.194145329 container attach bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:07:05 compute-0 naughty_chaplygin[281005]: 167 167
Feb 28 10:07:05 compute-0 systemd[1]: libpod-bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98.scope: Deactivated successfully.
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.679555495 +0000 UTC m=+0.199086248 container died bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:07:05 compute-0 nova_compute[243452]: 2026-02-28 10:07:05.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0166d963f5d89a1831a646162e6d3a1d1b4fc91044b7d9063a8f82955121b8-merged.mount: Deactivated successfully.
Feb 28 10:07:05 compute-0 podman[280970]: 2026-02-28 10:07:05.715218887 +0000 UTC m=+0.234749640 container remove bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaplygin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:07:05 compute-0 systemd[1]: libpod-conmon-bb5f09a1d60275e7752ee6e1eef6988da9f2d37ca47e052f8478bb1ab4d81d98.scope: Deactivated successfully.
Feb 28 10:07:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 438 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 160 op/s
Feb 28 10:07:05 compute-0 podman[281027]: 2026-02-28 10:07:05.947360153 +0000 UTC m=+0.078343534 container create f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:07:05 compute-0 systemd[1]: Started libpod-conmon-f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d.scope.
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:05.910739023 +0000 UTC m=+0.041722444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419487776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.033 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b65eed13513205ebaae5b1ad77c7551e6aa27341b074c67b53c570302a96903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b65eed13513205ebaae5b1ad77c7551e6aa27341b074c67b53c570302a96903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b65eed13513205ebaae5b1ad77c7551e6aa27341b074c67b53c570302a96903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b65eed13513205ebaae5b1ad77c7551e6aa27341b074c67b53c570302a96903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.046 243456 DEBUG nova.compute.provider_tree [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:06.057101318 +0000 UTC m=+0.188084699 container init f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:06.065459973 +0000 UTC m=+0.196443344 container start f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.066 243456 DEBUG nova.scheduler.client.report [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.102 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.103 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:06.1663792 +0000 UTC m=+0.297362561 container attach f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:07:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/419487776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.171 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.225 243456 INFO nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.341 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]: {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     "0": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "devices": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "/dev/loop3"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             ],
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_name": "ceph_lv0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_size": "21470642176",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "name": "ceph_lv0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "tags": {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_name": "ceph",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.crush_device_class": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.encrypted": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.objectstore": "bluestore",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_id": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.vdo": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.with_tpm": "0"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             },
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "vg_name": "ceph_vg0"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         }
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     ],
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     "1": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "devices": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "/dev/loop4"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             ],
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_name": "ceph_lv1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_size": "21470642176",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "name": "ceph_lv1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "tags": {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_name": "ceph",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.crush_device_class": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.encrypted": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.objectstore": "bluestore",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_id": "1",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.vdo": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.with_tpm": "0"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             },
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "vg_name": "ceph_vg1"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         }
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     ],
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     "2": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "devices": [
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "/dev/loop5"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             ],
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_name": "ceph_lv2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_size": "21470642176",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "name": "ceph_lv2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "tags": {
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.cluster_name": "ceph",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.crush_device_class": "",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.encrypted": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.objectstore": "bluestore",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osd_id": "2",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.vdo": "0",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:                 "ceph.with_tpm": "0"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             },
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "type": "block",
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:             "vg_name": "ceph_vg2"
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:         }
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]:     ]
Feb 28 10:07:06 compute-0 romantic_matsumoto[281044]: }
Feb 28 10:07:06 compute-0 systemd[1]: libpod-f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d.scope: Deactivated successfully.
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:06.396449268 +0000 UTC m=+0.527432649 container died f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b65eed13513205ebaae5b1ad77c7551e6aa27341b074c67b53c570302a96903-merged.mount: Deactivated successfully.
Feb 28 10:07:06 compute-0 podman[281027]: 2026-02-28 10:07:06.517987805 +0000 UTC m=+0.648971176 container remove f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:07:06 compute-0 systemd[1]: libpod-conmon-f1eaf81bc1673edf5a3b20407e154ad66723fad7d30f683e7620ece7d0fe422d.scope: Deactivated successfully.
Feb 28 10:07:06 compute-0 sudo[280931]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.571 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.571 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.590 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.592 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.592 243456 INFO nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Creating image(s)
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.630 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:06 compute-0 sudo[281069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:07:06 compute-0 sudo[281069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:06 compute-0 sudo[281069]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.671 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.713 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:06 compute-0 sudo[281119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.717 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:06 compute-0 sudo[281119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.753 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273211.6266735, 2b863bc5-8018-491d-82ee-dbc8f40d5aff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.754 243456 INFO nova.compute.manager [-] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] VM Stopped (Lifecycle Event)
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.756 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.763 243456 DEBUG nova.compute.manager [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-changed-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.763 243456 DEBUG nova.compute.manager [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing instance network info cache due to event network-changed-747bc967-2869-43ae-bc69-d818504d5496. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.764 243456 DEBUG oslo_concurrency.lockutils [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.789 243456 DEBUG nova.compute.manager [None req-1488a662-71d8-4355-affb-614d77a05ea2 - - - - - -] [instance: 2b863bc5-8018-491d-82ee-dbc8f40d5aff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.793 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.794 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.795 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.795 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.821 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.825 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.887 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.887 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.902 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:07:06 compute-0 nova_compute[243452]: 2026-02-28 10:07:06.903 243456 INFO nova.compute.claims [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.073729349 +0000 UTC m=+0.092734798 container create f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.012676293 +0000 UTC m=+0.031681762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.115 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:07 compute-0 systemd[1]: Started libpod-conmon-f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe.scope.
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.153 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:07 compute-0 ceph-mon[76304]: pgmap v1166: 305 pgs: 305 active+clean; 438 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 160 op/s
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.188903427 +0000 UTC m=+0.207908896 container init f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.198121056 +0000 UTC m=+0.217126495 container start f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.201950664 +0000 UTC m=+0.220956103 container attach f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 10:07:07 compute-0 reverent_gould[281241]: 167 167
Feb 28 10:07:07 compute-0 systemd[1]: libpod-f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe.scope: Deactivated successfully.
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.207278954 +0000 UTC m=+0.226284403 container died f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0891be5c333d8521c4d6e468e7a7a0e0ec31c67e5d3760cd9cdd6768931ab16-merged.mount: Deactivated successfully.
Feb 28 10:07:07 compute-0 podman[281225]: 2026-02-28 10:07:07.240927239 +0000 UTC m=+0.259932678 container remove f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.240 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] resizing rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:07:07 compute-0 systemd[1]: libpod-conmon-f35744c9de2f5dd18b99e342c5256312cd030239a3cac51eedcdc6d13b5933fe.scope: Deactivated successfully.
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.331 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.339 243456 DEBUG nova.objects.instance [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 090c2598-73ab-42de-88d7-3959c3b6ebd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.353 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.353 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Ensure instance console log exists: /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.356 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.356 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.356 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.358 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.364 243456 WARNING nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.373 243456 DEBUG nova.virt.libvirt.host [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.375 243456 DEBUG nova.virt.libvirt.host [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.380 243456 DEBUG nova.virt.libvirt.host [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.380 243456 DEBUG nova.virt.libvirt.host [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.381 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.381 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.382 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.382 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.382 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.383 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.383 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.383 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.384 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.384 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.384 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.384 243456 DEBUG nova.virt.hardware [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.388 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:07 compute-0 podman[281356]: 2026-02-28 10:07:07.416183957 +0000 UTC m=+0.047382314 container create b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:07:07 compute-0 systemd[1]: Started libpod-conmon-b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e.scope.
Feb 28 10:07:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:07 compute-0 podman[281356]: 2026-02-28 10:07:07.398213561 +0000 UTC m=+0.029411928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4fb7e0a60413266a2dbbd9700b4c07900c7bd7ac991d51e4c71a714fd18142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4fb7e0a60413266a2dbbd9700b4c07900c7bd7ac991d51e4c71a714fd18142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4fb7e0a60413266a2dbbd9700b4c07900c7bd7ac991d51e4c71a714fd18142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b4fb7e0a60413266a2dbbd9700b4c07900c7bd7ac991d51e4c71a714fd18142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:07 compute-0 podman[281356]: 2026-02-28 10:07:07.514340156 +0000 UTC m=+0.145538533 container init b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:07:07 compute-0 podman[281356]: 2026-02-28 10:07:07.521404705 +0000 UTC m=+0.152603052 container start b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:07:07 compute-0 podman[281356]: 2026-02-28 10:07:07.525556311 +0000 UTC m=+0.156754858 container attach b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.540 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273212.5371068, 07582da6-e482-439d-b147-937e74817014 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.542 243456 INFO nova.compute.manager [-] [instance: 07582da6-e482-439d-b147-937e74817014] VM Stopped (Lifecycle Event)
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.564 243456 DEBUG nova.compute.manager [None req-b3822a5c-690d-471d-8ff0-4a6b6717643a - - - - - -] [instance: 07582da6-e482-439d-b147-937e74817014] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401833208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.691 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.700 243456 DEBUG nova.compute.provider_tree [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.715 243456 DEBUG nova.scheduler.client.report [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.735 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.735 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.786 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.806 243456 INFO nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.827 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:07:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364386392' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.930 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.933 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.933 243456 INFO nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating image(s)
Feb 28 10:07:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 456 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 172 op/s
Feb 28 10:07:07 compute-0 nova_compute[243452]: 2026-02-28 10:07:07.958 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.010 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.046 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.056 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.094 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updated VIF entry in instance network info cache for port 747bc967-2869-43ae-bc69-d818504d5496. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.094 243456 DEBUG nova.network.neutron [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.097 243456 INFO nova.network.neutron [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Port 97c93122-26c8-464e-b452-aaa22188a591 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.097 243456 DEBUG nova.network.neutron [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.098 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.144 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.148 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.176 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3401833208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/364386392' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:08 compute-0 ceph-mon[76304]: pgmap v1167: 305 pgs: 305 active+clean; 456 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 172 op/s
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.180 243456 DEBUG oslo_concurrency.lockutils [req-32f51c30-eaaa-4709-95ff-d3c08f5e62b6 req-c44ea246-e2b6-4dd8-88c5-442448ab452c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.181 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.181 243456 DEBUG oslo_concurrency.lockutils [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.181 243456 DEBUG nova.network.neutron [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing network info cache for port 747bc967-2869-43ae-bc69-d818504d5496 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.182 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.182 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.182 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.210 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.220 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:08 compute-0 lvm[281569]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:07:08 compute-0 lvm[281569]: VG ceph_vg1 finished
Feb 28 10:07:08 compute-0 lvm[281571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:07:08 compute-0 lvm[281571]: VG ceph_vg2 finished
Feb 28 10:07:08 compute-0 lvm[281568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:07:08 compute-0 lvm[281568]: VG ceph_vg0 finished
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.255 243456 DEBUG oslo_concurrency.lockutils [None req-5ea96674-635d-4768-81ec-0a4e634aafe5 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-c4dce6af-958c-4c5a-890b-469443cee915-97c93122-26c8-464e-b452-aaa22188a591" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:08 compute-0 wonderful_wu[281373]: {}
Feb 28 10:07:08 compute-0 systemd[1]: libpod-b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e.scope: Deactivated successfully.
Feb 28 10:07:08 compute-0 podman[281356]: 2026-02-28 10:07:08.338870576 +0000 UTC m=+0.970068933 container died b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:07:08 compute-0 systemd[1]: libpod-b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e.scope: Consumed 1.268s CPU time.
Feb 28 10:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b4fb7e0a60413266a2dbbd9700b4c07900c7bd7ac991d51e4c71a714fd18142-merged.mount: Deactivated successfully.
Feb 28 10:07:08 compute-0 podman[281356]: 2026-02-28 10:07:08.478708988 +0000 UTC m=+1.109907385 container remove b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:07:08 compute-0 systemd[1]: libpod-conmon-b24d0121a2d447cfced11092e20a7bcc9385903b57066b6fa0b866e06792c35e.scope: Deactivated successfully.
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.508 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.508 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.509 243456 INFO nova.compute.manager [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Rebooting instance
Feb 28 10:07:08 compute-0 sudo[281119]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.526 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:07:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.532 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:08 compute-0 sudo[281630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:07:08 compute-0 sudo[281630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:07:08 compute-0 sudo[281630]: pam_unix(sudo:session): session closed for user root
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.614 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] resizing rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:07:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070876354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.703 243456 DEBUG nova.objects.instance [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.711 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.713 243456 DEBUG nova.objects.instance [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 090c2598-73ab-42de-88d7-3959c3b6ebd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.719 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.720 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Ensure instance console log exists: /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.720 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.721 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.721 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.722 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.727 243456 WARNING nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.734 243456 DEBUG nova.virt.libvirt.host [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.735 243456 DEBUG nova.virt.libvirt.host [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.742 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <uuid>090c2598-73ab-42de-88d7-3959c3b6ebd2</uuid>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <name>instance-0000002d</name>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV247Test-server-663180967</nova:name>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:07:07</nova:creationTime>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:user uuid="4c3e34e421f447f386ae2320858c95b8">tempest-ServerShowV247Test-321012550-project-member</nova:user>
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <nova:project uuid="696c1808e0d64cc7b255195d851e06d1">tempest-ServerShowV247Test-321012550</nova:project>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="serial">090c2598-73ab-42de-88d7-3959c3b6ebd2</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="uuid">090c2598-73ab-42de-88d7-3959c3b6ebd2</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/090c2598-73ab-42de-88d7-3959c3b6ebd2_disk">
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config">
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/console.log" append="off"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:07:08 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:07:08 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:08 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:08 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:08 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.743 243456 DEBUG nova.virt.libvirt.host [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.744 243456 DEBUG nova.virt.libvirt.host [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.744 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.745 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.745 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.745 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.745 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.746 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.747 243456 DEBUG nova.virt.hardware [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.750 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.840 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.842 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.842 243456 INFO nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Using config drive
Feb 28 10:07:08 compute-0 nova_compute[243452]: 2026-02-28 10:07:08.866 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541611606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.279 243456 INFO nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Creating config drive at /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.283 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpywnd75aw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.308 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.331 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.335 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.361 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.402 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.402 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.403 243456 DEBUG nova.objects.instance [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.412 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpywnd75aw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.434 243456 DEBUG nova.storage.rbd_utils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.437 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.472 243456 DEBUG nova.compute.manager [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.473 243456 DEBUG nova.compute.manager [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing instance network info cache due to event network-changed-bd50336f-b10b-46c9-91bd-81e086b2e80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.473 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.473 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.473 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Refreshing network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:07:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3070876354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/541611606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.595 243456 DEBUG oslo_concurrency.processutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config 090c2598-73ab-42de-88d7-3959c3b6ebd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.596 243456 INFO nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Deleting local config drive /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2/disk.config because it was imported into RBD.
Feb 28 10:07:09 compute-0 systemd-machined[209480]: New machine qemu-49-instance-0000002d.
Feb 28 10:07:09 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002d.
Feb 28 10:07:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551397049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.896 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.898 243456 DEBUG nova.objects.instance [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:09 compute-0 nova_compute[243452]: 2026-02-28 10:07:09.940 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <uuid>2c8189d8-e4a5-412d-bd69-b690e34b8f4c</uuid>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <name>instance-0000002e</name>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV247Test-server-2138859692</nova:name>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:07:08</nova:creationTime>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:user uuid="4c3e34e421f447f386ae2320858c95b8">tempest-ServerShowV247Test-321012550-project-member</nova:user>
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <nova:project uuid="696c1808e0d64cc7b255195d851e06d1">tempest-ServerShowV247Test-321012550</nova:project>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="serial">2c8189d8-e4a5-412d-bd69-b690e34b8f4c</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="uuid">2c8189d8-e4a5-412d-bd69-b690e34b8f4c</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk">
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config">
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/console.log" append="off"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:07:09 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:07:09 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:09 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:09 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:09 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:07:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 498 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 149 op/s
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.006 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.006 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.007 243456 INFO nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Using config drive
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.033 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.197 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273230.1972766, 090c2598-73ab-42de-88d7-3959c3b6ebd2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.198 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] VM Resumed (Lifecycle Event)
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.201 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.201 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.205 243456 INFO nova.virt.libvirt.driver [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Instance spawned successfully.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.205 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.225 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.230 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.231 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.231 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.231 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.232 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.232 243456 DEBUG nova.virt.libvirt.driver [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.236 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.279 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.279 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273230.2006464, 090c2598-73ab-42de-88d7-3959c3b6ebd2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.280 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] VM Started (Lifecycle Event)
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.300 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.305 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.329 243456 INFO nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Took 3.74 seconds to spawn the instance on the hypervisor.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.330 243456 DEBUG nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.331 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.335 243456 INFO nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating config drive at /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.341 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgupojvp8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.388 243456 DEBUG nova.network.neutron [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updated VIF entry in instance network info cache for port 747bc967-2869-43ae-bc69-d818504d5496. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.389 243456 DEBUG nova.network.neutron [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.421 243456 DEBUG oslo_concurrency.lockutils [req-f186c40c-aa4f-41cb-9d18-163ddd721573 req-ffb40910-d807-437d-b3da-26c692de66d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.422 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquired lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.423 243456 DEBUG nova.network.neutron [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.456 243456 INFO nova.compute.manager [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Took 5.34 seconds to build instance.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.482 243456 DEBUG oslo_concurrency.lockutils [None req-76b7ec47-d9a8-46cc-9c84-1fa08f9176e3 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.496 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgupojvp8" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.523 243456 DEBUG nova.storage.rbd_utils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.527 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2551397049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:10 compute-0 ceph-mon[76304]: pgmap v1168: 305 pgs: 305 active+clean; 498 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 149 op/s
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.695 243456 DEBUG oslo_concurrency.processutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.696 243456 INFO nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deleting local config drive /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config because it was imported into RBD.
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.716 243456 DEBUG nova.objects.instance [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'pci_requests' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:10 compute-0 nova_compute[243452]: 2026-02-28 10:07:10.732 243456 DEBUG nova.network.neutron [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:07:10 compute-0 systemd-machined[209480]: New machine qemu-50-instance-0000002e.
Feb 28 10:07:10 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002e.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.357 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273231.3566256, 2c8189d8-e4a5-412d-bd69-b690e34b8f4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.357 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] VM Resumed (Lifecycle Event)
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.360 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.361 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.365 243456 INFO nova.virt.libvirt.driver [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance spawned successfully.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.366 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.392 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.398 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.402 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.402 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.402 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.403 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.403 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.404 243456 DEBUG nova.virt.libvirt.driver [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.440 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.440 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273231.3617866, 2c8189d8-e4a5-412d-bd69-b690e34b8f4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.440 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] VM Started (Lifecycle Event)
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.487 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.491 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.495 243456 INFO nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Took 3.56 seconds to spawn the instance on the hypervisor.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.495 243456 DEBUG nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.509 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.548 243456 INFO nova.compute.manager [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Took 4.69 seconds to build instance.
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.567 243456 DEBUG oslo_concurrency.lockutils [None req-dec64d0a-b43c-43c3-9926-d6bddee629ca 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.621 243456 DEBUG nova.policy [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1d18942aa7a4f4f9f673058f8c5f232', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5478fb37e3fa483c86ed85ad939d42da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.788 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updated VIF entry in instance network info cache for port bd50336f-b10b-46c9-91bd-81e086b2e80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.789 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [{"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.815 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4dce6af-958c-4c5a-890b-469443cee915" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.816 243456 DEBUG nova.compute.manager [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.816 243456 DEBUG nova.compute.manager [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing instance network info cache due to event network-changed-f2eab801-3c6f-481b-98bf-9751a9a7c6d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.817 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.817 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:11 compute-0 nova_compute[243452]: 2026-02-28 10:07:11.817 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 515 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 187 op/s
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.001 243456 DEBUG nova.network.neutron [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.017 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Releasing lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.019 243456 DEBUG nova.compute.manager [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:12 compute-0 kernel: tap747bc967-28 (unregistering): left promiscuous mode
Feb 28 10:07:12 compute-0 NetworkManager[49805]: <info>  [1772273232.1895] device (tap747bc967-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:12 compute-0 ovn_controller[146846]: 2026-02-28T10:07:12Z|00335|binding|INFO|Releasing lport 747bc967-2869-43ae-bc69-d818504d5496 from this chassis (sb_readonly=0)
Feb 28 10:07:12 compute-0 ovn_controller[146846]: 2026-02-28T10:07:12Z|00336|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 down in Southbound
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 ovn_controller[146846]: 2026-02-28T10:07:12Z|00337|binding|INFO|Removing iface tap747bc967-28 ovn-installed in OVS
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.202 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.204 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:b6:8f 10.100.0.6'], port_security=['fa:16:3e:1e:b6:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af024638-459f-45c8-b52b-7d9ec937745a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '5', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98 ab180755-48b8-45d2-a317-402bdfeca113', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=747bc967-2869-43ae-bc69-d818504d5496) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.206 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 747bc967-2869-43ae-bc69-d818504d5496 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 unbound from our chassis
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.207 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.222 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b34f0092-9cd9-4ccd-b792-ec156dad338b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Feb 28 10:07:12 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Consumed 10.816s CPU time.
Feb 28 10:07:12 compute-0 systemd-machined[209480]: Machine qemu-48-instance-0000002c terminated.
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.246 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7567d01b-a945-4246-a594-082983e789e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.249 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53f50428-1cf4-4b7d-9563-ff932b905b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.280 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a65bd9ff-cbe3-4a37-94c7-cc51b1ac7068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b860824-8040-418d-9605-b0f107e1a078]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282025, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e433001-bb77-4bf9-ac9c-971bee4cdadb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469127, 'tstamp': 469127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282026, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469130, 'tstamp': 469130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282026, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.325 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf973a3f2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.325 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.326 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf973a3f2-c0, col_values=(('external_ids', {'iface-id': '610498de-6d7e-49bb-b4f4-0bb4f081afde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:12.326 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.384 243456 INFO nova.virt.libvirt.driver [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance destroyed successfully.
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.385 243456 DEBUG nova.objects.instance [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'resources' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.404 243456 DEBUG nova.virt.libvirt.vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:07:12Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.404 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.405 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.405 243456 DEBUG os_vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.408 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap747bc967-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.415 243456 INFO os_vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28')
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.422 243456 DEBUG nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start _get_guest_xml network_info=[{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.426 243456 WARNING nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.430 243456 DEBUG nova.virt.libvirt.host [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.430 243456 DEBUG nova.virt.libvirt.host [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.433 243456 DEBUG nova.virt.libvirt.host [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.434 243456 DEBUG nova.virt.libvirt.host [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.434 243456 DEBUG nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.435 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.435 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.435 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.435 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.436 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.436 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.436 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.436 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.436 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.437 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.437 243456 DEBUG nova.virt.hardware [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.437 243456 DEBUG nova.objects.instance [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'vcpu_model' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:12 compute-0 nova_compute[243452]: 2026-02-28 10:07:12.466 243456 DEBUG oslo_concurrency.processutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920508103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:13 compute-0 ceph-mon[76304]: pgmap v1169: 305 pgs: 305 active+clean; 515 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 187 op/s
Feb 28 10:07:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2920508103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.015 243456 DEBUG oslo_concurrency.processutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.050 243456 DEBUG oslo_concurrency.processutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422990498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.662 243456 DEBUG oslo_concurrency.processutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.663 243456 DEBUG nova.virt.libvirt.vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:07:12Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.664 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.665 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.666 243456 DEBUG nova.objects.instance [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'pci_devices' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.680 243456 DEBUG nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <uuid>af024638-459f-45c8-b52b-7d9ec937745a</uuid>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <name>instance-0000002c</name>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1579937397</nova:name>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:07:12</nova:creationTime>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:user uuid="c9a7366cce344abcb7310041ed02610a">tempest-SecurityGroupsTestJSON-392060184-project-member</nova:user>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:project uuid="5df107d99f104138b864f28cf3b749ad">tempest-SecurityGroupsTestJSON-392060184</nova:project>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <nova:port uuid="747bc967-2869-43ae-bc69-d818504d5496">
Feb 28 10:07:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="serial">af024638-459f-45c8-b52b-7d9ec937745a</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="uuid">af024638-459f-45c8-b52b-7d9ec937745a</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af024638-459f-45c8-b52b-7d9ec937745a_disk">
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af024638-459f-45c8-b52b-7d9ec937745a_disk.config">
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:1e:b6:8f"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <target dev="tap747bc967-28"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a/console.log" append="off"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:07:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:07:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.681 243456 DEBUG nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.681 243456 DEBUG nova.virt.libvirt.driver [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.682 243456 DEBUG nova.virt.libvirt.vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:07:12Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.682 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.683 243456 DEBUG nova.network.os_vif_util [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.684 243456 DEBUG os_vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.687 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.687 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap747bc967-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.688 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap747bc967-28, col_values=(('external_ids', {'iface-id': '747bc967-2869-43ae-bc69-d818504d5496', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:b6:8f', 'vm-uuid': 'af024638-459f-45c8-b52b-7d9ec937745a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 NetworkManager[49805]: <info>  [1772273233.6906] manager: (tap747bc967-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.696 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.697 243456 INFO os_vif [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28')
Feb 28 10:07:13 compute-0 NetworkManager[49805]: <info>  [1772273233.7646] manager: (tap747bc967-28): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Feb 28 10:07:13 compute-0 systemd-udevd[282013]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:07:13 compute-0 kernel: tap747bc967-28: entered promiscuous mode
Feb 28 10:07:13 compute-0 ovn_controller[146846]: 2026-02-28T10:07:13Z|00338|binding|INFO|Claiming lport 747bc967-2869-43ae-bc69-d818504d5496 for this chassis.
Feb 28 10:07:13 compute-0 ovn_controller[146846]: 2026-02-28T10:07:13Z|00339|binding|INFO|747bc967-2869-43ae-bc69-d818504d5496: Claiming fa:16:3e:1e:b6:8f 10.100.0.6
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.782 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:b6:8f 10.100.0.6'], port_security=['fa:16:3e:1e:b6:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af024638-459f-45c8-b52b-7d9ec937745a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '5', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98 ab180755-48b8-45d2-a317-402bdfeca113', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=747bc967-2869-43ae-bc69-d818504d5496) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.784 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 747bc967-2869-43ae-bc69-d818504d5496 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 bound to our chassis
Feb 28 10:07:13 compute-0 NetworkManager[49805]: <info>  [1772273233.7858] device (tap747bc967-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:07:13 compute-0 NetworkManager[49805]: <info>  [1772273233.7867] device (tap747bc967-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.787 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:07:13 compute-0 ovn_controller[146846]: 2026-02-28T10:07:13Z|00340|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 ovn-installed in OVS
Feb 28 10:07:13 compute-0 ovn_controller[146846]: 2026-02-28T10:07:13Z|00341|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 up in Southbound
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.808 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2e4b82-fc24-4b29-b9e5-aab7fae547af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 systemd-machined[209480]: New machine qemu-51-instance-0000002c.
Feb 28 10:07:13 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002c.
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.840 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[88aea256-e0ae-439a-adf7-0a8f17b24e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.856 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbe8b01-25f0-447a-8971-3b5204e91a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.886 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c5fdaa-c38b-41cb-9a74-9a0aaf4152b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.905 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a3049ad6-288b-45b5-aa26-1f4465332248]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282126, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.914 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.915 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.915 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.916 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60dcb9fa-f7b6-415d-86e5-d423d4613d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.921 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cc784a-2477-44bf-95a3-d667d3128898]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469127, 'tstamp': 469127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282127, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469130, 'tstamp': 469130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282127, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.923 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 nova_compute[243452]: 2026-02-28 10:07:13.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf973a3f2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf973a3f2-c0, col_values=(('external_ids', {'iface-id': '610498de-6d7e-49bb-b4f4-0bb4f081afde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:13.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 260 op/s
Feb 28 10:07:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3422990498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.477 243456 DEBUG nova.network.neutron [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Successfully updated port: 97c93122-26c8-464e-b452-aaa22188a591 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.497 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.612 243456 DEBUG nova.compute.manager [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.612 243456 DEBUG oslo_concurrency.lockutils [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.614 243456 DEBUG oslo_concurrency.lockutils [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.614 243456 DEBUG oslo_concurrency.lockutils [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.614 243456 DEBUG nova.compute.manager [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.615 243456 WARNING nova.compute.manager [req-0f4946ac-f3c3-4a73-b954-6dfa841daeba req-753d3ca9-786c-4a26-920d-67078983e978 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state active and task_state reboot_started_hard.
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.796 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for af024638-459f-45c8-b52b-7d9ec937745a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.797 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273234.794802, af024638-459f-45c8-b52b-7d9ec937745a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.798 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Resumed (Lifecycle Event)
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.801 243456 DEBUG nova.compute.manager [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.806 243456 INFO nova.virt.libvirt.driver [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance rebooted successfully.
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.807 243456 DEBUG nova.compute.manager [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.835 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.899 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273234.7961814, af024638-459f-45c8-b52b-7d9ec937745a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.899 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Started (Lifecycle Event)
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.902 243456 INFO nova.compute.manager [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Rebuilding instance
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.914 243456 DEBUG oslo_concurrency.lockutils [None req-9a012f4f-894b-44a2-8bb1-aa9b119e5265 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.920 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.925 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.959 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updated VIF entry in instance network info cache for port f2eab801-3c6f-481b-98bf-9751a9a7c6d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.960 243456 DEBUG nova.network.neutron [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.977 243456 DEBUG oslo_concurrency.lockutils [req-93d2fed7-66c9-457b-8624-52b041f0f5b0 req-7a48dc56-34e5-43b7-ba36-4e872e0f0074 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.977 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:14 compute-0 nova_compute[243452]: 2026-02-28 10:07:14.978 243456 DEBUG nova.network.neutron [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:15 compute-0 ceph-mon[76304]: pgmap v1170: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 260 op/s
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.159 243456 WARNING nova.network.neutron [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] 60dcefc3-95e1-437e-9c00-e51656c39b8f already exists in list: networks containing: ['60dcefc3-95e1-437e-9c00-e51656c39b8f']. ignoring it
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.184 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.236 243456 DEBUG nova.compute.manager [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.361 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.379 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.400 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'resources' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.411 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.419 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [{"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.424 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.427 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.433 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-60dcb9fa-f7b6-415d-86e5-d423d4613d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.434 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.434 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.435 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.467 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.468 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.468 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.468 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:07:15 compute-0 nova_compute[243452]: 2026-02-28 10:07:15.469 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.8 MiB/s wr, 331 op/s
Feb 28 10:07:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604543875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.198 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.315 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.316 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.320 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.321 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.325 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.325 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.330 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.330 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.335 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.335 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.339 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.339 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.573 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.575 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3255MB free_disk=59.78854111209512GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.575 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.576 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.658 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 60dcb9fa-f7b6-415d-86e5-d423d4613d6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.659 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c4dce6af-958c-4c5a-890b-469443cee915 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.659 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.659 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance af024638-459f-45c8-b52b-7d9ec937745a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.659 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 090c2598-73ab-42de-88d7-3959c3b6ebd2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.660 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 2c8189d8-e4a5-412d-bd69-b690e34b8f4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.660 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.660 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:07:16 compute-0 nova_compute[243452]: 2026-02-28 10:07:16.824 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:17 compute-0 ceph-mon[76304]: pgmap v1171: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.8 MiB/s wr, 331 op/s
Feb 28 10:07:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/604543875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590730015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.394 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.403 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.422 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.452 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.452 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.909 243456 DEBUG nova.network.neutron [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.928 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.931 243456 DEBUG nova.virt.libvirt.vif [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.932 243456 DEBUG nova.network.os_vif_util [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.932 243456 DEBUG nova.network.os_vif_util [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.933 243456 DEBUG os_vif [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.933 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.933 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.934 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.939 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97c93122-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.939 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97c93122-26, col_values=(('external_ids', {'iface-id': '97c93122-26c8-464e-b452-aaa22188a591', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:61:f1', 'vm-uuid': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:17 compute-0 NetworkManager[49805]: <info>  [1772273237.9428] manager: (tap97c93122-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 253 op/s
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.952 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.952 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 WARNING nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state active and task_state None.
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-changed-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.953 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing instance network info cache due to event network-changed-97c93122-26c8-464e-b452-aaa22188a591. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.954 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.954 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.954 243456 DEBUG nova.network.neutron [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Refreshing network info cache for port 97c93122-26c8-464e-b452-aaa22188a591 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.956 243456 INFO os_vif [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26')
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.957 243456 DEBUG nova.virt.libvirt.vif [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.957 243456 DEBUG nova.network.os_vif_util [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.957 243456 DEBUG nova.network.os_vif_util [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.960 243456 DEBUG nova.virt.libvirt.guest [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:07:17 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:17 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:17 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:17 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:17 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:17 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:17 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:07:17 compute-0 kernel: tap97c93122-26: entered promiscuous mode
Feb 28 10:07:17 compute-0 NetworkManager[49805]: <info>  [1772273237.9767] manager: (tap97c93122-26): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Feb 28 10:07:17 compute-0 ovn_controller[146846]: 2026-02-28T10:07:17Z|00342|binding|INFO|Claiming lport 97c93122-26c8-464e-b452-aaa22188a591 for this chassis.
Feb 28 10:07:17 compute-0 nova_compute[243452]: 2026-02-28 10:07:17.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:17 compute-0 ovn_controller[146846]: 2026-02-28T10:07:17Z|00343|binding|INFO|97c93122-26c8-464e-b452-aaa22188a591: Claiming fa:16:3e:81:61:f1 10.100.0.3
Feb 28 10:07:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:17.989 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:61:f1 10.100.0.3'], port_security=['fa:16:3e:81:61:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '7', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=97c93122-26c8-464e-b452-aaa22188a591) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:17.990 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 97c93122-26c8-464e-b452-aaa22188a591 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f bound to our chassis
Feb 28 10:07:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:17.991 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:07:18 compute-0 systemd-udevd[282221]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:07:18 compute-0 ovn_controller[146846]: 2026-02-28T10:07:18Z|00344|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 ovn-installed in OVS
Feb 28 10:07:18 compute-0 ovn_controller[146846]: 2026-02-28T10:07:18Z|00345|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 up in Southbound
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:18 compute-0 NetworkManager[49805]: <info>  [1772273238.0181] device (tap97c93122-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:07:18 compute-0 NetworkManager[49805]: <info>  [1772273238.0192] device (tap97c93122-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.020 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e5eade-00e9-42d0-974d-798e4a249abd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3590730015' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.069 243456 DEBUG nova.virt.libvirt.driver [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.069 243456 DEBUG nova.virt.libvirt.driver [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.068 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3566bf3d-c4e9-4a67-ae6d-1a73dc1e40ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.069 243456 DEBUG nova.virt.libvirt.driver [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:75:95:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.069 243456 DEBUG nova.virt.libvirt.driver [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] No VIF found with MAC fa:16:3e:81:61:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.076 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[209e0edc-d5e2-4b01-828e-371c4391f8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.095 243456 DEBUG nova.virt.libvirt.guest [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1814385188</nova:name>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:18</nova:creationTime>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:port uuid="f2eab801-3c6f-481b-98bf-9751a9a7c6d6">
Feb 28 10:07:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:18 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:18 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:18 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.104 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffaeedd0-0679-4dea-bb96-905c4821ef2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.121 243456 DEBUG oslo_concurrency.lockutils [None req-4a69d01d-e9f7-4cb0-b508-d03fe99ba3a4 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.121 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3df5ad-163f-4a21-9475-f60eecddec78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282230, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.135 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f31d1788-c634-4faf-995d-393352c0b9aa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282231, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282231, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.137 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:18.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.370 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.370 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:07:18 compute-0 nova_compute[243452]: 2026-02-28 10:07:18.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 ceph-mon[76304]: pgmap v1172: 305 pgs: 305 active+clean; 531 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 253 op/s
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.391 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.391 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.409 243456 DEBUG nova.objects.instance [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'flavor' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.431 243456 DEBUG nova.compute.manager [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-changed-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.431 243456 DEBUG nova.compute.manager [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing instance network info cache due to event network-changed-747bc967-2869-43ae-bc69-d818504d5496. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.431 243456 DEBUG oslo_concurrency.lockutils [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.432 243456 DEBUG oslo_concurrency.lockutils [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.432 243456 DEBUG nova.network.neutron [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Refreshing network info cache for port 747bc967-2869-43ae-bc69-d818504d5496 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.436 243456 DEBUG nova.virt.libvirt.vif [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.436 243456 DEBUG nova.network.os_vif_util [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.437 243456 DEBUG nova.network.os_vif_util [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.442 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.447 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.457 243456 DEBUG nova.virt.libvirt.driver [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Attempting to detach device tap97c93122-26 from instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.458 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:19 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.463 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.468 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface>not found in domain: <domain type='kvm' id='47'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <name>instance-0000002b</name>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <uuid>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</uuid>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1814385188</nova:name>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:18</nova:creationTime>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:port uuid="f2eab801-3c6f-481b-98bf-9751a9a7c6d6">
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='serial'>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='uuid'>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk' index='2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config' index='1'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:75:95:ff'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='tapf2eab801-3c'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:81:61:f1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='tap97c93122-26'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source path='/dev/pts/4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log' append='off'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </target>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/4'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source path='/dev/pts/4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log' append='off'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </console>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c602,c731</label>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c602,c731</imagelabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:19 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.469 243456 INFO nova.virt.libvirt.driver [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap97c93122-26 from instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 from the persistent domain config.
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.470 243456 DEBUG nova.virt.libvirt.driver [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] (1/8): Attempting to detach device tap97c93122-26 with device alias net1 from instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.470 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:81:61:f1"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <target dev="tap97c93122-26"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </interface>
Feb 28 10:07:19 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:07:19 compute-0 kernel: tap97c93122-26 (unregistering): left promiscuous mode
Feb 28 10:07:19 compute-0 NetworkManager[49805]: <info>  [1772273239.5797] device (tap97c93122-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:19 compute-0 ovn_controller[146846]: 2026-02-28T10:07:19Z|00346|binding|INFO|Releasing lport 97c93122-26c8-464e-b452-aaa22188a591 from this chassis (sb_readonly=0)
Feb 28 10:07:19 compute-0 ovn_controller[146846]: 2026-02-28T10:07:19Z|00347|binding|INFO|Setting lport 97c93122-26c8-464e-b452-aaa22188a591 down in Southbound
Feb 28 10:07:19 compute-0 ovn_controller[146846]: 2026-02-28T10:07:19Z|00348|binding|INFO|Removing iface tap97c93122-26 ovn-installed in OVS
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.600 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772273239.5966744, 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.601 243456 DEBUG nova.virt.libvirt.driver [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Start waiting for the detach event from libvirt for device tap97c93122-26 with device alias net1 for instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.602 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.605 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:61:f1 10.100.0.3'], port_security=['fa:16:3e:81:61:f1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1432262919', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '9', 'neutron:security_group_ids': '24808a61-2182-43ba-a46d-e629124276f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=97c93122-26c8-464e-b452-aaa22188a591) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.612 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:81:61:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap97c93122-26"/></interface>not found in domain: <domain type='kvm' id='47'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <name>instance-0000002b</name>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <uuid>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</uuid>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1814385188</nova:name>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:18</nova:creationTime>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:port uuid="f2eab801-3c6f-481b-98bf-9751a9a7c6d6">
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:port uuid="97c93122-26c8-464e-b452-aaa22188a591">
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='serial'>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='uuid'>9bc174be-7ebf-4dfb-a56f-b8855b0b2960</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk' index='2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_disk.config' index='1'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.609 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 97c93122-26c8-464e-b452-aaa22188a591 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.611 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:75:95:ff'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target dev='tapf2eab801-3c'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source path='/dev/pts/4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log' append='off'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       </target>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/4'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <source path='/dev/pts/4'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960/console.log' append='off'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </console>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </input>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c602,c731</label>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c602,c731</imagelabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:19 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.614 243456 INFO nova.virt.libvirt.driver [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully detached device tap97c93122-26 from instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 from the live domain config.
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.615 243456 DEBUG nova.virt.libvirt.vif [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.616 243456 DEBUG nova.network.os_vif_util [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.616 243456 DEBUG nova.network.os_vif_util [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.617 243456 DEBUG os_vif [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.620 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.621 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97c93122-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.626 243456 INFO os_vif [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26')
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.627 243456 DEBUG nova.virt.libvirt.guest [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:name>tempest-tempest.common.compute-instance-1814385188</nova:name>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:07:19</nova:creationTime>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:user uuid="f1d18942aa7a4f4f9f673058f8c5f232">tempest-AttachInterfacesTestJSON-441249959-project-member</nova:user>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:project uuid="5478fb37e3fa483c86ed85ad939d42da">tempest-AttachInterfacesTestJSON-441249959</nova:project>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     <nova:port uuid="f2eab801-3c6f-481b-98bf-9751a9a7c6d6">
Feb 28 10:07:19 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:07:19 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:07:19 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:07:19 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:07:19 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.645 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a53ea90-918b-4fc2-be59-dc5f29a515c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.693 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46991a1f-c076-44d6-873a-680cd4275013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.697 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3eb454-f2c9-4811-b289-005b6454e27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.715 243456 DEBUG nova.network.neutron [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updated VIF entry in instance network info cache for port 97c93122-26c8-464e-b452-aaa22188a591. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.716 243456 DEBUG nova.network.neutron [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.725 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[790dc232-3ae8-4393-b6eb-7f929dda0dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.736 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.737 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.738 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.738 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.738 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.738 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.739 243456 WARNING nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state active and task_state None.
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.739 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.740 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.740 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.740 243456 DEBUG oslo_concurrency.lockutils [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.740 243456 DEBUG nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.741 243456 WARNING nova.compute.manager [req-f008d8d9-7815-4080-9c35-f3d33f717e53 req-324bf69d-cf31-49e4-b358-420b22c6450b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state active and task_state None.
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae319d-e39a-458a-87ab-ada060ad6e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282241, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.766 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9668cf-fed2-400c-8088-a60eb14a868d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282242, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282242, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.768 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.773 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.774 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.775 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:19.775 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 531 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.6 MiB/s wr, 268 op/s
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.983 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.984 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.984 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.985 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.985 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.986 243456 INFO nova.compute.manager [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Terminating instance
Feb 28 10:07:19 compute-0 nova_compute[243452]: 2026-02-28 10:07:19.987 243456 DEBUG nova.compute.manager [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:20 compute-0 kernel: tap747bc967-28 (unregistering): left promiscuous mode
Feb 28 10:07:20 compute-0 NetworkManager[49805]: <info>  [1772273240.0346] device (tap747bc967-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:20 compute-0 ovn_controller[146846]: 2026-02-28T10:07:20Z|00349|binding|INFO|Releasing lport 747bc967-2869-43ae-bc69-d818504d5496 from this chassis (sb_readonly=0)
Feb 28 10:07:20 compute-0 ovn_controller[146846]: 2026-02-28T10:07:20Z|00350|binding|INFO|Setting lport 747bc967-2869-43ae-bc69-d818504d5496 down in Southbound
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 ovn_controller[146846]: 2026-02-28T10:07:20Z|00351|binding|INFO|Removing iface tap747bc967-28 ovn-installed in OVS
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.053 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:b6:8f 10.100.0.6'], port_security=['fa:16:3e:1e:b6:8f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af024638-459f-45c8-b52b-7d9ec937745a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '8', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98 ab180755-48b8-45d2-a317-402bdfeca113 e82126c5-308a-46f4-99e6-03f4febd0c55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=747bc967-2869-43ae-bc69-d818504d5496) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.054 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 747bc967-2869-43ae-bc69-d818504d5496 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 unbound from our chassis
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.056 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.058 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.058 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.059 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.059 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.059 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.059 243456 WARNING nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.060 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.060 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.060 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.060 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.061 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.061 243456 WARNING nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.061 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.061 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.062 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.062 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.062 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.062 243456 WARNING nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-unplugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.063 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.063 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.063 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.063 243456 DEBUG oslo_concurrency.lockutils [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.064 243456 DEBUG nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.064 243456 WARNING nova.compute.manager [req-ae3d7e95-2a77-48c9-88df-8dd4fc50d34e req-3b296386-3564-47be-9812-60df3d33b7e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-plugged-97c93122-26c8-464e-b452-aaa22188a591 for instance with vm_state active and task_state None.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.070 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be51c667-1c05-49e4-94cc-523a1d157189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Feb 28 10:07:20 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002c.scope: Consumed 6.293s CPU time.
Feb 28 10:07:20 compute-0 systemd-machined[209480]: Machine qemu-51-instance-0000002c terminated.
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.122 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7448d7-7ed7-42a8-960e-bca41cbfdca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.126 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f13de271-f6db-4e7f-9fca-15deb7d4abc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.160 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[18db8ac3-9191-491e-8a9b-76313425dc29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.181 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6de35ba-3e0a-4ddf-b768-8653eef299f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf973a3f2-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469115, 'reachable_time': 43459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282251, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4bdcf5-33b8-4106-8092-6782714f40f7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469127, 'tstamp': 469127}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282252, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf973a3f2-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469130, 'tstamp': 469130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282252, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.218 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.226 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf973a3f2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.227 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.228 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf973a3f2-c0, col_values=(('external_ids', {'iface-id': '610498de-6d7e-49bb-b4f4-0bb4f081afde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:20.228 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.230 243456 INFO nova.virt.libvirt.driver [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance destroyed successfully.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.230 243456 DEBUG nova.objects.instance [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'resources' on Instance uuid af024638-459f-45c8-b52b-7d9ec937745a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.245 243456 DEBUG nova.virt.libvirt.vif [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1579937397',display_name='tempest-SecurityGroupsTestJSON-server-1579937397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1579937397',id=44,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:07:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-7rjukza7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:07:14Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=af024638-459f-45c8-b52b-7d9ec937745a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.245 243456 DEBUG nova.network.os_vif_util [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.246 243456 DEBUG nova.network.os_vif_util [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.246 243456 DEBUG os_vif [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.248 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap747bc967-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.254 243456 INFO os_vif [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:b6:8f,bridge_name='br-int',has_traffic_filtering=True,id=747bc967-2869-43ae-bc69-d818504d5496,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap747bc967-28')
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.541 243456 INFO nova.virt.libvirt.driver [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Deleting instance files /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a_del
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.542 243456 INFO nova.virt.libvirt.driver [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Deletion of /var/lib/nova/instances/af024638-459f-45c8-b52b-7d9ec937745a_del complete
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.607 243456 INFO nova.compute.manager [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.607 243456 DEBUG oslo.service.loopingcall [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.608 243456 DEBUG nova.compute.manager [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:20 compute-0 nova_compute[243452]: 2026-02-28 10:07:20.608 243456 DEBUG nova.network.neutron [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:21 compute-0 ceph-mon[76304]: pgmap v1173: 305 pgs: 305 active+clean; 531 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.6 MiB/s wr, 268 op/s
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.129 243456 DEBUG nova.network.neutron [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.154 243456 INFO nova.compute.manager [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Took 0.55 seconds to deallocate network for instance.
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.216 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.217 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.219 243456 DEBUG nova.network.neutron [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updated VIF entry in instance network info cache for port 747bc967-2869-43ae-bc69-d818504d5496. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.219 243456 DEBUG nova.network.neutron [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Updating instance_info_cache with network_info: [{"id": "747bc967-2869-43ae-bc69-d818504d5496", "address": "fa:16:3e:1e:b6:8f", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap747bc967-28", "ovs_interfaceid": "747bc967-2869-43ae-bc69-d818504d5496", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:21 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.245 243456 DEBUG oslo_concurrency.lockutils [req-d49ed28b-a00e-4cb9-aac4-fa810053c9b2 req-58a426fa-2f59-46ce-9aea-4cbf217f0cf3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af024638-459f-45c8-b52b-7d9ec937745a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.389 243456 DEBUG oslo_concurrency.processutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.469 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.470 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquired lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.471 243456 DEBUG nova.network.neutron [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 527 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 268 op/s
Feb 28 10:07:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421033801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.970 243456 DEBUG oslo_concurrency.processutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.977 243456 DEBUG nova.compute.provider_tree [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:21 compute-0 nova_compute[243452]: 2026-02-28 10:07:21.997 243456 DEBUG nova.scheduler.client.report [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.028 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/421033801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.084 243456 INFO nova.scheduler.client.report [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Deleted allocations for instance af024638-459f-45c8-b52b-7d9ec937745a
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.185 243456 DEBUG oslo_concurrency.lockutils [None req-e3c64931-68c2-47fb-8b5a-2d6cd1cd4f95 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.264 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.265 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.266 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.266 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.266 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.267 243456 WARNING nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-unplugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state deleted and task_state None.
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.267 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.268 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af024638-459f-45c8-b52b-7d9ec937745a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.268 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.268 243456 DEBUG oslo_concurrency.lockutils [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af024638-459f-45c8-b52b-7d9ec937745a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.269 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] No waiting events found dispatching network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.269 243456 WARNING nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received unexpected event network-vif-plugged-747bc967-2869-43ae-bc69-d818504d5496 for instance with vm_state deleted and task_state None.
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.270 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Received event network-vif-deleted-747bc967-2869-43ae-bc69-d818504d5496 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.270 243456 INFO nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Neutron deleted interface 747bc967-2869-43ae-bc69-d818504d5496; detaching it from the instance and deleting it from the info cache
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.270 243456 DEBUG nova.network.neutron [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.278 243456 DEBUG nova.compute.manager [req-1b732775-0311-4aa1-aa4d-7a8b3b8e85f9 req-fbcdb179-d684-4b32-be17-1bf955067e98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Detach interface failed, port_id=747bc967-2869-43ae-bc69-d818504d5496, reason: Instance af024638-459f-45c8-b52b-7d9ec937745a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.331 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.332 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.333 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.333 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.334 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.336 243456 INFO nova.compute.manager [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Terminating instance
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.338 243456 DEBUG nova.compute.manager [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:22 compute-0 kernel: tapf2eab801-3c (unregistering): left promiscuous mode
Feb 28 10:07:22 compute-0 NetworkManager[49805]: <info>  [1772273242.3725] device (tapf2eab801-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:22 compute-0 ovn_controller[146846]: 2026-02-28T10:07:22Z|00352|binding|INFO|Releasing lport f2eab801-3c6f-481b-98bf-9751a9a7c6d6 from this chassis (sb_readonly=0)
Feb 28 10:07:22 compute-0 ovn_controller[146846]: 2026-02-28T10:07:22Z|00353|binding|INFO|Setting lport f2eab801-3c6f-481b-98bf-9751a9a7c6d6 down in Southbound
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 ovn_controller[146846]: 2026-02-28T10:07:22Z|00354|binding|INFO|Removing iface tapf2eab801-3c ovn-installed in OVS
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.388 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:95:ff 10.100.0.8'], port_security=['fa:16:3e:75:95:ff 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9bc174be-7ebf-4dfb-a56f-b8855b0b2960', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b06e2ec4-e889-49ea-aafd-6900649d681f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f2eab801-3c6f-481b-98bf-9751a9a7c6d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.389 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f2eab801-3c6f-481b-98bf-9751a9a7c6d6 in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.391 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60dcefc3-95e1-437e-9c00-e51656c39b8f
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.406 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb53060-1c06-484b-a23d-1eea36d61a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Feb 28 10:07:22 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002b.scope: Consumed 12.171s CPU time.
Feb 28 10:07:22 compute-0 systemd-machined[209480]: Machine qemu-47-instance-0000002b terminated.
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.439 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c5602a-7774-4870-b1b0-0a0b41cb7f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.443 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffdd1b8b-821f-4be1-afbc-c12feeb75c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.464 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e30b2f-5835-4a0e-9b61-2d483ebac4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.478 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e8722527-7170-409b-b41a-b1a6639205b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60dcefc3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:22:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469782, 'reachable_time': 41463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282317, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.492 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5484f0c-3e4d-4a0b-9697-3d341f52992b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469794, 'tstamp': 469794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282318, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60dcefc3-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469796, 'tstamp': 469796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282318, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.494 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.501 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60dcefc3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.503 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.503 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60dcefc3-90, col_values=(('external_ids', {'iface-id': '76417fb5-53fd-4986-92c0-fafddb45b2f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:22.504 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.575 243456 INFO nova.virt.libvirt.driver [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Instance destroyed successfully.
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.575 243456 DEBUG nova.objects.instance [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'resources' on Instance uuid 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.590 243456 DEBUG nova.virt.libvirt.vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.590 243456 DEBUG nova.network.os_vif_util [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.591 243456 DEBUG nova.network.os_vif_util [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.592 243456 DEBUG os_vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.595 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.595 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2eab801-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.600 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.602 243456 INFO os_vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:95:ff,bridge_name='br-int',has_traffic_filtering=True,id=f2eab801-3c6f-481b-98bf-9751a9a7c6d6,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2eab801-3c')
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.604 243456 DEBUG nova.virt.libvirt.vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1814385188',display_name='tempest-tempest.common.compute-instance-1814385188',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1814385188',id=43,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-6ym7qka6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=9bc174be-7ebf-4dfb-a56f-b8855b0b2960,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.605 243456 DEBUG nova.network.os_vif_util [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "97c93122-26c8-464e-b452-aaa22188a591", "address": "fa:16:3e:81:61:f1", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97c93122-26", "ovs_interfaceid": "97c93122-26c8-464e-b452-aaa22188a591", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.606 243456 DEBUG nova.network.os_vif_util [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.608 243456 DEBUG os_vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.613 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97c93122-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.613 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.615 243456 DEBUG nova.compute.manager [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-unplugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.616 243456 DEBUG oslo_concurrency.lockutils [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.617 243456 DEBUG oslo_concurrency.lockutils [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.617 243456 DEBUG oslo_concurrency.lockutils [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.617 243456 DEBUG nova.compute.manager [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-unplugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.618 243456 DEBUG nova.compute.manager [req-c8a486be-b234-42cb-b2fb-5c1898d22ef2 req-f9c73076-8915-42ae-ada9-0758cf1f79b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-unplugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.620 243456 INFO os_vif [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:61:f1,bridge_name='br-int',has_traffic_filtering=True,id=97c93122-26c8-464e-b452-aaa22188a591,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap97c93122-26')
Feb 28 10:07:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.923 243456 INFO nova.virt.libvirt.driver [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Deleting instance files /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_del
Feb 28 10:07:22 compute-0 nova_compute[243452]: 2026-02-28 10:07:22.925 243456 INFO nova.virt.libvirt.driver [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Deletion of /var/lib/nova/instances/9bc174be-7ebf-4dfb-a56f-b8855b0b2960_del complete
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.009 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.034 243456 INFO nova.compute.manager [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Took 0.70 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.035 243456 DEBUG oslo.service.loopingcall [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.035 243456 DEBUG nova.compute.manager [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.035 243456 DEBUG nova.network.neutron [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:23 compute-0 ceph-mon[76304]: pgmap v1174: 305 pgs: 305 active+clean; 527 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 268 op/s
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.135 243456 INFO nova.network.neutron [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Port 97c93122-26c8-464e-b452-aaa22188a591 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.135 243456 DEBUG nova.network.neutron [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [{"id": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "address": "fa:16:3e:75:95:ff", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2eab801-3c", "ovs_interfaceid": "f2eab801-3c6f-481b-98bf-9751a9a7c6d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.163 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Releasing lock "refresh_cache-9bc174be-7ebf-4dfb-a56f-b8855b0b2960" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.187 243456 DEBUG oslo_concurrency.lockutils [None req-120257f4-174c-421d-8941-7ce03cae4f4d f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "interface-9bc174be-7ebf-4dfb-a56f-b8855b0b2960-97c93122-26c8-464e-b452-aaa22188a591" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:23 compute-0 nova_compute[243452]: 2026-02-28 10:07:23.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 526 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.745 243456 DEBUG nova.compute.manager [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.746 243456 DEBUG oslo_concurrency.lockutils [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.746 243456 DEBUG oslo_concurrency.lockutils [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.747 243456 DEBUG oslo_concurrency.lockutils [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.747 243456 DEBUG nova.compute.manager [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] No waiting events found dispatching network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:24 compute-0 nova_compute[243452]: 2026-02-28 10:07:24.748 243456 WARNING nova.compute.manager [req-8a2794e8-00f1-4742-ac53-34103dbd0c94 req-fc189b61-34ac-4d07-ae55-a087123ddaab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received unexpected event network-vif-plugged-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 for instance with vm_state active and task_state deleting.
Feb 28 10:07:25 compute-0 ceph-mon[76304]: pgmap v1175: 305 pgs: 305 active+clean; 526 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Feb 28 10:07:25 compute-0 nova_compute[243452]: 2026-02-28 10:07:25.496 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:07:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 490 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 324 op/s
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.783 243456 DEBUG nova.network.neutron [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.804 243456 INFO nova.compute.manager [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Took 3.77 seconds to deallocate network for instance.
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.858 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.858 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.883 243456 DEBUG nova.compute.manager [req-4918c006-e4f8-420d-a22b-f0c1fc17611e req-4ab3f50e-355e-491a-98fd-35db87d884a0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Received event network-vif-deleted-f2eab801-3c6f-481b-98bf-9751a9a7c6d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:26 compute-0 nova_compute[243452]: 2026-02-28 10:07:26.956 243456 DEBUG oslo_concurrency.processutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:27 compute-0 ceph-mon[76304]: pgmap v1176: 305 pgs: 305 active+clean; 490 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 324 op/s
Feb 28 10:07:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595931521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.527 243456 DEBUG oslo_concurrency.processutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.535 243456 DEBUG nova.compute.provider_tree [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.556 243456 DEBUG nova.scheduler.client.report [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.579 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.616 243456 INFO nova.scheduler.client.report [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Deleted allocations for instance 9bc174be-7ebf-4dfb-a56f-b8855b0b2960
Feb 28 10:07:27 compute-0 nova_compute[243452]: 2026-02-28 10:07:27.688 243456 DEBUG oslo_concurrency.lockutils [None req-8f2ab451-e769-4e5f-8492-6b19d9134fb2 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "9bc174be-7ebf-4dfb-a56f-b8855b0b2960" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:27 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Feb 28 10:07:27 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Consumed 12.092s CPU time.
Feb 28 10:07:27 compute-0 systemd-machined[209480]: Machine qemu-50-instance-0000002e terminated.
Feb 28 10:07:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:27 compute-0 podman[282373]: 2026-02-28 10:07:27.888482196 +0000 UTC m=+0.075048490 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:07:27 compute-0 podman[282372]: 2026-02-28 10:07:27.9160099 +0000 UTC m=+0.107983486 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:07:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 471 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 246 op/s
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.159 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1595931521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.513 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance shutdown successfully after 13 seconds.
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.520 243456 INFO nova.virt.libvirt.driver [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance destroyed successfully.
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.529 243456 INFO nova.virt.libvirt.driver [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance destroyed successfully.
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.587 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.820 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deleting instance files /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_del
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.821 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deletion of /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_del complete
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.922 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.922 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.922 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.922 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.923 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.923 243456 INFO nova.compute.manager [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Terminating instance
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.924 243456 DEBUG nova.compute.manager [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.973 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:07:28 compute-0 nova_compute[243452]: 2026-02-28 10:07:28.974 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating image(s)
Feb 28 10:07:28 compute-0 kernel: tapbd50336f-b1 (unregistering): left promiscuous mode
Feb 28 10:07:28 compute-0 NetworkManager[49805]: <info>  [1772273248.9825] device (tapbd50336f-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:28 compute-0 ovn_controller[146846]: 2026-02-28T10:07:28Z|00355|binding|INFO|Releasing lport bd50336f-b10b-46c9-91bd-81e086b2e80e from this chassis (sb_readonly=0)
Feb 28 10:07:28 compute-0 ovn_controller[146846]: 2026-02-28T10:07:28Z|00356|binding|INFO|Setting lport bd50336f-b10b-46c9-91bd-81e086b2e80e down in Southbound
Feb 28 10:07:28 compute-0 ovn_controller[146846]: 2026-02-28T10:07:28Z|00357|binding|INFO|Removing iface tapbd50336f-b1 ovn-installed in OVS
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.009 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3f:66 10.100.0.14'], port_security=['fa:16:3e:9f:3f:66 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c4dce6af-958c-4c5a-890b-469443cee915', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5478fb37e3fa483c86ed85ad939d42da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b06e2ec4-e889-49ea-aafd-6900649d681f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fda1a101-a7b6-4941-9f07-7db3c89276cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bd50336f-b10b-46c9-91bd-81e086b2e80e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.011 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bd50336f-b10b-46c9-91bd-81e086b2e80e in datapath 60dcefc3-95e1-437e-9c00-e51656c39b8f unbound from our chassis
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.012 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60dcefc3-95e1-437e-9c00-e51656c39b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.014 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b06e6211-6bab-44cb-8e43-afc80c5995b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.014 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f namespace which is not needed anymore
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.025 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:29 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 13.605s CPU time.
Feb 28 10:07:29 compute-0 systemd-machined[209480]: Machine qemu-44-instance-00000028 terminated.
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:07:29
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.066 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.094 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.099 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.158 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.160 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.160 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.163 243456 INFO nova.compute.manager [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Terminating instance
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.164 243456 DEBUG nova.compute.manager [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.167 243456 INFO nova.virt.libvirt.driver [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Instance destroyed successfully.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.167 243456 DEBUG nova.objects.instance [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lazy-loading 'resources' on Instance uuid c4dce6af-958c-4c5a-890b-469443cee915 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [NOTICE]   (278657) : haproxy version is 2.8.14-c23fe91
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [NOTICE]   (278657) : path to executable is /usr/sbin/haproxy
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [WARNING]  (278657) : Exiting Master process...
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.170 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [ALERT]    (278657) : Current worker (278659) exited with code 143 (Terminated)
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f[278653]: [WARNING]  (278657) : All workers exited. Exiting... (0)
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.171 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.171 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.172 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:29 compute-0 systemd[1]: libpod-b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 podman[282518]: 2026-02-28 10:07:29.181084146 +0000 UTC m=+0.053561217 container died b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.199 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:29 compute-0 ceph-mon[76304]: pgmap v1177: 305 pgs: 305 active+clean; 471 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 246 op/s
Feb 28 10:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9-userdata-shm.mount: Deactivated successfully.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.205 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbdceedde6d3f00a5c2e93b8e09e5525730c070b00811021d95e9188936c5e1c-merged.mount: Deactivated successfully.
Feb 28 10:07:29 compute-0 podman[282518]: 2026-02-28 10:07:29.231641917 +0000 UTC m=+0.104118988 container cleanup b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:07:29 compute-0 systemd[1]: libpod-conmon-b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.245 243456 DEBUG nova.virt.libvirt.vif [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1836797536',display_name='tempest-tempest.common.compute-instance-1836797536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1836797536',id=40,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJX0Fx82RGhILAY8gSL/81eMdy/bnRqX8dvA0uyc0VMC42rCn5VbA0lngPhybrpyL5tdeNJDAeYEVJl6vb2i2p3dxqnwW1uNXmGSt+gE0lS+RTjVWG5bC73bVNenJhGdiA==',key_name='tempest-keypair-1599844475',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5478fb37e3fa483c86ed85ad939d42da',ramdisk_id='',reservation_id='r-lo9kso4x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-441249959',owner_user_name='tempest-AttachInterfacesTestJSON-441249959-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f1d18942aa7a4f4f9f673058f8c5f232',uuid=c4dce6af-958c-4c5a-890b-469443cee915,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.246 243456 DEBUG nova.network.os_vif_util [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converting VIF {"id": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "address": "fa:16:3e:9f:3f:66", "network": {"id": "60dcefc3-95e1-437e-9c00-e51656c39b8f", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1370459451-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5478fb37e3fa483c86ed85ad939d42da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd50336f-b1", "ovs_interfaceid": "bd50336f-b10b-46c9-91bd-81e086b2e80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.246 243456 DEBUG nova.network.os_vif_util [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.247 243456 DEBUG os_vif [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.251 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.252 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd50336f-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.262 243456 INFO os_vif [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:3f:66,bridge_name='br-int',has_traffic_filtering=True,id=bd50336f-b10b-46c9-91bd-81e086b2e80e,network=Network(60dcefc3-95e1-437e-9c00-e51656c39b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd50336f-b1')
Feb 28 10:07:29 compute-0 kernel: tap0690a322-e7 (unregistering): left promiscuous mode
Feb 28 10:07:29 compute-0 NetworkManager[49805]: <info>  [1772273249.2898] device (tap0690a322-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.290 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.300 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 ovn_controller[146846]: 2026-02-28T10:07:29Z|00358|binding|INFO|Releasing lport 0690a322-e7c3-413d-8780-d9d6a0f84fd2 from this chassis (sb_readonly=0)
Feb 28 10:07:29 compute-0 ovn_controller[146846]: 2026-02-28T10:07:29Z|00359|binding|INFO|Setting lport 0690a322-e7c3-413d-8780-d9d6a0f84fd2 down in Southbound
Feb 28 10:07:29 compute-0 podman[282577]: 2026-02-28 10:07:29.300823202 +0000 UTC m=+0.050076369 container remove b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:07:29 compute-0 ovn_controller[146846]: 2026-02-28T10:07:29Z|00360|binding|INFO|Removing iface tap0690a322-e7 ovn-installed in OVS
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.310 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:c6:3c 10.100.0.12'], port_security=['fa:16:3e:b1:c6:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '60dcb9fa-f7b6-415d-86e5-d423d4613d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5df107d99f104138b864f28cf3b749ad', 'neutron:revision_number': '6', 'neutron:security_group_ids': '428b5966-b573-43eb-a464-fcc424e52e98 48de87b6-786b-4c67-8a86-e6a2d528940d 4bc2a468-05a6-4368-a0d4-55692d75b70e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4575c196-9c47-43a0-8ee2-589635106d32, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0690a322-e7c3-413d-8780-d9d6a0f84fd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.308 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be426009-cc5e-4a4a-8ceb-9cee7a47a7b0]: (4, ('Sat Feb 28 10:07:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9)\nb72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9\nSat Feb 28 10:07:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f (b72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9)\nb72009207f5cdd613749aa78ff121ce80a242f0d26b7faf08e8b84c701e7d4e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.313 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[044cc0b8-a26a-4f93-9662-d2d117fb232a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.314 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60dcefc3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 kernel: tap60dcefc3-90: left promiscuous mode
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bef72ec7-8c46-4c08-ac1a-5f86f9788af2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 14.120s CPU time.
Feb 28 10:07:29 compute-0 systemd-machined[209480]: Machine qemu-43-instance-00000027 terminated.
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.350 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3107fe98-df24-4c96-9c60-4d2715ebb6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.354 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f5dcc6-c9b8-416d-a2b2-7650716ef5c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10f5dcd1-eddd-44c0-8aab-dbf9767ac6b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469774, 'reachable_time': 19115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282635, 'error': None, 'target': 'ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d60dcefc3\x2d95e1\x2d437e\x2d9c00\x2de51656c39b8f.mount: Deactivated successfully.
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.376 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60dcefc3-95e1-437e-9c00-e51656c39b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.377 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b3580462-0696-4c6f-a189-a125aafc1cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.377 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0690a322-e7c3-413d-8780-d9d6a0f84fd2 in datapath f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 unbound from our chassis
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.378 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0458ceac-ec2d-4a8a-b6ec-d998c6b1cdb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.380 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 namespace which is not needed anymore
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.452 243456 DEBUG nova.compute.manager [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-unplugged-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.452 243456 DEBUG oslo_concurrency.lockutils [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.453 243456 DEBUG oslo_concurrency.lockutils [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.453 243456 DEBUG oslo_concurrency.lockutils [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.454 243456 DEBUG nova.compute.manager [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-unplugged-bd50336f-b10b-46c9-91bd-81e086b2e80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.454 243456 DEBUG nova.compute.manager [req-f57674e4-658a-4d98-9efb-44c6f10b8c09 req-7946c5b3-774d-4c99-ba24-f85e1aac3242 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-unplugged-bd50336f-b10b-46c9-91bd-81e086b2e80e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.483 243456 INFO nova.virt.libvirt.driver [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Instance destroyed successfully.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.484 243456 DEBUG nova.objects.instance [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lazy-loading 'resources' on Instance uuid 60dcb9fa-f7b6-415d-86e5-d423d4613d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.502 243456 DEBUG nova.virt.libvirt.vif [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1882743683',display_name='tempest-SecurityGroupsTestJSON-server-1882743683',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1882743683',id=39,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5df107d99f104138b864f28cf3b749ad',ramdisk_id='',reservation_id='r-qqsbvlmq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-392060184',owner_user_name='tempest-SecurityGroupsTestJSON-392060184-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:06:24Z,user_data=None,user_id='c9a7366cce344abcb7310041ed02610a',uuid=60dcb9fa-f7b6-415d-86e5-d423d4613d6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.503 243456 DEBUG nova.network.os_vif_util [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converting VIF {"id": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "address": "fa:16:3e:b1:c6:3c", "network": {"id": "f973a3f2-c3d9-4311-9c7b-ab6ca02111d3", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-352579889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5df107d99f104138b864f28cf3b749ad", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0690a322-e7", "ovs_interfaceid": "0690a322-e7c3-413d-8780-d9d6a0f84fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.504 243456 DEBUG nova.network.os_vif_util [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.505 243456 DEBUG os_vif [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.507 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.507 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0690a322-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [NOTICE]   (277961) : haproxy version is 2.8.14-c23fe91
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [NOTICE]   (277961) : path to executable is /usr/sbin/haproxy
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [WARNING]  (277961) : Exiting Master process...
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [WARNING]  (277961) : Exiting Master process...
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [ALERT]    (277961) : Current worker (277966) exited with code 143 (Terminated)
Feb 28 10:07:29 compute-0 neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3[277942]: [WARNING]  (277961) : All workers exited. Exiting... (0)
Feb 28 10:07:29 compute-0 systemd[1]: libpod-c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 podman[282654]: 2026-02-28 10:07:29.538495354 +0000 UTC m=+0.066299445 container died c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.544 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd14d3e7b115698d53bd6fefdc6f0cca3938be7119c8399bc73dd722379f62e-merged.mount: Deactivated successfully.
Feb 28 10:07:29 compute-0 podman[282654]: 2026-02-28 10:07:29.573536279 +0000 UTC m=+0.101340370 container cleanup c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:07:29 compute-0 systemd[1]: libpod-conmon-c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f.scope: Deactivated successfully.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.587 243456 INFO os_vif [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:c6:3c,bridge_name='br-int',has_traffic_filtering=True,id=0690a322-e7c3-413d-8780-d9d6a0f84fd2,network=Network(f973a3f2-c3d9-4311-9c7b-ab6ca02111d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0690a322-e7')
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.615 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] resizing rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:07:29 compute-0 podman[282724]: 2026-02-28 10:07:29.63830907 +0000 UTC m=+0.046276362 container remove c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.644 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9a80bd-7052-40fe-956a-67306be02879]: (4, ('Sat Feb 28 10:07:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 (c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f)\nc88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f\nSat Feb 28 10:07:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 (c88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f)\nc88ffe19a51fdf24cf67781b51cdb7286aae7e9fc1fdfe6e1ba23f69478c066f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.646 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbe477f-b715-4dc3-b9d4-8727e3f2c024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.646 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf973a3f2-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.648 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 kernel: tapf973a3f2-c0: left promiscuous mode
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.654 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25989d-672a-43f8-b968-96de69da31e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.671 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2e8514-660c-4783-aeae-f2639bb65c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3979bc89-210e-4698-b48a-ae81ee9efa9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.691 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4af658-20a8-466b-a713-3655d9c3c94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469107, 'reachable_time': 33580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282778, 'error': None, 'target': 'ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.694 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f973a3f2-c3d9-4311-9c7b-ab6ca02111d3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:07:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:29.694 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8201c3-aa0c-4d31-b42b-2d18b1056392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.710 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.711 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Ensure instance console log exists: /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.712 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.712 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.713 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.715 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.721 243456 WARNING nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.727 243456 DEBUG nova.virt.libvirt.host [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.728 243456 DEBUG nova.virt.libvirt.host [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.732 243456 DEBUG nova.virt.libvirt.host [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.733 243456 DEBUG nova.virt.libvirt.host [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.733 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.733 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.734 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.734 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.735 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.735 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.735 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.735 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.736 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.737 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.738 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.738 243456 DEBUG nova.virt.hardware [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.738 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.758 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.804 243456 INFO nova.virt.libvirt.driver [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Deleting instance files /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915_del
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.805 243456 INFO nova.virt.libvirt.driver [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Deletion of /var/lib/nova/instances/c4dce6af-958c-4c5a-890b-469443cee915_del complete
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.854 243456 INFO nova.compute.manager [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Took 0.93 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.855 243456 DEBUG oslo.service.loopingcall [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.856 243456 DEBUG nova.compute.manager [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.856 243456 DEBUG nova.network.neutron [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.935 243456 INFO nova.virt.libvirt.driver [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Deleting instance files /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c_del
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.936 243456 INFO nova.virt.libvirt.driver [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Deletion of /var/lib/nova/instances/60dcb9fa-f7b6-415d-86e5-d423d4613d6c_del complete
Feb 28 10:07:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 449 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 249 op/s
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.982 243456 INFO nova.compute.manager [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Took 0.82 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.983 243456 DEBUG oslo.service.loopingcall [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.983 243456 DEBUG nova.compute.manager [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:29 compute-0 nova_compute[243452]: 2026-02-28 10:07:29.984 243456 DEBUG nova.network.neutron [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:30 compute-0 systemd[1]: run-netns-ovnmeta\x2df973a3f2\x2dc3d9\x2d4311\x2d9c7b\x2dab6ca02111d3.mount: Deactivated successfully.
Feb 28 10:07:30 compute-0 ceph-mon[76304]: pgmap v1178: 305 pgs: 305 active+clean; 449 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 249 op/s
Feb 28 10:07:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225017426' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.314 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.335 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.340 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.394 243456 DEBUG nova.network.neutron [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.411 243456 INFO nova.compute.manager [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Took 0.55 seconds to deallocate network for instance.
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.451 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.451 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.552 243456 DEBUG oslo_concurrency.processutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:07:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.611 243456 DEBUG nova.network.neutron [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.629 243456 INFO nova.compute.manager [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Took 0.65 seconds to deallocate network for instance.
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.674 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.777 243456 DEBUG nova.compute.manager [req-32b81d09-9981-40aa-a91a-8e51699b9336 req-766bc442-9e2e-475a-a86b-8e9fffd59b56 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-vif-deleted-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699810972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.923 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:30 compute-0 nova_compute[243452]: 2026-02-28 10:07:30.927 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <uuid>2c8189d8-e4a5-412d-bd69-b690e34b8f4c</uuid>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <name>instance-0000002e</name>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV247Test-server-2138859692</nova:name>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:07:29</nova:creationTime>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:user uuid="4c3e34e421f447f386ae2320858c95b8">tempest-ServerShowV247Test-321012550-project-member</nova:user>
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <nova:project uuid="696c1808e0d64cc7b255195d851e06d1">tempest-ServerShowV247Test-321012550</nova:project>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="serial">2c8189d8-e4a5-412d-bd69-b690e34b8f4c</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="uuid">2c8189d8-e4a5-412d-bd69-b690e34b8f4c</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk">
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config">
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/console.log" append="off"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:07:30 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:07:30 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:30 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:30 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:30 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.007 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.008 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.009 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Using config drive
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.035 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.061 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.090 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196647637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.131 243456 DEBUG oslo_concurrency.processutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.137 243456 DEBUG nova.compute.provider_tree [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.152 243456 DEBUG nova.scheduler.client.report [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.178 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.181 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.210 243456 INFO nova.scheduler.client.report [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Deleted allocations for instance c4dce6af-958c-4c5a-890b-469443cee915
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.247 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Creating config drive at /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.252 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4xxvamkm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3225017426' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3699810972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3196647637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.293 243456 DEBUG oslo_concurrency.lockutils [None req-a7416a15-af57-4961-8b44-44f216c02573 f1d18942aa7a4f4f9f673058f8c5f232 5478fb37e3fa483c86ed85ad939d42da - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.332 243456 DEBUG oslo_concurrency.processutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.395 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4xxvamkm" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.429 243456 DEBUG nova.storage.rbd_utils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] rbd image 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.435 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.556 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.557 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.565 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.565 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4dce6af-958c-4c5a-890b-469443cee915-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.566 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.566 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4dce6af-958c-4c5a-890b-469443cee915-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.566 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] No waiting events found dispatching network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.567 243456 WARNING nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received unexpected event network-vif-plugged-bd50336f-b10b-46c9-91bd-81e086b2e80e for instance with vm_state deleted and task_state None.
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.567 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-vif-unplugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.567 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.567 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.568 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.568 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] No waiting events found dispatching network-vif-unplugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.568 243456 WARNING nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received unexpected event network-vif-unplugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 for instance with vm_state deleted and task_state None.
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.568 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.569 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.569 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.569 243456 DEBUG oslo_concurrency.lockutils [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.570 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] No waiting events found dispatching network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.570 243456 WARNING nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Received unexpected event network-vif-plugged-0690a322-e7c3-413d-8780-d9d6a0f84fd2 for instance with vm_state deleted and task_state None.
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.570 243456 DEBUG nova.compute.manager [req-a38d6110-1e36-4bef-a40c-18e05dd1490e req-5b13efce-93fc-40ce-ab91-b8d69c35f49b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Received event network-vif-deleted-bd50336f-b10b-46c9-91bd-81e086b2e80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.575 243456 DEBUG oslo_concurrency.processutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config 2c8189d8-e4a5-412d-bd69-b690e34b8f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.576 243456 INFO nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deleting local config drive /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c/disk.config because it was imported into RBD.
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.578 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:07:31 compute-0 systemd-machined[209480]: New machine qemu-52-instance-0000002e.
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.640 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:31 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002e.
Feb 28 10:07:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372619909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.875 243456 DEBUG oslo_concurrency.processutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.885 243456 DEBUG nova.compute.provider_tree [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.904 243456 DEBUG nova.scheduler.client.report [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.929 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.933 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.941 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.941 243456 INFO nova.compute.claims [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:07:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 410 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 808 KiB/s rd, 5.2 MiB/s wr, 232 op/s
Feb 28 10:07:31 compute-0 nova_compute[243452]: 2026-02-28 10:07:31.956 243456 INFO nova.scheduler.client.report [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Deleted allocations for instance 60dcb9fa-f7b6-415d-86e5-d423d4613d6c
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.034 243456 DEBUG oslo_concurrency.lockutils [None req-868dd7fa-892d-4a79-8e78-0e20830e05e5 c9a7366cce344abcb7310041ed02610a 5df107d99f104138b864f28cf3b749ad - - default default] Lock "60dcb9fa-f7b6-415d-86e5-d423d4613d6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.055 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 2c8189d8-e4a5-412d-bd69-b690e34b8f4c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.056 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273252.054937, 2c8189d8-e4a5-412d-bd69-b690e34b8f4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.056 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] VM Resumed (Lifecycle Event)
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.058 243456 DEBUG nova.compute.manager [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.058 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.062 243456 INFO nova.virt.libvirt.driver [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance spawned successfully.
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.062 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.079 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.085 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.088 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.088 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.089 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.089 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.090 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.090 243456 DEBUG nova.virt.libvirt.driver [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.122 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.123 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273252.0557544, 2c8189d8-e4a5-412d-bd69-b690e34b8f4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.123 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] VM Started (Lifecycle Event)
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.143 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.176 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.179 243456 DEBUG nova.compute.manager [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.183 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.220 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.253 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2372619909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:32 compute-0 ceph-mon[76304]: pgmap v1179: 305 pgs: 305 active+clean; 410 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 808 KiB/s rd, 5.2 MiB/s wr, 232 op/s
Feb 28 10:07:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095862296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.685 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.692 243456 DEBUG nova.compute.provider_tree [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.715 243456 DEBUG nova.scheduler.client.report [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.745 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.747 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.749 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.750 243456 DEBUG nova.objects.instance [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.818 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.819 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.839 243456 DEBUG oslo_concurrency.lockutils [None req-2296d291-da0d-4971-9006-bc7c12a12d62 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.841 243456 INFO nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.863 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:07:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.960 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.963 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.963 243456 INFO nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Creating image(s)
Feb 28 10:07:32 compute-0 nova_compute[243452]: 2026-02-28 10:07:32.993 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.029 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.059 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.065 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.135 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.137 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.138 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.138 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.163 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.168 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 07aecf45-f323-42b5-850c-c413cb6d42da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.206 243456 DEBUG nova.policy [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '606300b6675944f6a558effb03c9be57', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96e86ed701304d78bdf80efa568a7706', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:07:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3095862296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.517 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.518 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.518 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.518 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.519 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.520 243456 INFO nova.compute.manager [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Terminating instance
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.521 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "refresh_cache-2c8189d8-e4a5-412d-bd69-b690e34b8f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.521 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquired lock "refresh_cache-2c8189d8-e4a5-412d-bd69-b690e34b8f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.521 243456 DEBUG nova.network.neutron [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.621 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 07aecf45-f323-42b5-850c-c413cb6d42da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.661 243456 DEBUG nova.network.neutron [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.712 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] resizing rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.812 243456 DEBUG nova.objects.instance [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lazy-loading 'migration_context' on Instance uuid 07aecf45-f323-42b5-850c-c413cb6d42da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.829 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.830 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Ensure instance console log exists: /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.830 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.831 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:33 compute-0 nova_compute[243452]: 2026-02-28 10:07:33.831 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 336 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 6.1 MiB/s wr, 284 op/s
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.098 243456 DEBUG nova.network.neutron [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.115 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Releasing lock "refresh_cache-2c8189d8-e4a5-412d-bd69-b690e34b8f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.115 243456 DEBUG nova.compute.manager [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.141 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Successfully created port: 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:07:34 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Feb 28 10:07:34 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Consumed 2.524s CPU time.
Feb 28 10:07:34 compute-0 systemd-machined[209480]: Machine qemu-52-instance-0000002e terminated.
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.338 243456 INFO nova.virt.libvirt.driver [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance destroyed successfully.
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.339 243456 DEBUG nova.objects.instance [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'resources' on Instance uuid 2c8189d8-e4a5-412d-bd69-b690e34b8f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:34 compute-0 ceph-mon[76304]: pgmap v1180: 305 pgs: 305 active+clean; 336 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 6.1 MiB/s wr, 284 op/s
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.509 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.643 243456 INFO nova.virt.libvirt.driver [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deleting instance files /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_del
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.644 243456 INFO nova.virt.libvirt.driver [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deletion of /var/lib/nova/instances/2c8189d8-e4a5-412d-bd69-b690e34b8f4c_del complete
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.712 243456 INFO nova.compute.manager [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.713 243456 DEBUG oslo.service.loopingcall [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.713 243456 DEBUG nova.compute.manager [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.713 243456 DEBUG nova.network.neutron [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.854 243456 DEBUG nova.network.neutron [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.869 243456 DEBUG nova.network.neutron [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.889 243456 INFO nova.compute.manager [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Took 0.18 seconds to deallocate network for instance.
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.926 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:34 compute-0 nova_compute[243452]: 2026-02-28 10:07:34.927 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.009 243456 DEBUG oslo_concurrency.processutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.228 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273240.2274878, af024638-459f-45c8-b52b-7d9ec937745a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.229 243456 INFO nova.compute.manager [-] [instance: af024638-459f-45c8-b52b-7d9ec937745a] VM Stopped (Lifecycle Event)
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.258 243456 DEBUG nova.compute.manager [None req-10c3861a-1293-4d6e-ba9e-d7d675430f60 - - - - - -] [instance: af024638-459f-45c8-b52b-7d9ec937745a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906360736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.636 243456 DEBUG oslo_concurrency.processutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.642 243456 DEBUG nova.compute.provider_tree [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.665 243456 DEBUG nova.scheduler.client.report [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3906360736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.696 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.734 243456 INFO nova.scheduler.client.report [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Deleted allocations for instance 2c8189d8-e4a5-412d-bd69-b690e34b8f4c
Feb 28 10:07:35 compute-0 nova_compute[243452]: 2026-02-28 10:07:35.820 243456 DEBUG oslo_concurrency.lockutils [None req-2b519922-e910-45a2-9102-266af21731f9 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "2c8189d8-e4a5-412d-bd69-b690e34b8f4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 289 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 332 op/s
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.245 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Successfully updated port: 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.271 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.271 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquired lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.272 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.373 243456 DEBUG nova.compute.manager [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-changed-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.373 243456 DEBUG nova.compute.manager [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Refreshing instance network info cache due to event network-changed-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.374 243456 DEBUG oslo_concurrency.lockutils [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:36 compute-0 nova_compute[243452]: 2026-02-28 10:07:36.469 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:07:36 compute-0 ceph-mon[76304]: pgmap v1181: 305 pgs: 305 active+clean; 289 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 332 op/s
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.073 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.073 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.074 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "090c2598-73ab-42de-88d7-3959c3b6ebd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.075 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.075 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.077 243456 INFO nova.compute.manager [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Terminating instance
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.079 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "refresh_cache-090c2598-73ab-42de-88d7-3959c3b6ebd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.079 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquired lock "refresh_cache-090c2598-73ab-42de-88d7-3959c3b6ebd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.080 243456 DEBUG nova.network.neutron [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.300 243456 DEBUG nova.network.neutron [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.574 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273242.5735517, 9bc174be-7ebf-4dfb-a56f-b8855b0b2960 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.574 243456 INFO nova.compute.manager [-] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] VM Stopped (Lifecycle Event)
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.601 243456 DEBUG nova.compute.manager [None req-8456ef09-2825-450f-993a-703ee44cc263 - - - - - -] [instance: 9bc174be-7ebf-4dfb-a56f-b8855b0b2960] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.826 243456 DEBUG nova.network.neutron [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Updating instance_info_cache with network_info: [{"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.855 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Releasing lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.856 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Instance network_info: |[{"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.858 243456 DEBUG nova.network.neutron [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.861 243456 DEBUG oslo_concurrency.lockutils [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.861 243456 DEBUG nova.network.neutron [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Refreshing network info cache for port 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.868 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Start _get_guest_xml network_info=[{"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.875 243456 WARNING nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.881 243456 DEBUG nova.virt.libvirt.host [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.882 243456 DEBUG nova.virt.libvirt.host [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:07:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.886 243456 DEBUG nova.virt.libvirt.host [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.886 243456 DEBUG nova.virt.libvirt.host [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.887 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.887 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.888 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.889 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.889 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.890 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.890 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.891 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.891 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.892 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.892 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.893 243456 DEBUG nova.virt.hardware [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.898 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.922 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Releasing lock "refresh_cache-090c2598-73ab-42de-88d7-3959c3b6ebd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:37 compute-0 nova_compute[243452]: 2026-02-28 10:07:37.924 243456 DEBUG nova.compute.manager [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 290 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 246 op/s
Feb 28 10:07:38 compute-0 sshd-session[283251]: Invalid user sol from 45.148.10.240 port 52298
Feb 28 10:07:38 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Feb 28 10:07:38 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Consumed 12.761s CPU time.
Feb 28 10:07:38 compute-0 systemd-machined[209480]: Machine qemu-49-instance-0000002d terminated.
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.351 243456 INFO nova.virt.libvirt.driver [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Instance destroyed successfully.
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.352 243456 DEBUG nova.objects.instance [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lazy-loading 'resources' on Instance uuid 090c2598-73ab-42de-88d7-3959c3b6ebd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:38 compute-0 sshd-session[283251]: Connection closed by invalid user sol 45.148.10.240 port 52298 [preauth]
Feb 28 10:07:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831051838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.577 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.600 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.604 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.641 243456 INFO nova.virt.libvirt.driver [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Deleting instance files /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2_del
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.641 243456 INFO nova.virt.libvirt.driver [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Deletion of /var/lib/nova/instances/090c2598-73ab-42de-88d7-3959c3b6ebd2_del complete
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.710 243456 INFO nova.compute.manager [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.711 243456 DEBUG oslo.service.loopingcall [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.711 243456 DEBUG nova.compute.manager [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:38 compute-0 nova_compute[243452]: 2026-02-28 10:07:38.712 243456 DEBUG nova.network.neutron [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.021 243456 DEBUG nova.network.neutron [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.042 243456 DEBUG nova.network.neutron [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.059 243456 INFO nova.compute.manager [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Took 0.35 seconds to deallocate network for instance.
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.110 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.111 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:07:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105685426' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.154 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.156 243456 DEBUG nova.virt.libvirt.vif [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1495133543',display_name='tempest-ImagesNegativeTestJSON-server-1495133543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1495133543',id=47,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e86ed701304d78bdf80efa568a7706',ramdisk_id='',reservation_id='r-sqe0htv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1084119861',owner_user_name='tempest-ImagesNegativeTestJSON-1084119861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:07:32Z,user_data=None,user_id='606300b6675944f6a558effb03c9be57',uuid=07aecf45-f323-42b5-850c-c413cb6d42da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.156 243456 DEBUG nova.network.os_vif_util [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converting VIF {"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.157 243456 DEBUG nova.network.os_vif_util [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.158 243456 DEBUG nova.objects.instance [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07aecf45-f323-42b5-850c-c413cb6d42da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:39 compute-0 ceph-mon[76304]: pgmap v1182: 305 pgs: 305 active+clean; 290 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 246 op/s
Feb 28 10:07:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/831051838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.180 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <uuid>07aecf45-f323-42b5-850c-c413cb6d42da</uuid>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <name>instance-0000002f</name>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1495133543</nova:name>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:07:37</nova:creationTime>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:user uuid="606300b6675944f6a558effb03c9be57">tempest-ImagesNegativeTestJSON-1084119861-project-member</nova:user>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:project uuid="96e86ed701304d78bdf80efa568a7706">tempest-ImagesNegativeTestJSON-1084119861</nova:project>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <nova:port uuid="8837ac98-69a6-46dd-9c5a-65b30f0f7e5b">
Feb 28 10:07:39 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <system>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="serial">07aecf45-f323-42b5-850c-c413cb6d42da</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="uuid">07aecf45-f323-42b5-850c-c413cb6d42da</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </system>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <os>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </os>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <features>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </features>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/07aecf45-f323-42b5-850c-c413cb6d42da_disk">
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/07aecf45-f323-42b5-850c-c413cb6d42da_disk.config">
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </source>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:07:39 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:a5:e2:36"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <target dev="tap8837ac98-69"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/console.log" append="off"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <video>
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </video>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:07:39 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:07:39 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:07:39 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:07:39 compute-0 nova_compute[243452]: </domain>
Feb 28 10:07:39 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.181 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Preparing to wait for external event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.182 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.182 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.182 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.183 243456 DEBUG nova.virt.libvirt.vif [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1495133543',display_name='tempest-ImagesNegativeTestJSON-server-1495133543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1495133543',id=47,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e86ed701304d78bdf80efa568a7706',ramdisk_id='',reservation_id='r-sqe0htv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1084119861',owner_user_name='tempest-ImagesNegativeTestJSON-1084119861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:07:32Z,user_data=None,user_id='606300b6675944f6a558effb03c9be57',uuid=07aecf45-f323-42b5-850c-c413cb6d42da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.183 243456 DEBUG nova.network.os_vif_util [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converting VIF {"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.183 243456 DEBUG nova.network.os_vif_util [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.184 243456 DEBUG os_vif [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.184 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.185 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.186 243456 DEBUG oslo_concurrency.processutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.211 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8837ac98-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.212 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8837ac98-69, col_values=(('external_ids', {'iface-id': '8837ac98-69a6-46dd-9c5a-65b30f0f7e5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:e2:36', 'vm-uuid': '07aecf45-f323-42b5-850c-c413cb6d42da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:39 compute-0 NetworkManager[49805]: <info>  [1772273259.2154] manager: (tap8837ac98-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.221 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.223 243456 INFO os_vif [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69')
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.369 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.370 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.370 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] No VIF found with MAC fa:16:3e:a5:e2:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.371 243456 INFO nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Using config drive
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.397 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.594 243456 DEBUG nova.network.neutron [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Updated VIF entry in instance network info cache for port 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.594 243456 DEBUG nova.network.neutron [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Updating instance_info_cache with network_info: [{"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.616 243456 DEBUG oslo_concurrency.lockutils [req-3ef29d94-ac19-4342-b171-6d94f69b0f28 req-95469b99-ce0a-4b8c-83dc-b905f8dc1d8e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-07aecf45-f323-42b5-850c-c413cb6d42da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:07:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661717653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.747 243456 DEBUG oslo_concurrency.processutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.754 243456 DEBUG nova.compute.provider_tree [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.771 243456 DEBUG nova.scheduler.client.report [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.797 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.827 243456 INFO nova.scheduler.client.report [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Deleted allocations for instance 090c2598-73ab-42de-88d7-3959c3b6ebd2
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.888 243456 INFO nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Creating config drive at /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.892 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzw3xqj6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:39 compute-0 nova_compute[243452]: 2026-02-28 10:07:39.921 243456 DEBUG oslo_concurrency.lockutils [None req-41258362-887e-4d45-b7cc-0dfe5afeced6 4c3e34e421f447f386ae2320858c95b8 696c1808e0d64cc7b255195d851e06d1 - - default default] Lock "090c2598-73ab-42de-88d7-3959c3b6ebd2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 242 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.024 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzw3xqj6z" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.048 243456 DEBUG nova.storage.rbd_utils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] rbd image 07aecf45-f323-42b5-850c-c413cb6d42da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.051 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config 07aecf45-f323-42b5-850c-c413cb6d42da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3105685426' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:07:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3661717653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.204 243456 DEBUG oslo_concurrency.processutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config 07aecf45-f323-42b5-850c-c413cb6d42da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.206 243456 INFO nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Deleting local config drive /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da/disk.config because it was imported into RBD.
Feb 28 10:07:40 compute-0 kernel: tap8837ac98-69: entered promiscuous mode
Feb 28 10:07:40 compute-0 ovn_controller[146846]: 2026-02-28T10:07:40Z|00361|binding|INFO|Claiming lport 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b for this chassis.
Feb 28 10:07:40 compute-0 systemd-udevd[283273]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.2668] manager: (tap8837ac98-69): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_controller[146846]: 2026-02-28T10:07:40Z|00362|binding|INFO|8837ac98-69a6-46dd-9c5a-65b30f0f7e5b: Claiming fa:16:3e:a5:e2:36 10.100.0.11
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.276 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:e2:36 10.100.0.11'], port_security=['fa:16:3e:a5:e2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '07aecf45-f323-42b5-850c-c413cb6d42da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e86ed701304d78bdf80efa568a7706', 'neutron:revision_number': '2', 'neutron:security_group_ids': '95161dbc-b2af-4345-ba4c-0b67282f1093', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a317047c-8183-4240-a9a1-f58223e79e39, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.277 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b in datapath c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f bound to our chassis
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.279 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.2843] device (tap8837ac98-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.2849] device (tap8837ac98-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.291 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e34f62-db89-4164-b889-4d1e4303f8fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.292 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc03e8f56-71 in ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.294 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc03e8f56-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.294 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1edce0e2-a5b7-4b0b-b853-030f01b2ed89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[60f0e3fe-3e6a-4d9a-8988-cac29391f308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 systemd-machined[209480]: New machine qemu-53-instance-0000002f.
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.309 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[34dd3b05-1683-4851-a3b8-b7c82112ad08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_controller[146846]: 2026-02-28T10:07:40Z|00363|binding|INFO|Setting lport 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b ovn-installed in OVS
Feb 28 10:07:40 compute-0 ovn_controller[146846]: 2026-02-28T10:07:40Z|00364|binding|INFO|Setting lport 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b up in Southbound
Feb 28 10:07:40 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-0000002f.
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.322 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1cb4ee-fc88-4577-864a-ecbcc1cfc9d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.346 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b139ff-4dcd-4f5e-82ad-3c066b28d44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.3542] manager: (tapc03e8f56-70): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.353 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24219f40-8040-45ab-b784-4e1dd26b07a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.387 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[08b7d771-3cdb-4763-a7e7-368cddf6a60c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.391 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c63dd36f-a83c-4c86-bf78-b0f02c38fdba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.4134] device (tapc03e8f56-70): carrier: link connected
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.419 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0549b322-9b56-4ebc-a136-d06b3a7703c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.438 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9098320d-f34a-474c-9b7f-b86f9e471b2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc03e8f56-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:37:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476769, 'reachable_time': 26401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283465, 'error': None, 'target': 'ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fa3ede-521f-493f-8374-510175a611e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:3762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476769, 'tstamp': 476769}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283466, 'error': None, 'target': 'ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a34ed6b-fe94-415b-a40c-aa236cc8dcd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc03e8f56-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:37:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476769, 'reachable_time': 26401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283467, 'error': None, 'target': 'ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.510 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef09ce-be41-4fbf-b121-76f0d4ffacf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16a08297-16db-4b67-95c6-f9e880e0744b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc03e8f56-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc03e8f56-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 NetworkManager[49805]: <info>  [1772273260.5693] manager: (tapc03e8f56-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Feb 28 10:07:40 compute-0 kernel: tapc03e8f56-70: entered promiscuous mode
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.575 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc03e8f56-70, col_values=(('external_ids', {'iface-id': '21ae60dc-9b72-4d37-b9ad-c981b298017d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_controller[146846]: 2026-02-28T10:07:40Z|00365|binding|INFO|Releasing lport 21ae60dc-9b72-4d37-b9ad-c981b298017d from this chassis (sb_readonly=0)
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.579 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea97354-fff2-4a9e-a3ce-a6b869d54754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.586 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f.pid.haproxy
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:07:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:40.587 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'env', 'PROCESS_TAG=haproxy-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.668 243456 DEBUG nova.compute.manager [req-ad5d6826-a7ae-4a61-96d2-b8e1096ca29d req-9ece29a5-d8a8-46df-a064-b64c9ae6355b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.668 243456 DEBUG oslo_concurrency.lockutils [req-ad5d6826-a7ae-4a61-96d2-b8e1096ca29d req-9ece29a5-d8a8-46df-a064-b64c9ae6355b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.668 243456 DEBUG oslo_concurrency.lockutils [req-ad5d6826-a7ae-4a61-96d2-b8e1096ca29d req-9ece29a5-d8a8-46df-a064-b64c9ae6355b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.669 243456 DEBUG oslo_concurrency.lockutils [req-ad5d6826-a7ae-4a61-96d2-b8e1096ca29d req-9ece29a5-d8a8-46df-a064-b64c9ae6355b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:40 compute-0 nova_compute[243452]: 2026-02-28 10:07:40.669 243456 DEBUG nova.compute.manager [req-ad5d6826-a7ae-4a61-96d2-b8e1096ca29d req-9ece29a5-d8a8-46df-a064-b64c9ae6355b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Processing event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008076423855663751 of space, bias 1.0, pg target 0.24229271566991253 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024920996724142573 of space, bias 1.0, pg target 0.7476299017242772 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.043230934488344e-07 of space, bias 4.0, pg target 0.0009651877121386013 quantized to 16 (current 16)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:07:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:07:40 compute-0 podman[283513]: 2026-02-28 10:07:40.995890766 +0000 UTC m=+0.105125306 container create 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 10:07:41 compute-0 podman[283513]: 2026-02-28 10:07:40.92525103 +0000 UTC m=+0.034485580 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:07:41 compute-0 systemd[1]: Started libpod-conmon-5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65.scope.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.042 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273261.0418525, 07aecf45-f323-42b5-850c-c413cb6d42da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.042 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] VM Started (Lifecycle Event)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.044 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.049 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.052 243456 INFO nova.virt.libvirt.driver [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Instance spawned successfully.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.052 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:07:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b21312c8db1ae725c6c6564e3c92a4abc75d1135b42e548bea1b0e1095209a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.073 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:41 compute-0 podman[283513]: 2026-02-28 10:07:41.077037857 +0000 UTC m=+0.186272417 container init 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:41 compute-0 podman[283513]: 2026-02-28 10:07:41.084725854 +0000 UTC m=+0.193960384 container start 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.084 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.084 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.085 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.085 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.086 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.086 243456 DEBUG nova.virt.libvirt.driver [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:07:41 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [NOTICE]   (283558) : New worker (283560) forked
Feb 28 10:07:41 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [NOTICE]   (283558) : Loading success.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.124 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.124 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273261.0420568, 07aecf45-f323-42b5-850c-c413cb6d42da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.124 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] VM Paused (Lifecycle Event)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.166 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.171 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273261.047868, 07aecf45-f323-42b5-850c-c413cb6d42da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.171 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] VM Resumed (Lifecycle Event)
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.182 243456 INFO nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Took 8.22 seconds to spawn the instance on the hypervisor.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.182 243456 DEBUG nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.195 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.199 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.236 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.273 243456 INFO nova.compute.manager [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Took 9.65 seconds to build instance.
Feb 28 10:07:41 compute-0 ceph-mon[76304]: pgmap v1183: 305 pgs: 305 active+clean; 242 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 245 op/s
Feb 28 10:07:41 compute-0 nova_compute[243452]: 2026-02-28 10:07:41.294 243456 DEBUG oslo_concurrency.lockutils [None req-d179a118-bace-40b4-ad07-39346041636d 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 222 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 236 op/s
Feb 28 10:07:42 compute-0 ceph-mon[76304]: pgmap v1184: 305 pgs: 305 active+clean; 222 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 236 op/s
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.691 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.691 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.691 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.692 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.692 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.693 243456 INFO nova.compute.manager [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Terminating instance
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.694 243456 DEBUG nova.compute.manager [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:07:42 compute-0 kernel: tap8837ac98-69 (unregistering): left promiscuous mode
Feb 28 10:07:42 compute-0 NetworkManager[49805]: <info>  [1772273262.7452] device (tap8837ac98-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:07:42 compute-0 ovn_controller[146846]: 2026-02-28T10:07:42Z|00366|binding|INFO|Releasing lport 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b from this chassis (sb_readonly=0)
Feb 28 10:07:42 compute-0 ovn_controller[146846]: 2026-02-28T10:07:42Z|00367|binding|INFO|Setting lport 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b down in Southbound
Feb 28 10:07:42 compute-0 ovn_controller[146846]: 2026-02-28T10:07:42Z|00368|binding|INFO|Removing iface tap8837ac98-69 ovn-installed in OVS
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.750 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:42.759 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:e2:36 10.100.0.11'], port_security=['fa:16:3e:a5:e2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '07aecf45-f323-42b5-850c-c413cb6d42da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e86ed701304d78bdf80efa568a7706', 'neutron:revision_number': '4', 'neutron:security_group_ids': '95161dbc-b2af-4345-ba4c-0b67282f1093', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a317047c-8183-4240-a9a1-f58223e79e39, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:42.760 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8837ac98-69a6-46dd-9c5a-65b30f0f7e5b in datapath c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f unbound from our chassis
Feb 28 10:07:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:42.761 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:42.762 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6639c2b4-2873-4f97-88a3-954b91696a44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:42.762 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f namespace which is not needed anymore
Feb 28 10:07:42 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Feb 28 10:07:42 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Consumed 2.452s CPU time.
Feb 28 10:07:42 compute-0 systemd-machined[209480]: Machine qemu-53-instance-0000002f terminated.
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.832 243456 DEBUG nova.compute.manager [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.832 243456 DEBUG oslo_concurrency.lockutils [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.833 243456 DEBUG oslo_concurrency.lockutils [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.833 243456 DEBUG oslo_concurrency.lockutils [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.833 243456 DEBUG nova.compute.manager [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] No waiting events found dispatching network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.833 243456 WARNING nova.compute.manager [req-f800a718-6433-4ac9-87d0-03eeb09896c8 req-37e94dc9-cb60-45b3-81a5-cebb831833f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received unexpected event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b for instance with vm_state active and task_state deleting.
Feb 28 10:07:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:42 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [NOTICE]   (283558) : haproxy version is 2.8.14-c23fe91
Feb 28 10:07:42 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [NOTICE]   (283558) : path to executable is /usr/sbin/haproxy
Feb 28 10:07:42 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [WARNING]  (283558) : Exiting Master process...
Feb 28 10:07:42 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [ALERT]    (283558) : Current worker (283560) exited with code 143 (Terminated)
Feb 28 10:07:42 compute-0 neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f[283554]: [WARNING]  (283558) : All workers exited. Exiting... (0)
Feb 28 10:07:42 compute-0 systemd[1]: libpod-5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65.scope: Deactivated successfully.
Feb 28 10:07:42 compute-0 podman[283595]: 2026-02-28 10:07:42.898977587 +0000 UTC m=+0.050100139 container died 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.936 243456 INFO nova.virt.libvirt.driver [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Instance destroyed successfully.
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.937 243456 DEBUG nova.objects.instance [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lazy-loading 'resources' on Instance uuid 07aecf45-f323-42b5-850c-c413cb6d42da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65-userdata-shm.mount: Deactivated successfully.
Feb 28 10:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b21312c8db1ae725c6c6564e3c92a4abc75d1135b42e548bea1b0e1095209a6-merged.mount: Deactivated successfully.
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.956 243456 DEBUG nova.virt.libvirt.vif [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1495133543',display_name='tempest-ImagesNegativeTestJSON-server-1495133543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1495133543',id=47,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:07:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96e86ed701304d78bdf80efa568a7706',ramdisk_id='',reservation_id='r-sqe0htv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1084119861',owner_user_name='tempest-ImagesNegativeTestJSON-1084119861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:07:41Z,user_data=None,user_id='606300b6675944f6a558effb03c9be57',uuid=07aecf45-f323-42b5-850c-c413cb6d42da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.956 243456 DEBUG nova.network.os_vif_util [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converting VIF {"id": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "address": "fa:16:3e:a5:e2:36", "network": {"id": "c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1971784680-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e86ed701304d78bdf80efa568a7706", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8837ac98-69", "ovs_interfaceid": "8837ac98-69a6-46dd-9c5a-65b30f0f7e5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.957 243456 DEBUG nova.network.os_vif_util [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.958 243456 DEBUG os_vif [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.961 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8837ac98-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:07:42 compute-0 podman[283595]: 2026-02-28 10:07:42.967182595 +0000 UTC m=+0.118305157 container cleanup 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:07:42 compute-0 nova_compute[243452]: 2026-02-28 10:07:42.969 243456 INFO os_vif [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:e2:36,bridge_name='br-int',has_traffic_filtering=True,id=8837ac98-69a6-46dd-9c5a-65b30f0f7e5b,network=Network(c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8837ac98-69')
Feb 28 10:07:42 compute-0 systemd[1]: libpod-conmon-5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65.scope: Deactivated successfully.
Feb 28 10:07:43 compute-0 podman[283637]: 2026-02-28 10:07:43.037420689 +0000 UTC m=+0.053408072 container remove 5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.044 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74182a0a-75c5-43c6-a159-8b4bb64d0bd6]: (4, ('Sat Feb 28 10:07:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f (5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65)\n5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65\nSat Feb 28 10:07:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f (5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65)\n5db3418303d6bda8c11f4df942332a8e6926a42ada91a581d6ee6d9d492d4c65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c356141d-f5e3-4ae8-a7ce-f08e98c112b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.047 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc03e8f56-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:43 compute-0 kernel: tapc03e8f56-70: left promiscuous mode
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.060 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.065 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9532df94-d9a8-40d5-9fbd-e24325603221]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.085 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8a6832-c67e-43d7-9d17-fee95f6abc22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d010ea50-3f8f-451a-af88-fc9fe974c788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.105 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc5cf78-c2fc-408b-93c2-b31d210e8ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476762, 'reachable_time': 18726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283670, 'error': None, 'target': 'ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.108 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c03e8f56-74d1-4f8e-86ba-1aa23d8afe8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:07:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc03e8f56\x2d74d1\x2d4f8e\x2d86ba\x2d1aa23d8afe8f.mount: Deactivated successfully.
Feb 28 10:07:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:43.109 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3a5e66-fe29-4b02-b679-ea442de378c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.301 243456 INFO nova.virt.libvirt.driver [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Deleting instance files /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da_del
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.302 243456 INFO nova.virt.libvirt.driver [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Deletion of /var/lib/nova/instances/07aecf45-f323-42b5-850c-c413cb6d42da_del complete
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.364 243456 INFO nova.compute.manager [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.364 243456 DEBUG oslo.service.loopingcall [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.364 243456 DEBUG nova.compute.manager [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.365 243456 DEBUG nova.network.neutron [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:07:43 compute-0 nova_compute[243452]: 2026-02-28 10:07:43.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 200 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.148 243456 DEBUG nova.network.neutron [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.154 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273249.153018, c4dce6af-958c-4c5a-890b-469443cee915 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.154 243456 INFO nova.compute.manager [-] [instance: c4dce6af-958c-4c5a-890b-469443cee915] VM Stopped (Lifecycle Event)
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.176 243456 DEBUG nova.compute.manager [None req-4b732afc-4fe4-459e-a327-5ff01f598bf0 - - - - - -] [instance: c4dce6af-958c-4c5a-890b-469443cee915] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.177 243456 INFO nova.compute.manager [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Took 0.81 seconds to deallocate network for instance.
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.238 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.239 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.288 243456 DEBUG nova.compute.manager [req-eb76a7a6-0653-419c-bc63-14053bace700 req-fe68e6d6-2693-4483-8428-a6c23b70c4aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-vif-deleted-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.297 243456 DEBUG oslo_concurrency.processutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.481 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273249.4800007, 60dcb9fa-f7b6-415d-86e5-d423d4613d6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.481 243456 INFO nova.compute.manager [-] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] VM Stopped (Lifecycle Event)
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.505 243456 DEBUG nova.compute.manager [None req-2689106a-abd4-4a78-adee-7e6f9b8f308f - - - - - -] [instance: 60dcb9fa-f7b6-415d-86e5-d423d4613d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501100451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.839 243456 DEBUG oslo_concurrency.processutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.846 243456 DEBUG nova.compute.provider_tree [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.867 243456 DEBUG nova.scheduler.client.report [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.897 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.926 243456 DEBUG nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-vif-unplugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.927 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.928 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.929 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.929 243456 DEBUG nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] No waiting events found dispatching network-vif-unplugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.930 243456 WARNING nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received unexpected event network-vif-unplugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b for instance with vm_state deleted and task_state None.
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.930 243456 DEBUG nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.931 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.932 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.932 243456 DEBUG oslo_concurrency.lockutils [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.933 243456 DEBUG nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] No waiting events found dispatching network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.933 243456 WARNING nova.compute.manager [req-ca80bbc3-405d-4288-a001-11e6688bea90 req-9d79d0fd-33eb-4ec0-8378-ef6393e1eacf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Received unexpected event network-vif-plugged-8837ac98-69a6-46dd-9c5a-65b30f0f7e5b for instance with vm_state deleted and task_state None.
Feb 28 10:07:44 compute-0 nova_compute[243452]: 2026-02-28 10:07:44.945 243456 INFO nova.scheduler.client.report [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Deleted allocations for instance 07aecf45-f323-42b5-850c-c413cb6d42da
Feb 28 10:07:45 compute-0 ceph-mon[76304]: pgmap v1185: 305 pgs: 305 active+clean; 200 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Feb 28 10:07:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3501100451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:45 compute-0 nova_compute[243452]: 2026-02-28 10:07:45.022 243456 DEBUG oslo_concurrency.lockutils [None req-04296580-4355-44e0-87ab-dd47c80eecd2 606300b6675944f6a558effb03c9be57 96e86ed701304d78bdf80efa568a7706 - - default default] Lock "07aecf45-f323-42b5-850c-c413cb6d42da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:07:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4139667116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:07:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:07:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4139667116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:07:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 173 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Feb 28 10:07:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4139667116' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:07:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4139667116' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:07:47 compute-0 ceph-mon[76304]: pgmap v1186: 305 pgs: 305 active+clean; 173 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Feb 28 10:07:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 733 KiB/s wr, 185 op/s
Feb 28 10:07:47 compute-0 nova_compute[243452]: 2026-02-28 10:07:47.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:48 compute-0 nova_compute[243452]: 2026-02-28 10:07:48.596 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:49 compute-0 ceph-mon[76304]: pgmap v1187: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 733 KiB/s wr, 185 op/s
Feb 28 10:07:49 compute-0 nova_compute[243452]: 2026-02-28 10:07:49.336 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273254.335266, 2c8189d8-e4a5-412d-bd69-b690e34b8f4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:49 compute-0 nova_compute[243452]: 2026-02-28 10:07:49.337 243456 INFO nova.compute.manager [-] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] VM Stopped (Lifecycle Event)
Feb 28 10:07:49 compute-0 nova_compute[243452]: 2026-02-28 10:07:49.357 243456 DEBUG nova.compute.manager [None req-f30b5384-f83c-45f5-9264-fda9bf1c5df2 - - - - - -] [instance: 2c8189d8-e4a5-412d-bd69-b690e34b8f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 132 op/s
Feb 28 10:07:50 compute-0 ceph-mon[76304]: pgmap v1188: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 132 op/s
Feb 28 10:07:51 compute-0 nova_compute[243452]: 2026-02-28 10:07:51.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:51.100 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:07:51 compute-0 nova_compute[243452]: 2026-02-28 10:07:51.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:51.101 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:07:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 122 op/s
Feb 28 10:07:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:52 compute-0 nova_compute[243452]: 2026-02-28 10:07:52.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:53 compute-0 ceph-mon[76304]: pgmap v1189: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 122 op/s
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.349 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273258.3484297, 090c2598-73ab-42de-88d7-3959c3b6ebd2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.350 243456 INFO nova.compute.manager [-] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] VM Stopped (Lifecycle Event)
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.521 243456 DEBUG nova.compute.manager [None req-fac6c00e-40e3-4d5e-8b60-8b72921225ca - - - - - -] [instance: 090c2598-73ab-42de-88d7-3959c3b6ebd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.598 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.789 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.789 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.806 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.889 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.890 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.896 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:07:53 compute-0 nova_compute[243452]: 2026-02-28 10:07:53.897 243456 INFO nova.compute.claims [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:07:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 106 op/s
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.050 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:07:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262071516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.582 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.588 243456 DEBUG nova.compute.provider_tree [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.614 243456 DEBUG nova.scheduler.client.report [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.636 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.637 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.681 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.681 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.703 243456 INFO nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.721 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.823 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.825 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.826 243456 INFO nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Creating image(s)
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.858 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.893 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:54 compute-0 nova_compute[243452]: 2026-02-28 10:07:54.929 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.317 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:55 compute-0 ceph-mon[76304]: pgmap v1190: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 106 op/s
Feb 28 10:07:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1262071516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.351 243456 DEBUG nova.policy [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb52857f37ae4c42815767488316da21', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d6b720bb7334b139fc12e9faa051906', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.400 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.401 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.402 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.403 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.432 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.437 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:07:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 69 op/s
Feb 28 10:07:55 compute-0 nova_compute[243452]: 2026-02-28 10:07:55.982 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.067 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] resizing rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.174 243456 DEBUG nova.objects.instance [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lazy-loading 'migration_context' on Instance uuid bbaf0344-f1d3-4629-b6ff-3395549aa84b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.198 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.199 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Ensure instance console log exists: /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.200 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.200 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.201 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:56 compute-0 ceph-mon[76304]: pgmap v1191: 305 pgs: 305 active+clean; 153 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 69 op/s
Feb 28 10:07:56 compute-0 nova_compute[243452]: 2026-02-28 10:07:56.479 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Successfully created port: 2096b485-4121-49ca-b440-f2d4580fb1cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:57.103 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:57.845 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:07:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:07:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:07:57 compute-0 nova_compute[243452]: 2026-02-28 10:07:57.934 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273262.9333582, 07aecf45-f323-42b5-850c-c413cb6d42da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:07:57 compute-0 nova_compute[243452]: 2026-02-28 10:07:57.934 243456 INFO nova.compute.manager [-] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] VM Stopped (Lifecycle Event)
Feb 28 10:07:57 compute-0 nova_compute[243452]: 2026-02-28 10:07:57.959 243456 DEBUG nova.compute.manager [None req-44c4285f-4270-484c-a2c2-c77bcacb2806 - - - - - -] [instance: 07aecf45-f323-42b5-850c-c413cb6d42da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:07:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 177 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 747 KiB/s wr, 26 op/s
Feb 28 10:07:57 compute-0 nova_compute[243452]: 2026-02-28 10:07:57.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:58 compute-0 podman[283883]: 2026-02-28 10:07:58.118055121 +0000 UTC m=+0.051915420 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:07:58 compute-0 podman[283882]: 2026-02-28 10:07:58.17561278 +0000 UTC m=+0.110798636 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Feb 28 10:07:58 compute-0 nova_compute[243452]: 2026-02-28 10:07:58.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:07:59 compute-0 ceph-mon[76304]: pgmap v1192: 305 pgs: 305 active+clean; 177 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 747 KiB/s wr, 26 op/s
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.780 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Successfully updated port: 2096b485-4121-49ca-b440-f2d4580fb1cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.796 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.796 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquired lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.796 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.904 243456 DEBUG nova.compute.manager [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-changed-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.905 243456 DEBUG nova.compute.manager [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Refreshing instance network info cache due to event network-changed-2096b485-4121-49ca-b440-f2d4580fb1cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:07:59 compute-0 nova_compute[243452]: 2026-02-28 10:07:59.905 243456 DEBUG oslo_concurrency.lockutils [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:07:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 188 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.060 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.541 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.542 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.569 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.671 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.672 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.684 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.686 243456 INFO nova.compute.claims [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:08:00 compute-0 nova_compute[243452]: 2026-02-28 10:08:00.830 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:01 compute-0 ceph-mon[76304]: pgmap v1193: 305 pgs: 305 active+clean; 188 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Feb 28 10:08:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170975663' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.368 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.374 243456 DEBUG nova.compute.provider_tree [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.399 243456 DEBUG nova.scheduler.client.report [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.433 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.434 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.495 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.495 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.519 243456 INFO nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.544 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.645 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.647 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.648 243456 INFO nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Creating image(s)
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.674 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.700 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.727 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.731 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.829 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.831 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.832 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.833 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.870 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.875 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.907 243456 DEBUG nova.network.neutron [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updating instance_info_cache with network_info: [{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.950 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Releasing lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.951 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Instance network_info: |[{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.951 243456 DEBUG oslo_concurrency.lockutils [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.952 243456 DEBUG nova.network.neutron [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Refreshing network info cache for port 2096b485-4121-49ca-b440-f2d4580fb1cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.954 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Start _get_guest_xml network_info=[{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.962 243456 DEBUG nova.policy [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '35aa1fe862a2437dbcc12fc7b0acbf91', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.965 243456 WARNING nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:08:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 200 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.976 243456 DEBUG nova.virt.libvirt.host [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.977 243456 DEBUG nova.virt.libvirt.host [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.980 243456 DEBUG nova.virt.libvirt.host [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.981 243456 DEBUG nova.virt.libvirt.host [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.981 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.982 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.982 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.983 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.983 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.983 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.984 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.984 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.984 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.984 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.985 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.985 243456 DEBUG nova.virt.hardware [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:08:01 compute-0 nova_compute[243452]: 2026-02-28 10:08:01.989 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4170975663' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.213746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282213825, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2013, "num_deletes": 506, "total_data_size": 2529276, "memory_usage": 2581792, "flush_reason": "Manual Compaction"}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282228046, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1848746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23882, "largest_seqno": 25894, "table_properties": {"data_size": 1841410, "index_size": 3578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21622, "raw_average_key_size": 20, "raw_value_size": 1823436, "raw_average_value_size": 1712, "num_data_blocks": 160, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273138, "oldest_key_time": 1772273138, "file_creation_time": 1772273282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14361 microseconds, and 4123 cpu microseconds.
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.228124) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1848746 bytes OK
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.228145) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.232864) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.232883) EVENT_LOG_v1 {"time_micros": 1772273282232878, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.232908) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2519575, prev total WAL file size 2519575, number of live WAL files 2.
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.233985) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1805KB)], [56(9309KB)]
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282234090, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11381775, "oldest_snapshot_seqno": -1}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4895 keys, 7067830 bytes, temperature: kUnknown
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282353373, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7067830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7035811, "index_size": 18661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 122701, "raw_average_key_size": 25, "raw_value_size": 6948404, "raw_average_value_size": 1419, "num_data_blocks": 765, "num_entries": 4895, "num_filter_entries": 4895, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.353642) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7067830 bytes
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.355746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.4 rd, 59.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.1 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(10.0) write-amplify(3.8) OK, records in: 5874, records dropped: 979 output_compression: NoCompression
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.355812) EVENT_LOG_v1 {"time_micros": 1772273282355759, "job": 30, "event": "compaction_finished", "compaction_time_micros": 119362, "compaction_time_cpu_micros": 23093, "output_level": 6, "num_output_files": 1, "total_output_size": 7067830, "num_input_records": 5874, "num_output_records": 4895, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282356332, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273282359272, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.233846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.359396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.359405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.359408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.359411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:08:02.359414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.520 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.586 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] resizing rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:08:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469178647' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.674 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.696 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.700 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.799 243456 DEBUG nova.objects.instance [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'migration_context' on Instance uuid 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.817 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.818 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Ensure instance console log exists: /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.818 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.819 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.819 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:02 compute-0 nova_compute[243452]: 2026-02-28 10:08:02.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.223 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully created port: 2ef6c454-23c6-4a31-be49-e69f506de5bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:03 compute-0 ceph-mon[76304]: pgmap v1194: 305 pgs: 305 active+clean; 200 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:08:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/469178647' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221648689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.339 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.341 243456 DEBUG nova.virt.libvirt.vif [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:07:54Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.342 243456 DEBUG nova.network.os_vif_util [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.344 243456 DEBUG nova.network.os_vif_util [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.345 243456 DEBUG nova.objects.instance [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lazy-loading 'pci_devices' on Instance uuid bbaf0344-f1d3-4629-b6ff-3395549aa84b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.360 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <uuid>bbaf0344-f1d3-4629-b6ff-3395549aa84b</uuid>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <name>instance-00000030</name>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:name>tempest-AttachInterfacesV270Test-server-1968508282</nova:name>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:08:01</nova:creationTime>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:user uuid="eb52857f37ae4c42815767488316da21">tempest-AttachInterfacesV270Test-1509801759-project-member</nova:user>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:project uuid="9d6b720bb7334b139fc12e9faa051906">tempest-AttachInterfacesV270Test-1509801759</nova:project>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <nova:port uuid="2096b485-4121-49ca-b440-f2d4580fb1cf">
Feb 28 10:08:03 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <system>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="serial">bbaf0344-f1d3-4629-b6ff-3395549aa84b</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="uuid">bbaf0344-f1d3-4629-b6ff-3395549aa84b</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </system>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <os>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </os>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <features>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </features>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk">
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config">
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:14:19:51"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <target dev="tap2096b485-41"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/console.log" append="off"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <video>
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </video>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:08:03 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:08:03 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:08:03 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:08:03 compute-0 nova_compute[243452]: </domain>
Feb 28 10:08:03 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.362 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Preparing to wait for external event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.362 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.362 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.363 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.364 243456 DEBUG nova.virt.libvirt.vif [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:07:54Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.365 243456 DEBUG nova.network.os_vif_util [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.366 243456 DEBUG nova.network.os_vif_util [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.366 243456 DEBUG os_vif [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.367 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.368 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.368 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.372 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2096b485-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.373 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2096b485-41, col_values=(('external_ids', {'iface-id': '2096b485-4121-49ca-b440-f2d4580fb1cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:19:51', 'vm-uuid': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 NetworkManager[49805]: <info>  [1772273283.3767] manager: (tap2096b485-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.382 243456 INFO os_vif [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41')
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.458 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.459 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.460 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No VIF found with MAC fa:16:3e:14:19:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.461 243456 INFO nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Using config drive
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.488 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:03 compute-0 nova_compute[243452]: 2026-02-28 10:08:03.601 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 203 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Feb 28 10:08:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1221648689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:04 compute-0 ceph-mon[76304]: pgmap v1195: 305 pgs: 305 active+clean; 203 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.587 243456 INFO nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Creating config drive at /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.594 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp91t_ls5s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.722 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp91t_ls5s" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.766 243456 DEBUG nova.storage.rbd_utils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] rbd image bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.772 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.806 243456 DEBUG nova.network.neutron [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updated VIF entry in instance network info cache for port 2096b485-4121-49ca-b440-f2d4580fb1cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.808 243456 DEBUG nova.network.neutron [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updating instance_info_cache with network_info: [{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.834 243456 DEBUG oslo_concurrency.lockutils [req-9b3c1c6b-7976-442e-9c6a-c56537bac4b7 req-1be3bba8-4c76-4436-8bbf-d59f35872b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.912 243456 DEBUG oslo_concurrency.processutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config bbaf0344-f1d3-4629-b6ff-3395549aa84b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.913 243456 INFO nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Deleting local config drive /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b/disk.config because it was imported into RBD.
Feb 28 10:08:04 compute-0 kernel: tap2096b485-41: entered promiscuous mode
Feb 28 10:08:04 compute-0 NetworkManager[49805]: <info>  [1772273284.9813] manager: (tap2096b485-41): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:04 compute-0 ovn_controller[146846]: 2026-02-28T10:08:04Z|00369|binding|INFO|Claiming lport 2096b485-4121-49ca-b440-f2d4580fb1cf for this chassis.
Feb 28 10:08:04 compute-0 ovn_controller[146846]: 2026-02-28T10:08:04Z|00370|binding|INFO|2096b485-4121-49ca-b440-f2d4580fb1cf: Claiming fa:16:3e:14:19:51 10.100.0.4
Feb 28 10:08:04 compute-0 nova_compute[243452]: 2026-02-28 10:08:04.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.020 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:19:51 10.100.0.4'], port_security=['fa:16:3e:14:19:51 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d6b720bb7334b139fc12e9faa051906', 'neutron:revision_number': '2', 'neutron:security_group_ids': '183a3227-9a4c-4328-a977-3287119f7f1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18b263e6-2e1e-46c4-8c86-ebc43f4711ea, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2096b485-4121-49ca-b440-f2d4580fb1cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.022 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2096b485-4121-49ca-b440-f2d4580fb1cf in datapath e675fba7-c78a-4b4b-bd7f-fc505487edea bound to our chassis
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.025 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e675fba7-c78a-4b4b-bd7f-fc505487edea
Feb 28 10:08:05 compute-0 systemd-udevd[284248]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 systemd-machined[209480]: New machine qemu-54-instance-00000030.
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.038 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f99acb38-5b39-4d00-9afb-8d60f69a92fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.039 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape675fba7-c1 in ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:08:05 compute-0 ovn_controller[146846]: 2026-02-28T10:08:05Z|00371|binding|INFO|Setting lport 2096b485-4121-49ca-b440-f2d4580fb1cf ovn-installed in OVS
Feb 28 10:08:05 compute-0 ovn_controller[146846]: 2026-02-28T10:08:05Z|00372|binding|INFO|Setting lport 2096b485-4121-49ca-b440-f2d4580fb1cf up in Southbound
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.042 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape675fba7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e4cb2f-8ec0-4477-aed5-8f5b4352bdc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abbeb3dd-f885-4f13-bf8c-028a6fd14ca2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000030.
Feb 28 10:08:05 compute-0 NetworkManager[49805]: <info>  [1772273285.0501] device (tap2096b485-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:05 compute-0 NetworkManager[49805]: <info>  [1772273285.0528] device (tap2096b485-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.059 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e70ac283-82ba-46ee-b4b3-89487e5fab4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2c4eff-1455-4fc2-83bc-98876e9fde0a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.119 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17465915-86bd-48c6-aa9a-c3c16dc8b4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3081a2-0633-4e5a-b3e4-f85d34f1e129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 NetworkManager[49805]: <info>  [1772273285.1272] manager: (tape675fba7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/167)
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.160 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4f9925-9019-429a-92ea-11ad65117651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.165 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0eeb06c6-8dac-4534-9d41-540efdf9cb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 NetworkManager[49805]: <info>  [1772273285.1931] device (tape675fba7-c0): carrier: link connected
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.198 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97ea1ee9-a216-4b5f-a3c3-5c5be47cbd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.219 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf9e332-9049-48d3-bd28-18a4948980d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape675fba7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:a8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479247, 'reachable_time': 42401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284282, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e64c2a1-309e-4171-9a3c-7f513e21ea6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:a8ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479247, 'tstamp': 479247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284283, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31fb40fb-3d56-44e4-8975-b8644d9afebd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape675fba7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:a8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479247, 'reachable_time': 42401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284284, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5986b0-09f7-43bb-a703-d409681304fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.317 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully created port: 91edcb1e-3191-47da-be88-2834e4a98d73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eff5ce15-b71c-4879-8642-b5172d9c50d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.361 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape675fba7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.361 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.362 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape675fba7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:05 compute-0 kernel: tape675fba7-c0: entered promiscuous mode
Feb 28 10:08:05 compute-0 NetworkManager[49805]: <info>  [1772273285.3656] manager: (tape675fba7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.365 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape675fba7-c0, col_values=(('external_ids', {'iface-id': '8ca76364-db2c-475d-b8e8-95815d1c647e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 ovn_controller[146846]: 2026-02-28T10:08:05Z|00373|binding|INFO|Releasing lport 8ca76364-db2c-475d-b8e8-95815d1c647e from this chassis (sb_readonly=0)
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.373 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e675fba7-c78a-4b4b-bd7f-fc505487edea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e675fba7-c78a-4b4b-bd7f-fc505487edea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[920644f0-079c-4511-b860-b9abca26d8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.375 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-e675fba7-c78a-4b4b-bd7f-fc505487edea
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/e675fba7-c78a-4b4b-bd7f-fc505487edea.pid.haproxy
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID e675fba7-c78a-4b4b-bd7f-fc505487edea
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:08:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:05.375 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'env', 'PROCESS_TAG=haproxy-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e675fba7-c78a-4b4b-bd7f-fc505487edea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.505 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273285.5040329, bbaf0344-f1d3-4629-b6ff-3395549aa84b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.506 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] VM Started (Lifecycle Event)
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.527 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.533 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273285.5045989, bbaf0344-f1d3-4629-b6ff-3395549aa84b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] VM Paused (Lifecycle Event)
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.561 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.566 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:05 compute-0 nova_compute[243452]: 2026-02-28 10:08:05.591 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:05 compute-0 podman[284358]: 2026-02-28 10:08:05.753630791 +0000 UTC m=+0.061051966 container create 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:08:05 compute-0 systemd[1]: Started libpod-conmon-47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb.scope.
Feb 28 10:08:05 compute-0 podman[284358]: 2026-02-28 10:08:05.722778404 +0000 UTC m=+0.030199609 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:08:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61722e19ae9b0b609afb647ff08cf4d136af5438e6933a350d3eae212abb430b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:05 compute-0 podman[284358]: 2026-02-28 10:08:05.851648754 +0000 UTC m=+0.159069939 container init 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:08:05 compute-0 podman[284358]: 2026-02-28 10:08:05.860225595 +0000 UTC m=+0.167646750 container start 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:08:05 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [NOTICE]   (284377) : New worker (284379) forked
Feb 28 10:08:05 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [NOTICE]   (284377) : Loading success.
Feb 28 10:08:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 234 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.1 MiB/s wr, 50 op/s
Feb 28 10:08:07 compute-0 ceph-mon[76304]: pgmap v1196: 305 pgs: 305 active+clean; 234 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.1 MiB/s wr, 50 op/s
Feb 28 10:08:07 compute-0 nova_compute[243452]: 2026-02-28 10:08:07.105 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully created port: c749c499-3ec6-4a11-894e-ec79cfbd8829 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:07 compute-0 nova_compute[243452]: 2026-02-28 10:08:07.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.268 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully updated port: 2ef6c454-23c6-4a31-be49-e69f506de5bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:08 compute-0 sudo[284388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.672 243456 DEBUG nova.compute.manager [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-changed-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.674 243456 DEBUG nova.compute.manager [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing instance network info cache due to event network-changed-2ef6c454-23c6-4a31-be49-e69f506de5bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.675 243456 DEBUG oslo_concurrency.lockutils [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.675 243456 DEBUG oslo_concurrency.lockutils [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.676 243456 DEBUG nova.network.neutron [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing network info cache for port 2ef6c454-23c6-4a31-be49-e69f506de5bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:08 compute-0 sudo[284388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:08 compute-0 sudo[284388]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:08 compute-0 sudo[284413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 10:08:08 compute-0 sudo[284413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:08 compute-0 nova_compute[243452]: 2026-02-28 10:08:08.931 243456 DEBUG nova.network.neutron [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:09 compute-0 ceph-mon[76304]: pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Feb 28 10:08:09 compute-0 podman[284481]: 2026-02-28 10:08:09.231105884 +0000 UTC m=+0.097614642 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:08:09 compute-0 nova_compute[243452]: 2026-02-28 10:08:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:09 compute-0 nova_compute[243452]: 2026-02-28 10:08:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:09 compute-0 nova_compute[243452]: 2026-02-28 10:08:09.341 243456 DEBUG nova.network.neutron [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:09 compute-0 nova_compute[243452]: 2026-02-28 10:08:09.357 243456 DEBUG oslo_concurrency.lockutils [req-c1f51cf0-22dd-437e-b992-c6ee80bb269f req-8feb98ae-ec87-48fa-a2fe-8e57cf5307ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:09 compute-0 podman[284481]: 2026-02-28 10:08:09.388788943 +0000 UTC m=+0.255297721 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:08:09 compute-0 nova_compute[243452]: 2026-02-28 10:08:09.454 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully updated port: 91edcb1e-3191-47da-be88-2834e4a98d73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Feb 28 10:08:10 compute-0 ceph-mon[76304]: pgmap v1198: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Feb 28 10:08:10 compute-0 sudo[284413]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:08:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:08:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:10 compute-0 sudo[284661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:08:10 compute-0 sudo[284661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:10 compute-0 sudo[284661]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:10 compute-0 sudo[284686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:08:10 compute-0 sudo[284686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:10 compute-0 nova_compute[243452]: 2026-02-28 10:08:10.832 243456 DEBUG nova.compute.manager [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-changed-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:10 compute-0 nova_compute[243452]: 2026-02-28 10:08:10.833 243456 DEBUG nova.compute.manager [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing instance network info cache due to event network-changed-91edcb1e-3191-47da-be88-2834e4a98d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:10 compute-0 nova_compute[243452]: 2026-02-28 10:08:10.833 243456 DEBUG oslo_concurrency.lockutils [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:10 compute-0 nova_compute[243452]: 2026-02-28 10:08:10.834 243456 DEBUG oslo_concurrency.lockutils [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:10 compute-0 nova_compute[243452]: 2026-02-28 10:08:10.834 243456 DEBUG nova.network.neutron [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing network info cache for port 91edcb1e-3191-47da-be88-2834e4a98d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.043 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Successfully updated port: c749c499-3ec6-4a11-894e-ec79cfbd8829 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.051 243456 DEBUG nova.network.neutron [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.062 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:11 compute-0 sudo[284686]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:08:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.254 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.255 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.272 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:08:11 compute-0 sudo[284742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:08:11 compute-0 sudo[284742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:11 compute-0 sudo[284742]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.347 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.348 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.358 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.358 243456 INFO nova.compute.claims [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:08:11 compute-0 sudo[284767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:08:11 compute-0 sudo[284767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.413 243456 DEBUG nova.network.neutron [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.437 243456 DEBUG oslo_concurrency.lockutils [req-fcae85db-7e17-40e8-ab4a-e0d43eb77b55 req-56ff2eac-fc8a-426e-9975-b6d4300783f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.438 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquired lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.438 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.507 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:08:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:08:11 compute-0 nova_compute[243452]: 2026-02-28 10:08:11.604 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:11 compute-0 podman[284805]: 2026-02-28 10:08:11.637534258 +0000 UTC m=+0.024670574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:11 compute-0 podman[284805]: 2026-02-28 10:08:11.825647092 +0000 UTC m=+0.212783358 container create 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:08:11 compute-0 systemd[1]: Started libpod-conmon-9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa.scope.
Feb 28 10:08:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 37 op/s
Feb 28 10:08:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578004208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.102 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.109 243456 DEBUG nova.compute.provider_tree [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.129 243456 DEBUG nova.scheduler.client.report [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:12 compute-0 podman[284805]: 2026-02-28 10:08:12.139425874 +0000 UTC m=+0.526562150 container init 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:12 compute-0 podman[284805]: 2026-02-28 10:08:12.149705283 +0000 UTC m=+0.536841539 container start 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:08:12 compute-0 brave_johnson[284839]: 167 167
Feb 28 10:08:12 compute-0 systemd[1]: libpod-9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa.scope: Deactivated successfully.
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.161 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.162 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.243 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.243 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:08:12 compute-0 podman[284805]: 2026-02-28 10:08:12.258601701 +0000 UTC m=+0.645737967 container attach 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:08:12 compute-0 podman[284805]: 2026-02-28 10:08:12.259338192 +0000 UTC m=+0.646474458 container died 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.262 243456 INFO nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.283 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.380 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.381 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.381 243456 INFO nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Creating image(s)
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.406 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.436 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.458 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.462 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.516 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.517 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.518 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.518 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f98f2e8e0c9351ee320cef3f38ecd85ee9b58628dff5203d6b2d1974670d0b4-merged.mount: Deactivated successfully.
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.546 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:12 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.551 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:12 compute-0 ceph-mon[76304]: pgmap v1199: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 37 op/s
Feb 28 10:08:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/578004208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:12 compute-0 podman[284805]: 2026-02-28 10:08:12.814243727 +0000 UTC m=+1.201380033 container remove 9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_johnson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:08:12 compute-0 systemd[1]: libpod-conmon-9a26bf260fe02c20893438ea66561bdb6ea56fc372302b93e70b03c619da40aa.scope: Deactivated successfully.
Feb 28 10:08:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:12.999 243456 DEBUG nova.policy [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5db9d3a48c914c5ea9326b6a8c8c0f36', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b8b0d675b3747fd80cb2186e41d2ebf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:13.007913336 +0000 UTC m=+0.089944187 container create 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:12.955167165 +0000 UTC m=+0.037197986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:13 compute-0 systemd[1]: Started libpod-conmon-66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678.scope.
Feb 28 10:08:13 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:13.184597568 +0000 UTC m=+0.266628459 container init 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:13.195155775 +0000 UTC m=+0.277186616 container start 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.233 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.307 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] resizing rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:13.36125796 +0000 UTC m=+0.443288901 container attach 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.369 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.394 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.394 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.395 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.395 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.395 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.641 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.649 243456 DEBUG nova.objects.instance [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lazy-loading 'migration_context' on Instance uuid bbbba0d8-fff9-4f59-ab31-54ff03b71390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.669 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.670 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Ensure instance console log exists: /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.670 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.671 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.671 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:13 compute-0 keen_satoshi[284975]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:08:13 compute-0 keen_satoshi[284975]: --> All data devices are unavailable
Feb 28 10:08:13 compute-0 systemd[1]: libpod-66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678.scope: Deactivated successfully.
Feb 28 10:08:13 compute-0 podman[284959]: 2026-02-28 10:08:13.755600544 +0000 UTC m=+0.837631385 container died 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca90fcf074fffffc32ebd67d203d9c690d32f285c98be7b7bc8f7fb57484ece-merged.mount: Deactivated successfully.
Feb 28 10:08:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152550668' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:13 compute-0 nova_compute[243452]: 2026-02-28 10:08:13.971 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.065 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.065 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:08:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1152550668' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:14 compute-0 podman[284959]: 2026-02-28 10:08:14.084261255 +0000 UTC m=+1.166292076 container remove 66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:08:14 compute-0 systemd[1]: libpod-conmon-66f82a2c839f4ff7c4e0cee3d83ec66f8229f69a543c5fa3a593c48fd7a39678.scope: Deactivated successfully.
Feb 28 10:08:14 compute-0 sudo[284767]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:14 compute-0 sudo[285104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:08:14 compute-0 sudo[285104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:14 compute-0 sudo[285104]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.251 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4099MB free_disk=59.946254786103964GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:14 compute-0 sudo[285129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:08:14 compute-0 sudo[285129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bbaf0344-f1d3-4629-b6ff-3395549aa84b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bbbba0d8-fff9-4f59-ab31-54ff03b71390 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.317 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.374 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.561404746 +0000 UTC m=+0.074963757 container create 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.508675705 +0000 UTC m=+0.022234696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:14 compute-0 systemd[1]: Started libpod-conmon-827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a.scope.
Feb 28 10:08:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.739363343 +0000 UTC m=+0.252922334 container init 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.753552982 +0000 UTC m=+0.267111963 container start 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:08:14 compute-0 frosty_kepler[285201]: 167 167
Feb 28 10:08:14 compute-0 systemd[1]: libpod-827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a.scope: Deactivated successfully.
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.894296405 +0000 UTC m=+0.407855376 container attach 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:08:14 compute-0 podman[285167]: 2026-02-28 10:08:14.895029045 +0000 UTC m=+0.408588026 container died 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:08:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940374374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.914 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.920 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.937 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.969 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:08:14 compute-0 nova_compute[243452]: 2026-02-28 10:08:14.969 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ad1b249eeff86235a186900280158c15d255cc32fb270b7e9703f1d4858a56-merged.mount: Deactivated successfully.
Feb 28 10:08:15 compute-0 ceph-mon[76304]: pgmap v1200: 305 pgs: 305 active+clean; 246 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:08:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1940374374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:15 compute-0 podman[285167]: 2026-02-28 10:08:15.363457251 +0000 UTC m=+0.877016222 container remove 827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:08:15 compute-0 systemd[1]: libpod-conmon-827cb3ddf8bb09365d307cef6748d80110c1bec6a259228c08cee657b5143c7a.scope: Deactivated successfully.
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.515 243456 DEBUG nova.compute.manager [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-changed-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.516 243456 DEBUG nova.compute.manager [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing instance network info cache due to event network-changed-c749c499-3ec6-4a11-894e-ec79cfbd8829. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.517 243456 DEBUG oslo_concurrency.lockutils [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:15 compute-0 podman[285230]: 2026-02-28 10:08:15.556182764 +0000 UTC m=+0.046558108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.785 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Successfully created port: 8feba913-868e-47d7-a6c6-50d33e14a69d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:15 compute-0 podman[285230]: 2026-02-28 10:08:15.79744143 +0000 UTC m=+0.287816704 container create a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.916 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.917 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:15 compute-0 nova_compute[243452]: 2026-02-28 10:08:15.918 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:08:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 276 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.5 MiB/s wr, 46 op/s
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.018 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:08:16 compute-0 systemd[1]: Started libpod-conmon-a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3.scope.
Feb 28 10:08:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707727d25112fa66f8c60330f28a4f0741cc210f58778480c0276925964054eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707727d25112fa66f8c60330f28a4f0741cc210f58778480c0276925964054eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707727d25112fa66f8c60330f28a4f0741cc210f58778480c0276925964054eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707727d25112fa66f8c60330f28a4f0741cc210f58778480c0276925964054eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:16 compute-0 podman[285230]: 2026-02-28 10:08:16.176749033 +0000 UTC m=+0.667124367 container init a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:08:16 compute-0 podman[285230]: 2026-02-28 10:08:16.187526886 +0000 UTC m=+0.677902170 container start a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:08:16 compute-0 podman[285230]: 2026-02-28 10:08:16.237996133 +0000 UTC m=+0.728371477 container attach a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:08:16 compute-0 ceph-mon[76304]: pgmap v1201: 305 pgs: 305 active+clean; 276 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.5 MiB/s wr, 46 op/s
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:08:16 compute-0 boring_pike[285246]: {
Feb 28 10:08:16 compute-0 boring_pike[285246]:     "0": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:         {
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "devices": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "/dev/loop3"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             ],
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_name": "ceph_lv0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_size": "21470642176",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "name": "ceph_lv0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "tags": {
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_name": "ceph",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.crush_device_class": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.encrypted": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.objectstore": "bluestore",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_id": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.vdo": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.with_tpm": "0"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             },
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "vg_name": "ceph_vg0"
Feb 28 10:08:16 compute-0 boring_pike[285246]:         }
Feb 28 10:08:16 compute-0 boring_pike[285246]:     ],
Feb 28 10:08:16 compute-0 boring_pike[285246]:     "1": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:         {
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "devices": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "/dev/loop4"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             ],
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_name": "ceph_lv1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_size": "21470642176",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "name": "ceph_lv1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "tags": {
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_name": "ceph",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.crush_device_class": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.encrypted": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.objectstore": "bluestore",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_id": "1",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.vdo": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.with_tpm": "0"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             },
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "vg_name": "ceph_vg1"
Feb 28 10:08:16 compute-0 boring_pike[285246]:         }
Feb 28 10:08:16 compute-0 boring_pike[285246]:     ],
Feb 28 10:08:16 compute-0 boring_pike[285246]:     "2": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:         {
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "devices": [
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "/dev/loop5"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             ],
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_name": "ceph_lv2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_size": "21470642176",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "name": "ceph_lv2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "tags": {
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.cluster_name": "ceph",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.crush_device_class": "",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.encrypted": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.objectstore": "bluestore",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osd_id": "2",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.vdo": "0",
Feb 28 10:08:16 compute-0 boring_pike[285246]:                 "ceph.with_tpm": "0"
Feb 28 10:08:16 compute-0 boring_pike[285246]:             },
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "type": "block",
Feb 28 10:08:16 compute-0 boring_pike[285246]:             "vg_name": "ceph_vg2"
Feb 28 10:08:16 compute-0 boring_pike[285246]:         }
Feb 28 10:08:16 compute-0 boring_pike[285246]:     ]
Feb 28 10:08:16 compute-0 boring_pike[285246]: }
Feb 28 10:08:16 compute-0 systemd[1]: libpod-a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3.scope: Deactivated successfully.
Feb 28 10:08:16 compute-0 conmon[285246]: conmon a9774e97d8dd6b8219f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3.scope/container/memory.events
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.596 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Successfully updated port: 8feba913-868e-47d7-a6c6-50d33e14a69d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:16 compute-0 podman[285256]: 2026-02-28 10:08:16.607056318 +0000 UTC m=+0.036708072 container died a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.611 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.611 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquired lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.611 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-707727d25112fa66f8c60330f28a4f0741cc210f58778480c0276925964054eb-merged.mount: Deactivated successfully.
Feb 28 10:08:16 compute-0 podman[285256]: 2026-02-28 10:08:16.651305351 +0000 UTC m=+0.080957085 container remove a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pike, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:08:16 compute-0 systemd[1]: libpod-conmon-a9774e97d8dd6b8219f61c093be938636680fdabfca2d551462417f1732f90e3.scope: Deactivated successfully.
Feb 28 10:08:16 compute-0 sudo[285129]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:16 compute-0 sudo[285271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:08:16 compute-0 sudo[285271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:16 compute-0 sudo[285271]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:16 compute-0 nova_compute[243452]: 2026-02-28 10:08:16.838 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:16 compute-0 sudo[285296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:08:16 compute-0 sudo[285296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.105341173 +0000 UTC m=+0.023951804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.13016712 +0000 UTC m=+0.048777691 container create 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:08:17 compute-0 systemd[1]: Started libpod-conmon-1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c.scope.
Feb 28 10:08:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.19886604 +0000 UTC m=+0.117476621 container init 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.205265249 +0000 UTC m=+0.123875800 container start 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.208928382 +0000 UTC m=+0.127538953 container attach 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:08:17 compute-0 laughing_joliot[285350]: 167 167
Feb 28 10:08:17 compute-0 systemd[1]: libpod-1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c.scope: Deactivated successfully.
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.21347515 +0000 UTC m=+0.132085761 container died 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-91093a5bd9c227ac9d8e750888323e89baa440d7dcd2ae59a09b1e10ca35cab1-merged.mount: Deactivated successfully.
Feb 28 10:08:17 compute-0 podman[285333]: 2026-02-28 10:08:17.254986576 +0000 UTC m=+0.173597157 container remove 1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:08:17 compute-0 systemd[1]: libpod-conmon-1952eb9db88b27c7ecefa39faa52b57c45c9a1ef58a1abf0522c6ec830e2c16c.scope: Deactivated successfully.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.347 243456 DEBUG nova.network.neutron [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updating instance_info_cache with network_info: [{"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.372 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Releasing lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.373 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance network_info: |[{"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.374 243456 DEBUG oslo_concurrency.lockutils [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.374 243456 DEBUG nova.network.neutron [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Refreshing network info cache for port c749c499-3ec6-4a11-894e-ec79cfbd8829 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.384 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Start _get_guest_xml network_info=[{"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.394 243456 WARNING nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.405 243456 DEBUG nova.virt.libvirt.host [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.406 243456 DEBUG nova.virt.libvirt.host [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.412 243456 DEBUG nova.virt.libvirt.host [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.412 243456 DEBUG nova.virt.libvirt.host [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.413 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.414 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.415 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.415 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.416 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.416 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.416 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:08:17 compute-0 podman[285373]: 2026-02-28 10:08:17.417126449 +0000 UTC m=+0.051668023 container create 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.417 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.417 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.418 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.418 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.418 243456 DEBUG nova.virt.hardware [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.424 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:17 compute-0 systemd[1]: Started libpod-conmon-4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222.scope.
Feb 28 10:08:17 compute-0 podman[285373]: 2026-02-28 10:08:17.395963924 +0000 UTC m=+0.030505578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:08:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e80de419a684a958ae2a3ae5f7e4d53c1eb474b965eb5ba7b4fdfc23b76eb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e80de419a684a958ae2a3ae5f7e4d53c1eb474b965eb5ba7b4fdfc23b76eb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e80de419a684a958ae2a3ae5f7e4d53c1eb474b965eb5ba7b4fdfc23b76eb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e80de419a684a958ae2a3ae5f7e4d53c1eb474b965eb5ba7b4fdfc23b76eb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:17 compute-0 podman[285373]: 2026-02-28 10:08:17.521182941 +0000 UTC m=+0.155724545 container init 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:08:17 compute-0 podman[285373]: 2026-02-28 10:08:17.528679251 +0000 UTC m=+0.163220825 container start 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:08:17 compute-0 podman[285373]: 2026-02-28 10:08:17.544419514 +0000 UTC m=+0.178961088 container attach 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.601 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.603 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.603 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.603 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.604 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Processing event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.604 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.604 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.604 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.605 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.605 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.605 243456 WARNING nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received unexpected event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf for instance with vm_state building and task_state spawning.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.605 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-changed-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.606 243456 DEBUG nova.compute.manager [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Refreshing instance network info cache due to event network-changed-8feba913-868e-47d7-a6c6-50d33e14a69d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.606 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.606 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Instance event wait completed in 12 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.614 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273297.6132069, bbaf0344-f1d3-4629-b6ff-3395549aa84b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.614 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] VM Resumed (Lifecycle Event)
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.617 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.628 243456 INFO nova.virt.libvirt.driver [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Instance spawned successfully.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.629 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.635 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.658 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.669 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.670 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.671 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.671 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.671 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.672 243456 DEBUG nova.virt.libvirt.driver [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.697 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.727 243456 INFO nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Took 22.90 seconds to spawn the instance on the hypervisor.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.727 243456 DEBUG nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.796 243456 INFO nova.compute.manager [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Took 23.94 seconds to build instance.
Feb 28 10:08:17 compute-0 nova_compute[243452]: 2026-02-28 10:08:17.822 243456 DEBUG oslo_concurrency.lockutils [None req-89e10a6a-f7a7-4c7c-9763-98b516c7705e eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 40 op/s
Feb 28 10:08:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832396711' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.018 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.040 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.044 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/832396711' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:18 compute-0 lvm[285508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:08:18 compute-0 lvm[285508]: VG ceph_vg0 finished
Feb 28 10:08:18 compute-0 lvm[285509]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:08:18 compute-0 lvm[285509]: VG ceph_vg1 finished
Feb 28 10:08:18 compute-0 lvm[285512]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:08:18 compute-0 lvm[285512]: VG ceph_vg2 finished
Feb 28 10:08:18 compute-0 exciting_poitras[285390]: {}
Feb 28 10:08:18 compute-0 systemd[1]: libpod-4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222.scope: Deactivated successfully.
Feb 28 10:08:18 compute-0 systemd[1]: libpod-4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222.scope: Consumed 1.020s CPU time.
Feb 28 10:08:18 compute-0 podman[285373]: 2026-02-28 10:08:18.268200601 +0000 UTC m=+0.902742215 container died 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.349 243456 DEBUG nova.network.neutron [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Updating instance_info_cache with network_info: [{"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1e80de419a684a958ae2a3ae5f7e4d53c1eb474b965eb5ba7b4fdfc23b76eb4-merged.mount: Deactivated successfully.
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.374 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Releasing lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.375 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Instance network_info: |[{"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.376 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.377 243456 DEBUG nova.network.neutron [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Refreshing network info cache for port 8feba913-868e-47d7-a6c6-50d33e14a69d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.383 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Start _get_guest_xml network_info=[{"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.392 243456 WARNING nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.405 243456 DEBUG nova.virt.libvirt.host [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.410 243456 DEBUG nova.virt.libvirt.host [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.416 243456 DEBUG nova.virt.libvirt.host [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.417 243456 DEBUG nova.virt.libvirt.host [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.418 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.419 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.420 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.421 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.421 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.422 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.423 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.423 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.424 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.425 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.425 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.426 243456 DEBUG nova.virt.hardware [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.432 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:18 compute-0 podman[285373]: 2026-02-28 10:08:18.506434582 +0000 UTC m=+1.140976156 container remove 4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:08:18 compute-0 systemd[1]: libpod-conmon-4a42691e90d6bfd3d03f370a1c9b860815200e26dbc3041c90d5189426321222.scope: Deactivated successfully.
Feb 28 10:08:18 compute-0 sudo[285296]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.607 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296368364' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.649 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.650 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.651 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.652 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.653 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.654 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.655 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.656 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.656 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.657 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.658 243456 DEBUG nova.objects.instance [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.675 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <uuid>5383d7b5-e11b-47e4-8cbc-4be283dd6b16</uuid>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <name>instance-00000031</name>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestMultiNic-server-2137083594</nova:name>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:08:17</nova:creationTime>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:user uuid="35aa1fe862a2437dbcc12fc7b0acbf91">tempest-ServersTestMultiNic-116334619-project-member</nova:user>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:project uuid="30cb5e2d14fb4fb7a9d37cf231549329">tempest-ServersTestMultiNic-116334619</nova:project>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:port uuid="2ef6c454-23c6-4a31-be49-e69f506de5bc">
Feb 28 10:08:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.240" ipVersion="4"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:port uuid="91edcb1e-3191-47da-be88-2834e4a98d73">
Feb 28 10:08:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.1.37" ipVersion="4"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <nova:port uuid="c749c499-3ec6-4a11-894e-ec79cfbd8829">
Feb 28 10:08:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="serial">5383d7b5-e11b-47e4-8cbc-4be283dd6b16</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="uuid">5383d7b5-e11b-47e4-8cbc-4be283dd6b16</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk">
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config">
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:68:c0:a5"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <target dev="tap2ef6c454-23"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c2:49:b5"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <target dev="tap91edcb1e-31"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e0:e4:b8"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <target dev="tapc749c499-3e"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/console.log" append="off"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:08:18 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:08:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:08:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:08:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:08:18 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.681 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Preparing to wait for external event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.682 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.683 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.684 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.684 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Preparing to wait for external event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.685 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.685 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.686 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.686 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Preparing to wait for external event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.687 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.687 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.688 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.689 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.690 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.691 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.692 243456 DEBUG os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.693 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.694 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.698 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ef6c454-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.699 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ef6c454-23, col_values=(('external_ids', {'iface-id': '2ef6c454-23c6-4a31-be49-e69f506de5bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:c0:a5', 'vm-uuid': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 NetworkManager[49805]: <info>  [1772273298.7026] manager: (tap2ef6c454-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.709 243456 INFO os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23')
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.710 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.711 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.712 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.712 243456 DEBUG os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:18 compute-0 sudo[285566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.720 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91edcb1e-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.721 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91edcb1e-31, col_values=(('external_ids', {'iface-id': '91edcb1e-3191-47da-be88-2834e4a98d73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:49:b5', 'vm-uuid': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 NetworkManager[49805]: <info>  [1772273298.7234] manager: (tap91edcb1e-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Feb 28 10:08:18 compute-0 sudo[285566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.725 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:18 compute-0 sudo[285566]: pam_unix(sudo:session): session closed for user root
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.732 243456 INFO os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31')
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.733 243456 DEBUG nova.virt.libvirt.vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:01Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.733 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.734 243456 DEBUG nova.network.os_vif_util [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.735 243456 DEBUG os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.736 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.736 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.738 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc749c499-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.739 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc749c499-3e, col_values=(('external_ids', {'iface-id': 'c749c499-3ec6-4a11-894e-ec79cfbd8829', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:e4:b8', 'vm-uuid': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 NetworkManager[49805]: <info>  [1772273298.7414] manager: (tapc749c499-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:18 compute-0 nova_compute[243452]: 2026-02-28 10:08:18.750 243456 INFO os_vif [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e')
Feb 28 10:08:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403781743' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.032 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.074 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.079 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.132 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.133 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.133 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:68:c0:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.134 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:c2:49:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.134 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:e0:e4:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.135 243456 INFO nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Using config drive
Feb 28 10:08:19 compute-0 ceph-mon[76304]: pgmap v1202: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 40 op/s
Feb 28 10:08:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1296368364' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:08:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3403781743' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.195 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.467 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "interface-bbaf0344-f1d3-4629-b6ff-3395549aa84b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.467 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "interface-bbaf0344-f1d3-4629-b6ff-3395549aa84b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.468 243456 DEBUG nova.objects.instance [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lazy-loading 'flavor' on Instance uuid bbaf0344-f1d3-4629-b6ff-3395549aa84b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.490 243456 DEBUG nova.objects.instance [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lazy-loading 'pci_requests' on Instance uuid bbaf0344-f1d3-4629-b6ff-3395549aa84b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.501 243456 DEBUG nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:08:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:08:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25272029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.586 243456 DEBUG nova.network.neutron [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updated VIF entry in instance network info cache for port c749c499-3ec6-4a11-894e-ec79cfbd8829. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.587 243456 DEBUG nova.network.neutron [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updating instance_info_cache with network_info: [{"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.592 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.594 243456 DEBUG nova.virt.libvirt.vif [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-503378260',display_name='tempest-ImagesOneServerTestJSON-server-503378260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-503378260',id=50,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b8b0d675b3747fd80cb2186e41d2ebf',ramdisk_id='',reservation_id='r-0rus32qo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1749734354',owner_user_name='tempest-ImagesOneServerTestJSON-1749734354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:12Z,user_data=None,user_id='5db9d3a48c914c5ea9326b6a8c8c0f36',uuid=bbbba0d8-fff9-4f59-ab31-54ff03b71390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.594 243456 DEBUG nova.network.os_vif_util [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converting VIF {"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.595 243456 DEBUG nova.network.os_vif_util [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.597 243456 DEBUG nova.objects.instance [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lazy-loading 'pci_devices' on Instance uuid bbbba0d8-fff9-4f59-ab31-54ff03b71390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.626 243456 DEBUG oslo_concurrency.lockutils [req-0e9c4529-e30a-4339-a6ba-34c7be10ef5d req-310c28c0-133e-4efc-98f0-ba220ef0b348 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5383d7b5-e11b-47e4-8cbc-4be283dd6b16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.629 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <uuid>bbbba0d8-fff9-4f59-ab31-54ff03b71390</uuid>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <name>instance-00000032</name>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:name>tempest-ImagesOneServerTestJSON-server-503378260</nova:name>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:08:18</nova:creationTime>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:user uuid="5db9d3a48c914c5ea9326b6a8c8c0f36">tempest-ImagesOneServerTestJSON-1749734354-project-member</nova:user>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:project uuid="3b8b0d675b3747fd80cb2186e41d2ebf">tempest-ImagesOneServerTestJSON-1749734354</nova:project>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <nova:port uuid="8feba913-868e-47d7-a6c6-50d33e14a69d">
Feb 28 10:08:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="serial">bbbba0d8-fff9-4f59-ab31-54ff03b71390</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="uuid">bbbba0d8-fff9-4f59-ab31-54ff03b71390</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk">
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config">
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:08:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:5b:bc:bd"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <target dev="tap8feba913-86"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/console.log" append="off"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:08:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:08:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:08:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:08:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:08:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:08:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.642 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Preparing to wait for external event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.642 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.642 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.643 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.644 243456 DEBUG nova.virt.libvirt.vif [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-503378260',display_name='tempest-ImagesOneServerTestJSON-server-503378260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-503378260',id=50,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b8b0d675b3747fd80cb2186e41d2ebf',ramdisk_id='',reservation_id='r-0rus32qo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1749734354',owner_user_name='tempest-ImagesOneServerTestJSON-1749734354-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:12Z,user_data=None,user_id='5db9d3a48c914c5ea9326b6a8c8c0f36',uuid=bbbba0d8-fff9-4f59-ab31-54ff03b71390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.644 243456 DEBUG nova.network.os_vif_util [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converting VIF {"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.645 243456 DEBUG nova.network.os_vif_util [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.645 243456 DEBUG os_vif [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.646 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.647 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.657 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8feba913-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8feba913-86, col_values=(('external_ids', {'iface-id': '8feba913-868e-47d7-a6c6-50d33e14a69d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:bc:bd', 'vm-uuid': 'bbbba0d8-fff9-4f59-ab31-54ff03b71390'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:19 compute-0 NetworkManager[49805]: <info>  [1772273299.6642] manager: (tap8feba913-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.672 243456 INFO os_vif [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86')
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.715 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.716 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.716 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] No VIF found with MAC fa:16:3e:5b:bc:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.716 243456 INFO nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Using config drive
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.740 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.967 243456 INFO nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Creating config drive at /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config
Feb 28 10:08:19 compute-0 nova_compute[243452]: 2026-02-28 10:08:19.973 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdbuow5h4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.025 243456 DEBUG nova.policy [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb52857f37ae4c42815767488316da21', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d6b720bb7334b139fc12e9faa051906', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.124 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdbuow5h4" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.159 243456 DEBUG nova.storage.rbd_utils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.164 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/25272029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.314 243456 DEBUG oslo_concurrency.processutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config 5383d7b5-e11b-47e4-8cbc-4be283dd6b16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.315 243456 INFO nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Deleting local config drive /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16/disk.config because it was imported into RBD.
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.3658] manager: (tap2ef6c454-23): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Feb 28 10:08:20 compute-0 systemd-udevd[285510]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:20 compute-0 kernel: tap2ef6c454-23: entered promiscuous mode
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00374|binding|INFO|Claiming lport 2ef6c454-23c6-4a31-be49-e69f506de5bc for this chassis.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00375|binding|INFO|2ef6c454-23c6-4a31-be49-e69f506de5bc: Claiming fa:16:3e:68:c0:a5 10.100.0.240
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.3860] device (tap2ef6c454-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.387 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:c0:a5 10.100.0.240'], port_security=['fa:16:3e:68:c0:a5 10.100.0.240'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.240/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6dcf000f-adf8-4a38-81ff-219f317c1122', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35e770d0-0272-4b19-8749-679e373a56c2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2ef6c454-23c6-4a31-be49-e69f506de5bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:20 compute-0 systemd-udevd[285507]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.389 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef6c454-23c6-4a31-be49-e69f506de5bc in datapath 6dcf000f-adf8-4a38-81ff-219f317c1122 bound to our chassis
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.3911] manager: (tap91edcb1e-31): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.3926] device (tap2ef6c454-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.392 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6dcf000f-adf8-4a38-81ff-219f317c1122
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d394099d-2bfd-424b-b747-cb01eb8d9718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.405 243456 INFO nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Creating config drive at /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.406 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6dcf000f-a1 in ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4070] manager: (tapc749c499-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.408 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6dcf000f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.408 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0c0aa9-e9ed-4a07-9da3-396b0489ba39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.409 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3eee46b8-6898-467b-8eef-f8e3159a25ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.413 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm2dqu5b5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4144] device (tap91edcb1e-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:20 compute-0 kernel: tap91edcb1e-31: entered promiscuous mode
Feb 28 10:08:20 compute-0 kernel: tapc749c499-3e: entered promiscuous mode
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4159] device (tap91edcb1e-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00376|binding|INFO|Claiming lport 91edcb1e-3191-47da-be88-2834e4a98d73 for this chassis.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00377|binding|INFO|91edcb1e-3191-47da-be88-2834e4a98d73: Claiming fa:16:3e:c2:49:b5 10.100.1.37
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00378|binding|INFO|Claiming lport c749c499-3ec6-4a11-894e-ec79cfbd8829 for this chassis.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00379|binding|INFO|c749c499-3ec6-4a11-894e-ec79cfbd8829: Claiming fa:16:3e:e0:e4:b8 10.100.0.26
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4239] device (tapc749c499-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4247] device (tapc749c499-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.425 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8a70bf-c2d8-4f8f-b978-991fdd7bcc9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00380|binding|INFO|Setting lport 2ef6c454-23c6-4a31-be49-e69f506de5bc ovn-installed in OVS
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00381|binding|INFO|Setting lport 2ef6c454-23c6-4a31-be49-e69f506de5bc up in Southbound
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.432 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:49:b5 10.100.1.37'], port_security=['fa:16:3e:c2:49:b5 10.100.1.37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.37/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22bace2-50cd-448a-8610-df7bfe838781, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=91edcb1e-3191-47da-be88-2834e4a98d73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.435 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:e4:b8 10.100.0.26'], port_security=['fa:16:3e:e0:e4:b8 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6dcf000f-adf8-4a38-81ff-219f317c1122', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35e770d0-0272-4b19-8749-679e373a56c2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c749c499-3ec6-4a11-894e-ec79cfbd8829) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:20 compute-0 systemd-machined[209480]: New machine qemu-55-instance-00000031.
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.448 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca3313b-21c3-46fc-ba17-b2e62962ad7c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00382|binding|INFO|Setting lport c749c499-3ec6-4a11-894e-ec79cfbd8829 ovn-installed in OVS
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00383|binding|INFO|Setting lport c749c499-3ec6-4a11-894e-ec79cfbd8829 up in Southbound
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00384|binding|INFO|Setting lport 91edcb1e-3191-47da-be88-2834e4a98d73 ovn-installed in OVS
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00385|binding|INFO|Setting lport 91edcb1e-3191-47da-be88-2834e4a98d73 up in Southbound
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.472 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[778c6752-a6aa-4918-a534-781b4c26ce92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.4791] manager: (tap6dcf000f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.478 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcbd853-7e3c-4de2-b8ff-95321c35eeea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.507 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6d811d90-6cf4-4670-ac48-3863060b04d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[acbbcec3-9371-4742-afdf-c504f414d6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.5337] device (tap6dcf000f-a0): carrier: link connected
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.538 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[095fd4a0-cd86-4e10-9372-ad746a105e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.550 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpm2dqu5b5" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.559 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01483c83-135b-4e66-a804-6781cc334d7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6dcf000f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:f3:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480782, 'reachable_time': 24124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285782, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.572 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40ddc371-414e-45fb-bea6-3533d3eab9cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:f3f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480782, 'tstamp': 480782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285790, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.587 243456 DEBUG nova.storage.rbd_utils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] rbd image bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.588 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d1b211-b6c2-4e5f-9b47-855d709f3e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6dcf000f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:f3:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480782, 'reachable_time': 24124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285799, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.603 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.610 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7058561-68b8-413f-9f80-d38b0e89cce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c21b46a-359e-4930-a727-a5e7d95ac970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6dcf000f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6dcf000f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.6769] manager: (tap6dcf000f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 kernel: tap6dcf000f-a0: entered promiscuous mode
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.681 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6dcf000f-a0, col_values=(('external_ids', {'iface-id': '41db4cab-fd3d-4c61-aee8-206e67bfa937'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00386|binding|INFO|Releasing lport 41db4cab-fd3d-4c61-aee8-206e67bfa937 from this chassis (sb_readonly=0)
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.695 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6dcf000f-adf8-4a38-81ff-219f317c1122.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6dcf000f-adf8-4a38-81ff-219f317c1122.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.696 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[108defab-f065-4279-bb47-93501cf8fe93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.697 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-6dcf000f-adf8-4a38-81ff-219f317c1122
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/6dcf000f-adf8-4a38-81ff-219f317c1122.pid.haproxy
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 6dcf000f-adf8-4a38-81ff-219f317c1122
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.698 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'env', 'PROCESS_TAG=haproxy-6dcf000f-adf8-4a38-81ff-219f317c1122', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6dcf000f-adf8-4a38-81ff-219f317c1122.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.761 243456 DEBUG oslo_concurrency.processutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.762 243456 INFO nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Deleting local config drive /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390/disk.config because it was imported into RBD.
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.772 243456 DEBUG nova.network.neutron [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Updated VIF entry in instance network info cache for port 8feba913-868e-47d7-a6c6-50d33e14a69d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.773 243456 DEBUG nova.network.neutron [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Updating instance_info_cache with network_info: [{"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.782 243456 DEBUG nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Successfully created port: b1941300-baa6-48a1-9264-7d10ad675ca9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.799 243456 DEBUG oslo_concurrency.lockutils [req-a2af0a43-7155-42e4-9579-41d99941067b req-d83091cc-f368-43e7-b227-cf694a15cc3f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bbbba0d8-fff9-4f59-ab31-54ff03b71390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.8034] manager: (tap8feba913-86): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Feb 28 10:08:20 compute-0 kernel: tap8feba913-86: entered promiscuous mode
Feb 28 10:08:20 compute-0 systemd-udevd[285769]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00387|binding|INFO|Claiming lport 8feba913-868e-47d7-a6c6-50d33e14a69d for this chassis.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00388|binding|INFO|8feba913-868e-47d7-a6c6-50d33e14a69d: Claiming fa:16:3e:5b:bc:bd 10.100.0.7
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.8181] device (tap8feba913-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:20.817 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:bc:bd 10.100.0.7'], port_security=['fa:16:3e:5b:bc:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bbbba0d8-fff9-4f59-ab31-54ff03b71390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20018e60-73d5-4de7-9f8d-17031bc634d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b8b0d675b3747fd80cb2186e41d2ebf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4ca4e3a-cf16-433c-b8e2-2f626b510291', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f93650-ecd0-430f-864a-a50c266ab3cd, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8feba913-868e-47d7-a6c6-50d33e14a69d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:20 compute-0 NetworkManager[49805]: <info>  [1772273300.8189] device (tap8feba913-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:20 compute-0 systemd-machined[209480]: New machine qemu-56-instance-00000032.
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000032.
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00389|binding|INFO|Setting lport 8feba913-868e-47d7-a6c6-50d33e14a69d ovn-installed in OVS
Feb 28 10:08:20 compute-0 ovn_controller[146846]: 2026-02-28T10:08:20Z|00390|binding|INFO|Setting lport 8feba913-868e-47d7-a6c6-50d33e14a69d up in Southbound
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.916 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273300.916212, 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.917 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] VM Started (Lifecycle Event)
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.940 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.944 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273300.9164026, 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.945 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] VM Paused (Lifecycle Event)
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.963 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.970 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:20 compute-0 nova_compute[243452]: 2026-02-28 10:08:20.988 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:21 compute-0 podman[285922]: 2026-02-28 10:08:21.03120116 +0000 UTC m=+0.020030984 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:08:21 compute-0 podman[285922]: 2026-02-28 10:08:21.177168519 +0000 UTC m=+0.165998323 container create d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.184 243456 DEBUG nova.compute.manager [req-55754b90-2a02-412a-a98a-ecdf2f0b5b25 req-1ede6cbe-d51e-4f72-81c6-444030b5adf7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.185 243456 DEBUG oslo_concurrency.lockutils [req-55754b90-2a02-412a-a98a-ecdf2f0b5b25 req-1ede6cbe-d51e-4f72-81c6-444030b5adf7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.185 243456 DEBUG oslo_concurrency.lockutils [req-55754b90-2a02-412a-a98a-ecdf2f0b5b25 req-1ede6cbe-d51e-4f72-81c6-444030b5adf7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.185 243456 DEBUG oslo_concurrency.lockutils [req-55754b90-2a02-412a-a98a-ecdf2f0b5b25 req-1ede6cbe-d51e-4f72-81c6-444030b5adf7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.186 243456 DEBUG nova.compute.manager [req-55754b90-2a02-412a-a98a-ecdf2f0b5b25 req-1ede6cbe-d51e-4f72-81c6-444030b5adf7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Processing event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:08:21 compute-0 ceph-mon[76304]: pgmap v1203: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:08:21 compute-0 systemd[1]: Started libpod-conmon-d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df.scope.
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.257 243456 DEBUG nova.compute.manager [req-83726cd6-271c-4303-8223-a4ca086138f0 req-d4f446a3-6eda-400f-bceb-eaead02c3b86 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.258 243456 DEBUG oslo_concurrency.lockutils [req-83726cd6-271c-4303-8223-a4ca086138f0 req-d4f446a3-6eda-400f-bceb-eaead02c3b86 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.258 243456 DEBUG oslo_concurrency.lockutils [req-83726cd6-271c-4303-8223-a4ca086138f0 req-d4f446a3-6eda-400f-bceb-eaead02c3b86 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.258 243456 DEBUG oslo_concurrency.lockutils [req-83726cd6-271c-4303-8223-a4ca086138f0 req-d4f446a3-6eda-400f-bceb-eaead02c3b86 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.258 243456 DEBUG nova.compute.manager [req-83726cd6-271c-4303-8223-a4ca086138f0 req-d4f446a3-6eda-400f-bceb-eaead02c3b86 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Processing event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:08:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea359792feaef24c92b36b16f64434abdfeeff8708536e89480c683d3a3854/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:21 compute-0 podman[285922]: 2026-02-28 10:08:21.34347777 +0000 UTC m=+0.332307614 container init d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:08:21 compute-0 podman[285922]: 2026-02-28 10:08:21.348705047 +0000 UTC m=+0.337534891 container start d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:08:21 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [NOTICE]   (285959) : New worker (285968) forked
Feb 28 10:08:21 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [NOTICE]   (285959) : Loading success.
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 91edcb1e-3191-47da-be88-2834e4a98d73 in datapath 26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 unbound from our chassis
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.463 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26719a5f-3b38-46df-9aa1-32fdd6dfdcf9
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.474 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb14e0b-2777-49b2-8ca3-818ebfc34811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.475 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26719a5f-31 in ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.477 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26719a5f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.477 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22b12408-f536-415a-8570-1aab6d00b5d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a3018948-4e23-408a-af85-637b3c1445ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.490 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6d4948-1d37-4c8f-a468-6b9366cd8977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.503 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8f616e-06af-4138-8344-d8b1c43ab65b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.528 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273301.527446, bbbba0d8-fff9-4f59-ab31-54ff03b71390 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.529 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] VM Started (Lifecycle Event)
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.537 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0a249ddc-7bb3-40ed-a620-25ab004e3817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.543 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58236e73-e6ef-4d06-a8b6-494da75be2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 systemd-udevd[285994]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:21 compute-0 NetworkManager[49805]: <info>  [1772273301.5454] manager: (tap26719a5f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.550 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.555 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273301.5278234, bbbba0d8-fff9-4f59-ab31-54ff03b71390 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.556 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] VM Paused (Lifecycle Event)
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.571 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.575 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.579 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3d054b06-2ba0-438c-ad55-7f2d543856a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.583 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b736d384-1aef-4f1c-b5fd-91bda917a537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.596 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:21 compute-0 NetworkManager[49805]: <info>  [1772273301.6129] device (tap26719a5f-30): carrier: link connected
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.613 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2384084f-b092-4c1d-a8ab-66904272ccd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.636 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea33dfbf-73bb-412b-b2a1-f3258887fc96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26719a5f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:59:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480889, 'reachable_time': 33536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286020, 'error': None, 'target': 'ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.655 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b001f108-4b94-4138-9de4-e9b119a422c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:5902'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480889, 'tstamp': 480889}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286021, 'error': None, 'target': 'ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dada10-59ac-4351-b9f7-fd4ca098d3d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26719a5f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:59:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480889, 'reachable_time': 33536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286022, 'error': None, 'target': 'ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.699 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ff1f92-3d9c-4b19-90ff-06f990eb0f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b41a21-f0ad-4f99-8551-1ee5445d6a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.744 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26719a5f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.744 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.745 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26719a5f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.745 243456 DEBUG nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Successfully updated port: b1941300-baa6-48a1-9264-7d10ad675ca9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:21 compute-0 NetworkManager[49805]: <info>  [1772273301.7478] manager: (tap26719a5f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:21 compute-0 kernel: tap26719a5f-30: entered promiscuous mode
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.752 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26719a5f-30, col_values=(('external_ids', {'iface-id': '5be9ec01-c5df-4503-a3c7-864623ee8dbd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.756 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26719a5f-3b38-46df-9aa1-32fdd6dfdcf9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26719a5f-3b38-46df-9aa1-32fdd6dfdcf9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:08:21 compute-0 ovn_controller[146846]: 2026-02-28T10:08:21Z|00391|binding|INFO|Releasing lport 5be9ec01-c5df-4503-a3c7-864623ee8dbd from this chassis (sb_readonly=0)
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5f724f-98f8-42d0-998f-7e918698f362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.760 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/26719a5f-3b38-46df-9aa1-32fdd6dfdcf9.pid.haproxy
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 26719a5f-3b38-46df-9aa1-32fdd6dfdcf9
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:08:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:21.760 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'env', 'PROCESS_TAG=haproxy-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26719a5f-3b38-46df-9aa1-32fdd6dfdcf9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.762 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.762 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquired lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.763 243456 DEBUG nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:08:21 compute-0 nova_compute[243452]: 2026-02-28 10:08:21.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.074 243456 WARNING nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] e675fba7-c78a-4b4b-bd7f-fc505487edea already exists in list: networks containing: ['e675fba7-c78a-4b4b-bd7f-fc505487edea']. ignoring it
Feb 28 10:08:22 compute-0 podman[286054]: 2026-02-28 10:08:22.168846841 +0000 UTC m=+0.090483152 container create 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:22 compute-0 systemd[1]: Started libpod-conmon-3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed.scope.
Feb 28 10:08:22 compute-0 podman[286054]: 2026-02-28 10:08:22.123096856 +0000 UTC m=+0.044733237 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:08:22 compute-0 ceph-mon[76304]: pgmap v1204: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Feb 28 10:08:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974762120dba2f637d45ced8fb0a1736fcbd2ec42a7946014bd98e97a0caa409/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:22 compute-0 podman[286054]: 2026-02-28 10:08:22.255996828 +0000 UTC m=+0.177633189 container init 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:08:22 compute-0 podman[286054]: 2026-02-28 10:08:22.261426881 +0000 UTC m=+0.183063182 container start 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:08:22 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [NOTICE]   (286074) : New worker (286076) forked
Feb 28 10:08:22 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [NOTICE]   (286074) : Loading success.
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.331 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c749c499-3ec6-4a11-894e-ec79cfbd8829 in datapath 6dcf000f-adf8-4a38-81ff-219f317c1122 unbound from our chassis
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.333 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6dcf000f-adf8-4a38-81ff-219f317c1122
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.360 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[524ba788-9d50-4464-86ac-22db1611422a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33c48f14-09ca-46eb-9736-3d3e607901f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.387 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e55bd357-4452-48c3-a775-4b451e93b10c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.407 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65301d91-03a1-42f5-8c86-e7b6edf7452b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[707258b5-cf3e-4807-882a-dc8e5f194537]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6dcf000f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:f3:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480782, 'reachable_time': 24124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286090, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.435 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a15d39f-73ca-4a3c-ad66-7543222876d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6dcf000f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480792, 'tstamp': 480792}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286091, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap6dcf000f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480795, 'tstamp': 480795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286091, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.437 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6dcf000f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.440 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.440 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6dcf000f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.441 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.441 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6dcf000f-a0, col_values=(('external_ids', {'iface-id': '41db4cab-fd3d-4c61-aee8-206e67bfa937'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.442 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.443 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8feba913-868e-47d7-a6c6-50d33e14a69d in datapath 20018e60-73d5-4de7-9f8d-17031bc634d7 unbound from our chassis
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.445 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20018e60-73d5-4de7-9f8d-17031bc634d7
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[424e4861-cfbf-4fa1-952a-61bc00a01164]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.455 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20018e60-71 in ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.457 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20018e60-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.457 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b51f7360-a89e-4326-b31a-1ae996f91490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.458 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d2ad35-d171-4acc-9b4c-df31558d43eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.473 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7e513a-9ec0-4d3b-a661-55d81c7e7f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.494 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c51aafa-7b30-4edf-8dd2-bc214a5ad142]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.519 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d505952e-fde0-42c5-a12c-61353bdcd9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 systemd-udevd[286008]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:22 compute-0 NetworkManager[49805]: <info>  [1772273302.5293] manager: (tap20018e60-70): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.535 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1a4f49-2f67-4304-9d70-fcfefc617ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f2424e37-009d-4f80-a691-84b919fe47dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.570 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[de90f912-ee34-4cdb-8d3e-5da7b9fc3fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 NetworkManager[49805]: <info>  [1772273302.5917] device (tap20018e60-70): carrier: link connected
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.595 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[22aacb3b-6e81-4baa-a51b-770291e73a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.612 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6483605-3cb0-4349-9f89-2e2969174adb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20018e60-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:5f:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480987, 'reachable_time': 33231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286102, 'error': None, 'target': 'ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.626 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab268144-bb56-4a19-93ea-5654d4301572]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:5f49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480987, 'tstamp': 480987}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286103, 'error': None, 'target': 'ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.647 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48e2e557-4bdf-4a1b-9d4f-d3d9f5e7ca0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20018e60-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:5f:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480987, 'reachable_time': 33231, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286104, 'error': None, 'target': 'ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab96953-6603-4f75-8762-f631ae74aa3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab291872-2796-4578-9fa2-dc99b0dad5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.737 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20018e60-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.737 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.738 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20018e60-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 NetworkManager[49805]: <info>  [1772273302.7412] manager: (tap20018e60-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Feb 28 10:08:22 compute-0 kernel: tap20018e60-70: entered promiscuous mode
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.747 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20018e60-70, col_values=(('external_ids', {'iface-id': '34db03cc-96b4-407f-81c0-0835e29fb2af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.754 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20018e60-73d5-4de7-9f8d-17031bc634d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20018e60-73d5-4de7-9f8d-17031bc634d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:08:22 compute-0 ovn_controller[146846]: 2026-02-28T10:08:22Z|00392|binding|INFO|Releasing lport 34db03cc-96b4-407f-81c0-0835e29fb2af from this chassis (sb_readonly=0)
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.756 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a882fb45-90e9-428a-b177-ba3011f01f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.757 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-20018e60-73d5-4de7-9f8d-17031bc634d7
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/20018e60-73d5-4de7-9f8d-17031bc634d7.pid.haproxy
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 20018e60-73d5-4de7-9f8d-17031bc634d7
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:08:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:22.757 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7', 'env', 'PROCESS_TAG=haproxy-20018e60-73d5-4de7-9f8d-17031bc634d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20018e60-73d5-4de7-9f8d-17031bc634d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:08:22 compute-0 nova_compute[243452]: 2026-02-28 10:08:22.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:23 compute-0 podman[286136]: 2026-02-28 10:08:23.080184556 +0000 UTC m=+0.040434277 container create 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:08:23 compute-0 systemd[1]: Started libpod-conmon-490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8.scope.
Feb 28 10:08:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b09aa5b5321be847d2ed9b9a5640eaf4b84680b63ac2ca6f237a1f669871d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:08:23 compute-0 podman[286136]: 2026-02-28 10:08:23.151207371 +0000 UTC m=+0.111457092 container init 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:23 compute-0 podman[286136]: 2026-02-28 10:08:23.058462066 +0000 UTC m=+0.018711767 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:08:23 compute-0 podman[286136]: 2026-02-28 10:08:23.158216627 +0000 UTC m=+0.118466318 container start 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:08:23 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [NOTICE]   (286155) : New worker (286157) forked
Feb 28 10:08:23 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [NOTICE]   (286155) : Loading success.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.785 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.785 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.786 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.786 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.786 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No event matching network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc in dict_keys([('network-vif-plugged', 'c749c499-3ec6-4a11-894e-ec79cfbd8829')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.786 243456 WARNING nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc for instance with vm_state building and task_state spawning.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.787 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.787 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.787 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.787 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.788 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Processing event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.788 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.788 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.788 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.789 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.789 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.789 243456 WARNING nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 for instance with vm_state building and task_state spawning.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.789 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.790 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.790 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.790 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.790 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Processing event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.791 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.791 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.791 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.791 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.792 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] No waiting events found dispatching network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.792 243456 WARNING nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received unexpected event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d for instance with vm_state building and task_state spawning.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.792 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-changed-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.792 243456 DEBUG nova.compute.manager [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Refreshing instance network info cache due to event network-changed-b1941300-baa6-48a1-9264-7d10ad675ca9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.793 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.793 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.794 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.799 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273303.7988365, 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.799 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] VM Resumed (Lifecycle Event)
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.803 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.804 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.808 243456 INFO nova.virt.libvirt.driver [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance spawned successfully.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.809 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.813 243456 INFO nova.virt.libvirt.driver [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Instance spawned successfully.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.814 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.819 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.830 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.831 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.832 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.832 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.833 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.834 243456 DEBUG nova.virt.libvirt.driver [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.844 243456 DEBUG nova.compute.manager [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.845 243456 DEBUG oslo_concurrency.lockutils [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.846 243456 DEBUG oslo_concurrency.lockutils [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.847 243456 DEBUG oslo_concurrency.lockutils [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.847 243456 DEBUG nova.compute.manager [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.848 243456 WARNING nova.compute.manager [req-a6293118-5971-4eb5-96b9-5b29bd011f48 req-a023895e-ef4c-4c3d-ba03-1183c4281494 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 for instance with vm_state building and task_state spawning.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.858 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.860 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.862 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.863 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.864 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.866 243456 DEBUG nova.virt.libvirt.driver [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.898 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273303.7997735, bbbba0d8-fff9-4f59-ab31-54ff03b71390 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] VM Resumed (Lifecycle Event)
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.942 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.948 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.963 243456 INFO nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Took 22.32 seconds to spawn the instance on the hypervisor.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.964 243456 DEBUG nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.980 243456 INFO nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 11.60 seconds to spawn the instance on the hypervisor.
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.981 243456 DEBUG nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 28 10:08:23 compute-0 nova_compute[243452]: 2026-02-28 10:08:23.984 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.061 243456 INFO nova.compute.manager [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Took 23.44 seconds to build instance.
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.085 243456 DEBUG oslo_concurrency.lockutils [None req-dadf276f-5b3f-4932-95fa-ddb02cc0684d 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.087 243456 INFO nova.compute.manager [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 12.77 seconds to build instance.
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.101 243456 DEBUG oslo_concurrency.lockutils [None req-b22dba5a-cc6c-4716-872e-1e5f06713595 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.365 243456 DEBUG nova.network.neutron [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updating instance_info_cache with network_info: [{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Releasing lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.389 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.390 243456 DEBUG nova.network.neutron [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Refreshing network info cache for port b1941300-baa6-48a1-9264-7d10ad675ca9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.393 243456 DEBUG nova.virt.libvirt.vif [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:17Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.393 243456 DEBUG nova.network.os_vif_util [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.394 243456 DEBUG nova.network.os_vif_util [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.395 243456 DEBUG os_vif [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.396 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.397 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.401 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1941300-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.401 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1941300-ba, col_values=(('external_ids', {'iface-id': 'b1941300-baa6-48a1-9264-7d10ad675ca9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:81:4b', 'vm-uuid': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 NetworkManager[49805]: <info>  [1772273304.4042] manager: (tapb1941300-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.416 243456 INFO os_vif [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba')
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.418 243456 DEBUG nova.virt.libvirt.vif [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:17Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.418 243456 DEBUG nova.network.os_vif_util [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.420 243456 DEBUG nova.network.os_vif_util [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.424 243456 DEBUG nova.virt.libvirt.guest [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:65:81:4b"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <target dev="tapb1941300-ba"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]: </interface>
Feb 28 10:08:24 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:08:24 compute-0 NetworkManager[49805]: <info>  [1772273304.4386] manager: (tapb1941300-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Feb 28 10:08:24 compute-0 kernel: tapb1941300-ba: entered promiscuous mode
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 ovn_controller[146846]: 2026-02-28T10:08:24Z|00393|binding|INFO|Claiming lport b1941300-baa6-48a1-9264-7d10ad675ca9 for this chassis.
Feb 28 10:08:24 compute-0 ovn_controller[146846]: 2026-02-28T10:08:24Z|00394|binding|INFO|b1941300-baa6-48a1-9264-7d10ad675ca9: Claiming fa:16:3e:65:81:4b 10.100.0.8
Feb 28 10:08:24 compute-0 ovn_controller[146846]: 2026-02-28T10:08:24Z|00395|binding|INFO|Setting lport b1941300-baa6-48a1-9264-7d10ad675ca9 ovn-installed in OVS
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 ovn_controller[146846]: 2026-02-28T10:08:24Z|00396|binding|INFO|Setting lport b1941300-baa6-48a1-9264-7d10ad675ca9 up in Southbound
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.475 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:81:4b 10.100.0.8'], port_security=['fa:16:3e:65:81:4b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d6b720bb7334b139fc12e9faa051906', 'neutron:revision_number': '2', 'neutron:security_group_ids': '183a3227-9a4c-4328-a977-3287119f7f1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18b263e6-2e1e-46c4-8c86-ebc43f4711ea, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b1941300-baa6-48a1-9264-7d10ad675ca9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b1941300-baa6-48a1-9264-7d10ad675ca9 in datapath e675fba7-c78a-4b4b-bd7f-fc505487edea bound to our chassis
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.479 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e675fba7-c78a-4b4b-bd7f-fc505487edea
Feb 28 10:08:24 compute-0 systemd-udevd[286173]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.493 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c21e67ef-9bf6-45b7-8fe5-5d2c2c43e74c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 NetworkManager[49805]: <info>  [1772273304.5108] device (tapb1941300-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:08:24 compute-0 NetworkManager[49805]: <info>  [1772273304.5117] device (tapb1941300-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.525 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[614cf2a4-66fa-4db0-bdd1-0a4c53ca1cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.530 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3d68dc2e-f3c4-4b6b-ad28-cd3d4ef08c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.577 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[52b09376-3816-476f-85de-8b8b65e93c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.595 243456 DEBUG nova.virt.libvirt.driver [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.596 243456 DEBUG nova.virt.libvirt.driver [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.596 243456 DEBUG nova.virt.libvirt.driver [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No VIF found with MAC fa:16:3e:14:19:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.597 243456 DEBUG nova.virt.libvirt.driver [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] No VIF found with MAC fa:16:3e:65:81:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.600 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[422e462d-2716-4f8b-bb43-6e7f97431225]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape675fba7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:a8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479247, 'reachable_time': 42401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286180, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.622 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61cc9a1f-30d3-4305-a4d9-d05ba3e30173]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape675fba7-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479260, 'tstamp': 479260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape675fba7-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479264, 'tstamp': 479264}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.625 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape675fba7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape675fba7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape675fba7-c0, col_values=(('external_ids', {'iface-id': '8ca76364-db2c-475d-b8e8-95815d1c647e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:24.630 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.632 243456 DEBUG nova.virt.libvirt.guest [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:name>tempest-AttachInterfacesV270Test-server-1968508282</nova:name>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:08:24</nova:creationTime>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:user uuid="eb52857f37ae4c42815767488316da21">tempest-AttachInterfacesV270Test-1509801759-project-member</nova:user>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:project uuid="9d6b720bb7334b139fc12e9faa051906">tempest-AttachInterfacesV270Test-1509801759</nova:project>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:port uuid="2096b485-4121-49ca-b440-f2d4580fb1cf">
Feb 28 10:08:24 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     <nova:port uuid="b1941300-baa6-48a1-9264-7d10ad675ca9">
Feb 28 10:08:24 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:08:24 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:08:24 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:08:24 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:08:24 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:08:24 compute-0 nova_compute[243452]: 2026-02-28 10:08:24.668 243456 DEBUG oslo_concurrency.lockutils [None req-8cabf38a-1f58-4b40-a3fa-e8633d061e72 eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "interface-bbaf0344-f1d3-4629-b6ff-3395549aa84b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:25 compute-0 ceph-mon[76304]: pgmap v1205: 305 pgs: 305 active+clean; 292 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 28 10:08:25 compute-0 nova_compute[243452]: 2026-02-28 10:08:25.757 243456 DEBUG nova.network.neutron [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updated VIF entry in instance network info cache for port b1941300-baa6-48a1-9264-7d10ad675ca9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:08:25 compute-0 nova_compute[243452]: 2026-02-28 10:08:25.759 243456 DEBUG nova.network.neutron [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updating instance_info_cache with network_info: [{"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:25 compute-0 nova_compute[243452]: 2026-02-28 10:08:25.787 243456 DEBUG oslo_concurrency.lockutils [req-76c8b669-49e2-438a-aa86-f6745ce66ba7 req-41437e81-c9b8-4fe2-98d4-e0b4def9b413 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bbaf0344-f1d3-4629-b6ff-3395549aa84b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.110 243456 DEBUG nova.compute.manager [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.158 243456 INFO nova.compute.manager [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] instance snapshotting
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.454 243456 INFO nova.virt.libvirt.driver [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Beginning live snapshot process
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.539 243456 DEBUG nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.540 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.540 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.541 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.541 243456 DEBUG nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.541 243456 WARNING nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received unexpected event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 for instance with vm_state active and task_state None.
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.542 243456 DEBUG nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.542 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.542 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.543 243456 DEBUG oslo_concurrency.lockutils [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.543 243456 DEBUG nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.543 243456 WARNING nova.compute.manager [req-5d44c1a8-666c-4026-9c72-fae355f9dc1e req-044f0c6b-1757-4ca4-b6a2-baf4c21899fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received unexpected event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 for instance with vm_state active and task_state None.
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.604 243456 DEBUG nova.virt.libvirt.imagebackend [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.792 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.793 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.793 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.794 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.794 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.795 243456 INFO nova.compute.manager [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Terminating instance
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.797 243456 DEBUG nova.compute.manager [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:08:26 compute-0 kernel: tap2096b485-41 (unregistering): left promiscuous mode
Feb 28 10:08:26 compute-0 NetworkManager[49805]: <info>  [1772273306.8634] device (tap2096b485-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.872 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.875 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00397|binding|INFO|Releasing lport 2096b485-4121-49ca-b440-f2d4580fb1cf from this chassis (sb_readonly=0)
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00398|binding|INFO|Setting lport 2096b485-4121-49ca-b440-f2d4580fb1cf down in Southbound
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00399|binding|INFO|Removing iface tap2096b485-41 ovn-installed in OVS
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:26 compute-0 kernel: tapb1941300-ba (unregistering): left promiscuous mode
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.890 243456 DEBUG nova.storage.rbd_utils [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] creating snapshot(09d8f42da6e04e3b9c80c8b4f25601e2) on rbd image(bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:08:26 compute-0 NetworkManager[49805]: <info>  [1772273306.8933] device (tapb1941300-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.890 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:19:51 10.100.0.4'], port_security=['fa:16:3e:14:19:51 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d6b720bb7334b139fc12e9faa051906', 'neutron:revision_number': '4', 'neutron:security_group_ids': '183a3227-9a4c-4328-a977-3287119f7f1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18b263e6-2e1e-46c4-8c86-ebc43f4711ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2096b485-4121-49ca-b440-f2d4580fb1cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.893 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2096b485-4121-49ca-b440-f2d4580fb1cf in datapath e675fba7-c78a-4b4b-bd7f-fc505487edea unbound from our chassis
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.896 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e675fba7-c78a-4b4b-bd7f-fc505487edea
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00400|binding|INFO|Releasing lport b1941300-baa6-48a1-9264-7d10ad675ca9 from this chassis (sb_readonly=0)
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00401|binding|INFO|Setting lport b1941300-baa6-48a1-9264-7d10ad675ca9 down in Southbound
Feb 28 10:08:26 compute-0 ovn_controller[146846]: 2026-02-28T10:08:26Z|00402|binding|INFO|Removing iface tapb1941300-ba ovn-installed in OVS
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.911 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:81:4b 10.100.0.8'], port_security=['fa:16:3e:65:81:4b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bbaf0344-f1d3-4629-b6ff-3395549aa84b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d6b720bb7334b139fc12e9faa051906', 'neutron:revision_number': '4', 'neutron:security_group_ids': '183a3227-9a4c-4328-a977-3287119f7f1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18b263e6-2e1e-46c4-8c86-ebc43f4711ea, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b1941300-baa6-48a1-9264-7d10ad675ca9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.923 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[872c75ea-6007-488d-905e-af03396d4a5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:26 compute-0 nova_compute[243452]: 2026-02-28 10:08:26.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:26 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Deactivated successfully.
Feb 28 10:08:26 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000030.scope: Consumed 9.706s CPU time.
Feb 28 10:08:26 compute-0 systemd-machined[209480]: Machine qemu-54-instance-00000030 terminated.
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.950 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bc30392d-93be-43d1-812e-f1ed07bea371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.953 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb85e1d-6987-45f1-8414-b44c5762eadc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:26.983 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7be201-4ba3-4d5b-b3ef-28edc16171df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.007 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a48ba5c-1e8f-4fed-9ecc-ac55fe0b47d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape675fba7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:a8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479247, 'reachable_time': 42401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286246, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5238c53f-41c0-4596-9666-733d0d49c2d8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape675fba7-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479260, 'tstamp': 479260}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286247, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape675fba7-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479264, 'tstamp': 479264}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286247, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape675fba7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape675fba7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.036 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.037 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape675fba7-c0, col_values=(('external_ids', {'iface-id': '8ca76364-db2c-475d-b8e8-95815d1c647e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.037 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.038 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b1941300-baa6-48a1-9264-7d10ad675ca9 in datapath e675fba7-c78a-4b4b-bd7f-fc505487edea unbound from our chassis
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.040 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e675fba7-c78a-4b4b-bd7f-fc505487edea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b54f2d8-1378-4593-8fb6-f27ff8aba2f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.042 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea namespace which is not needed anymore
Feb 28 10:08:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.0501] manager: (tapb1941300-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Feb 28 10:08:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Feb 28 10:08:27 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 ceph-mon[76304]: pgmap v1206: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.063 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.065 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.066 243456 INFO nova.compute.manager [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Terminating instance
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.067 243456 DEBUG nova.compute.manager [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.078 243456 INFO nova.virt.libvirt.driver [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Instance destroyed successfully.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.079 243456 DEBUG nova.objects.instance [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lazy-loading 'resources' on Instance uuid bbaf0344-f1d3-4629-b6ff-3395549aa84b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.091 243456 DEBUG nova.virt.libvirt.vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:17Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.092 243456 DEBUG nova.network.os_vif_util [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "2096b485-4121-49ca-b440-f2d4580fb1cf", "address": "fa:16:3e:14:19:51", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2096b485-41", "ovs_interfaceid": "2096b485-4121-49ca-b440-f2d4580fb1cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.092 243456 DEBUG nova.network.os_vif_util [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.093 243456 DEBUG os_vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.094 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2096b485-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.104 243456 INFO os_vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:19:51,bridge_name='br-int',has_traffic_filtering=True,id=2096b485-4121-49ca-b440-f2d4580fb1cf,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2096b485-41')
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.105 243456 DEBUG nova.virt.libvirt.vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1968508282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1968508282',id=48,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d6b720bb7334b139fc12e9faa051906',ramdisk_id='',reservation_id='r-4u41kfgd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1509801759',owner_user_name='tempest-AttachInterfacesV270Test-1509801759-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:17Z,user_data=None,user_id='eb52857f37ae4c42815767488316da21',uuid=bbaf0344-f1d3-4629-b6ff-3395549aa84b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.105 243456 DEBUG nova.network.os_vif_util [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converting VIF {"id": "b1941300-baa6-48a1-9264-7d10ad675ca9", "address": "fa:16:3e:65:81:4b", "network": {"id": "e675fba7-c78a-4b4b-bd7f-fc505487edea", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1171721639-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d6b720bb7334b139fc12e9faa051906", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1941300-ba", "ovs_interfaceid": "b1941300-baa6-48a1-9264-7d10ad675ca9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.106 243456 DEBUG nova.network.os_vif_util [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.106 243456 DEBUG os_vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:27 compute-0 kernel: tap2ef6c454-23 (unregistering): left promiscuous mode
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.111 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1941300-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.1120] device (tap2ef6c454-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00403|binding|INFO|Releasing lport 2ef6c454-23c6-4a31-be49-e69f506de5bc from this chassis (sb_readonly=0)
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00404|binding|INFO|Setting lport 2ef6c454-23c6-4a31-be49-e69f506de5bc down in Southbound
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00405|binding|INFO|Removing iface tap2ef6c454-23 ovn-installed in OVS
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 kernel: tap91edcb1e-31 (unregistering): left promiscuous mode
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.1317] device (tap91edcb1e-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.135 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:c0:a5 10.100.0.240'], port_security=['fa:16:3e:68:c0:a5 10.100.0.240'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.240/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6dcf000f-adf8-4a38-81ff-219f317c1122', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35e770d0-0272-4b19-8749-679e373a56c2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2ef6c454-23c6-4a31-be49-e69f506de5bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 kernel: tapc749c499-3e (unregistering): left promiscuous mode
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.1473] device (tapc749c499-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.146 243456 DEBUG nova.storage.rbd_utils [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] cloning vms/bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk@09d8f42da6e04e3b9c80c8b4f25601e2 to images/64671ca6-d66f-48a8-9905-66fd8ae90fc9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00406|binding|INFO|Releasing lport 91edcb1e-3191-47da-be88-2834e4a98d73 from this chassis (sb_readonly=0)
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00407|binding|INFO|Setting lport 91edcb1e-3191-47da-be88-2834e4a98d73 down in Southbound
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00408|binding|INFO|Removing iface tap91edcb1e-31 ovn-installed in OVS
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.156 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:49:b5 10.100.1.37'], port_security=['fa:16:3e:c2:49:b5 10.100.1.37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.37/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c22bace2-50cd-448a-8610-df7bfe838781, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=91edcb1e-3191-47da-be88-2834e4a98d73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00409|binding|INFO|Releasing lport c749c499-3ec6-4a11-894e-ec79cfbd8829 from this chassis (sb_readonly=0)
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00410|binding|INFO|Setting lport c749c499-3ec6-4a11-894e-ec79cfbd8829 down in Southbound
Feb 28 10:08:27 compute-0 ovn_controller[146846]: 2026-02-28T10:08:27Z|00411|binding|INFO|Removing iface tapc749c499-3e ovn-installed in OVS
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.170 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:e4:b8 10.100.0.26'], port_security=['fa:16:3e:e0:e4:b8 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/24', 'neutron:device_id': '5383d7b5-e11b-47e4-8cbc-4be283dd6b16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6dcf000f-adf8-4a38-81ff-219f317c1122', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=35e770d0-0272-4b19-8749-679e373a56c2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c749c499-3ec6-4a11-894e-ec79cfbd8829) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.189 243456 INFO os_vif [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:81:4b,bridge_name='br-int',has_traffic_filtering=True,id=b1941300-baa6-48a1-9264-7d10ad675ca9,network=Network(e675fba7-c78a-4b4b-bd7f-fc505487edea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1941300-ba')
Feb 28 10:08:27 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 3.729s CPU time.
Feb 28 10:08:27 compute-0 systemd-machined[209480]: Machine qemu-55-instance-00000031 terminated.
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [NOTICE]   (284377) : haproxy version is 2.8.14-c23fe91
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [NOTICE]   (284377) : path to executable is /usr/sbin/haproxy
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [WARNING]  (284377) : Exiting Master process...
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [ALERT]    (284377) : Current worker (284379) exited with code 143 (Terminated)
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea[284373]: [WARNING]  (284377) : All workers exited. Exiting... (0)
Feb 28 10:08:27 compute-0 systemd[1]: libpod-47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 podman[286298]: 2026-02-28 10:08:27.218517691 +0000 UTC m=+0.056488248 container died 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb-userdata-shm.mount: Deactivated successfully.
Feb 28 10:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-61722e19ae9b0b609afb647ff08cf4d136af5438e6933a350d3eae212abb430b-merged.mount: Deactivated successfully.
Feb 28 10:08:27 compute-0 podman[286298]: 2026-02-28 10:08:27.258216626 +0000 UTC m=+0.096187183 container cleanup 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.269 243456 DEBUG nova.storage.rbd_utils [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] flattening images/64671ca6-d66f-48a8-9905-66fd8ae90fc9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:08:27 compute-0 systemd[1]: libpod-conmon-47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.3112] manager: (tap91edcb1e-31): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Feb 28 10:08:27 compute-0 NetworkManager[49805]: <info>  [1772273307.3203] manager: (tapc749c499-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Feb 28 10:08:27 compute-0 podman[286386]: 2026-02-28 10:08:27.34027379 +0000 UTC m=+0.062970580 container remove 47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.344 243456 INFO nova.virt.libvirt.driver [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Instance destroyed successfully.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.346 243456 DEBUG nova.objects.instance [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'resources' on Instance uuid 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.345 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f5e604-20d3-49c4-90b4-a7dba5b0bc55]: (4, ('Sat Feb 28 10:08:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea (47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb)\n47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb\nSat Feb 28 10:08:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea (47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb)\n47fb8f8d3bf70777e9bc76625d04098aaaf32ffbe72381735def0ba5e5a80feb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.348 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7675eeef-23af-4ef1-90b6-6082da700e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.350 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape675fba7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 kernel: tape675fba7-c0: left promiscuous mode
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.366 243456 DEBUG nova.virt.libvirt.vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:24Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.367 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "address": "fa:16:3e:68:c0:a5", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.240", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef6c454-23", "ovs_interfaceid": "2ef6c454-23c6-4a31-be49-e69f506de5bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.368 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.368 243456 DEBUG os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.370 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.371 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ef6c454-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f17d3e6-2980-41df-b115-0d2f3eacce7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.385 243456 INFO os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:c0:a5,bridge_name='br-int',has_traffic_filtering=True,id=2ef6c454-23c6-4a31-be49-e69f506de5bc,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef6c454-23')
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.386 243456 DEBUG nova.virt.libvirt.vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:24Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.386 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "91edcb1e-3191-47da-be88-2834e4a98d73", "address": "fa:16:3e:c2:49:b5", "network": {"id": "26719a5f-3b38-46df-9aa1-32fdd6dfdcf9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-76581631", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.37", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91edcb1e-31", "ovs_interfaceid": "91edcb1e-3191-47da-be88-2834e4a98d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.387 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.387 243456 DEBUG os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.390 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91edcb1e-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caffb447-ce7c-4698-9c5e-ca557b512dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f5397a78-8144-4b71-a0b0-3f38fe32dc16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.399 243456 INFO os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:49:b5,bridge_name='br-int',has_traffic_filtering=True,id=91edcb1e-3191-47da-be88-2834e4a98d73,network=Network(26719a5f-3b38-46df-9aa1-32fdd6dfdcf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91edcb1e-31')
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.400 243456 DEBUG nova.virt.libvirt.vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2137083594',display_name='tempest-ServersTestMultiNic-server-2137083594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2137083594',id=49,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-wjk7kx7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:24Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=5383d7b5-e11b-47e4-8cbc-4be283dd6b16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.401 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "address": "fa:16:3e:e0:e4:b8", "network": {"id": "6dcf000f-adf8-4a38-81ff-219f317c1122", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-736483040", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc749c499-3e", "ovs_interfaceid": "c749c499-3ec6-4a11-894e-ec79cfbd8829", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.402 243456 DEBUG nova.network.os_vif_util [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.402 243456 DEBUG os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.404 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc749c499-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.409 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cec350-3fff-44a6-a3e8-f3482f0001a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479239, 'reachable_time': 18654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286466, 'error': None, 'target': 'ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 systemd[1]: run-netns-ovnmeta\x2de675fba7\x2dc78a\x2d4b4b\x2dbd7f\x2dfc505487edea.mount: Deactivated successfully.
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.413 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e675fba7-c78a-4b4b-bd7f-fc505487edea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.413 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[41b87c9f-def4-4e94-9e29-a60cfcdb0be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.414 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef6c454-23c6-4a31-be49-e69f506de5bc in datapath 6dcf000f-adf8-4a38-81ff-219f317c1122 unbound from our chassis
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.414 243456 INFO os_vif [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:e4:b8,bridge_name='br-int',has_traffic_filtering=True,id=c749c499-3ec6-4a11-894e-ec79cfbd8829,network=Network(6dcf000f-adf8-4a38-81ff-219f317c1122),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc749c499-3e')
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.416 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6dcf000f-adf8-4a38-81ff-219f317c1122, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.417 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37113afa-afd2-42d1-a7c7-9b1105645f4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.420 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122 namespace which is not needed anymore
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [NOTICE]   (285959) : haproxy version is 2.8.14-c23fe91
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [NOTICE]   (285959) : path to executable is /usr/sbin/haproxy
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [WARNING]  (285959) : Exiting Master process...
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [WARNING]  (285959) : Exiting Master process...
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [ALERT]    (285959) : Current worker (285968) exited with code 143 (Terminated)
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122[285934]: [WARNING]  (285959) : All workers exited. Exiting... (0)
Feb 28 10:08:27 compute-0 systemd[1]: libpod-d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 podman[286499]: 2026-02-28 10:08:27.554317192 +0000 UTC m=+0.052389363 container died d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df-userdata-shm.mount: Deactivated successfully.
Feb 28 10:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-00ea359792feaef24c92b36b16f64434abdfeeff8708536e89480c683d3a3854-merged.mount: Deactivated successfully.
Feb 28 10:08:27 compute-0 podman[286499]: 2026-02-28 10:08:27.592412152 +0000 UTC m=+0.090484323 container cleanup d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.604 243456 DEBUG nova.compute.manager [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-unplugged-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.604 243456 DEBUG oslo_concurrency.lockutils [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.604 243456 DEBUG oslo_concurrency.lockutils [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.605 243456 DEBUG oslo_concurrency.lockutils [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:27 compute-0 systemd[1]: libpod-conmon-d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.606 243456 DEBUG nova.compute.manager [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-unplugged-2ef6c454-23c6-4a31-be49-e69f506de5bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.606 243456 DEBUG nova.compute.manager [req-118cbd62-b446-4561-8a0c-ac4c582b0841 req-5f13d36e-6a7c-4d20-b957-6548eeceaec9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-unplugged-2ef6c454-23c6-4a31-be49-e69f506de5bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.622 243456 DEBUG nova.storage.rbd_utils [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] removing snapshot(09d8f42da6e04e3b9c80c8b4f25601e2) on rbd image(bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:08:27 compute-0 podman[286545]: 2026-02-28 10:08:27.669458406 +0000 UTC m=+0.052226158 container remove d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf3313f-3282-4549-b039-098d2ade7fb8]: (4, ('Sat Feb 28 10:08:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122 (d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df)\nd77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df\nSat Feb 28 10:08:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122 (d77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df)\nd77695227eed53b618d1f48ae7bef1bd1753c637a4cd5f16015e7abdf99857df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.675 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8370136-d884-4026-bd30-760f25e7b4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6dcf000f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 kernel: tap6dcf000f-a0: left promiscuous mode
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.688 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e51b4f3f-bf43-46fe-b26e-e25c1ec29f59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.701 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccdd44d-1699-40c4-95af-0bf640c0d9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.702 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2a6707-901e-45ae-9c03-163f2ac925b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.710 243456 INFO nova.virt.libvirt.driver [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Deleting instance files /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b_del
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.712 243456 INFO nova.virt.libvirt.driver [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Deletion of /var/lib/nova/instances/bbaf0344-f1d3-4629-b6ff-3395549aa84b_del complete
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb30db6-07a8-47d8-8bb6-52ffc2429002]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480775, 'reachable_time': 41364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286559, 'error': None, 'target': 'ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.728 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6dcf000f-adf8-4a38-81ff-219f317c1122 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.729 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fa387d20-6a7b-4f53-97ec-1d02a10d73c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.729 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 91edcb1e-3191-47da-be88-2834e4a98d73 in datapath 26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 unbound from our chassis
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.730 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.731 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af934270-9d27-4e6f-b756-1151a24b8a58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.731 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 namespace which is not needed anymore
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.768 243456 INFO nova.compute.manager [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Took 0.97 seconds to destroy the instance on the hypervisor.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.769 243456 DEBUG oslo.service.loopingcall [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.769 243456 DEBUG nova.compute.manager [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.769 243456 DEBUG nova.network.neutron [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.798 243456 INFO nova.virt.libvirt.driver [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Deleting instance files /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16_del
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.799 243456 INFO nova.virt.libvirt.driver [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Deletion of /var/lib/nova/instances/5383d7b5-e11b-47e4-8cbc-4be283dd6b16_del complete
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [NOTICE]   (286074) : haproxy version is 2.8.14-c23fe91
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [NOTICE]   (286074) : path to executable is /usr/sbin/haproxy
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [WARNING]  (286074) : Exiting Master process...
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [ALERT]    (286074) : Current worker (286076) exited with code 143 (Terminated)
Feb 28 10:08:27 compute-0 neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9[286070]: [WARNING]  (286074) : All workers exited. Exiting... (0)
Feb 28 10:08:27 compute-0 systemd[1]: libpod-3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.849 243456 INFO nova.compute.manager [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.849 243456 DEBUG oslo.service.loopingcall [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.849 243456 DEBUG nova.compute.manager [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.850 243456 DEBUG nova.network.neutron [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:08:27 compute-0 podman[286581]: 2026-02-28 10:08:27.85727133 +0000 UTC m=+0.047888416 container died 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:08:27 compute-0 podman[286581]: 2026-02-28 10:08:27.883119246 +0000 UTC m=+0.073736332 container cleanup 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:27 compute-0 systemd[1]: libpod-conmon-3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed.scope: Deactivated successfully.
Feb 28 10:08:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:27 compute-0 podman[286613]: 2026-02-28 10:08:27.930473866 +0000 UTC m=+0.034388557 container remove 3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.935 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b581f46d-b53c-47b5-86f1-a07b54f185c4]: (4, ('Sat Feb 28 10:08:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 (3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed)\n3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed\nSat Feb 28 10:08:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 (3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed)\n3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2188c0ea-e6e6-4015-9f97-abfe97955d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.937 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26719a5f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 kernel: tap26719a5f-30: left promiscuous mode
Feb 28 10:08:27 compute-0 nova_compute[243452]: 2026-02-28 10:08:27.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.950 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caef776c-a46e-4c3c-b3b1-83b2c49d5c71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.963 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5127891-22a0-4cc5-8f33-f39fdd6cd732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.964 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0809bd-11f5-4ac9-babf-fbc99077591d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[993df4ff-136e-4060-902f-a28b81b387ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480882, 'reachable_time': 37538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286627, 'error': None, 'target': 'ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.978 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26719a5f-3b38-46df-9aa1-32fdd6dfdcf9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.978 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e9d510-5771-438b-bb81-dc6a34c72070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.978 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c749c499-3ec6-4a11-894e-ec79cfbd8829 in datapath 6dcf000f-adf8-4a38-81ff-219f317c1122 unbound from our chassis
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.980 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6dcf000f-adf8-4a38-81ff-219f317c1122, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:08:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:27.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[67a5e02f-5b2f-49dd-94df-77a8560d1a3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 274 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 65 KiB/s wr, 262 op/s
Feb 28 10:08:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Feb 28 10:08:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Feb 28 10:08:28 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Feb 28 10:08:28 compute-0 ceph-mon[76304]: osdmap e168: 3 total, 3 up, 3 in
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.107 243456 DEBUG nova.storage.rbd_utils [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] creating snapshot(snap) on rbd image(64671ca6-d66f-48a8-9905-66fd8ae90fc9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-974762120dba2f637d45ced8fb0a1736fcbd2ec42a7946014bd98e97a0caa409-merged.mount: Deactivated successfully.
Feb 28 10:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c150965cb821a3199ffb2d9e631ce17e7be1a81c811e358852dcc98948861ed-userdata-shm.mount: Deactivated successfully.
Feb 28 10:08:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d26719a5f\x2d3b38\x2d46df\x2d9aa1\x2d32fdd6dfdcf9.mount: Deactivated successfully.
Feb 28 10:08:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d6dcf000f\x2dadf8\x2d4a38\x2d81ff\x2d219f317c1122.mount: Deactivated successfully.
Feb 28 10:08:28 compute-0 podman[286647]: 2026-02-28 10:08:28.345136201 +0000 UTC m=+0.058075812 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:08:28 compute-0 podman[286646]: 2026-02-28 10:08:28.381993936 +0000 UTC m=+0.102510360 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.642 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-unplugged-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.642 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.646 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.647 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.647 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-unplugged-2096b485-4121-49ca-b440-f2d4580fb1cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.648 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-unplugged-2096b485-4121-49ca-b440-f2d4580fb1cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.648 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.649 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.649 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.650 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.650 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.651 243456 WARNING nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received unexpected event network-vif-plugged-2096b485-4121-49ca-b440-f2d4580fb1cf for instance with vm_state active and task_state deleting.
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.651 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-unplugged-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.652 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.653 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.653 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.654 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-unplugged-b1941300-baa6-48a1-9264-7d10ad675ca9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.654 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-unplugged-b1941300-baa6-48a1-9264-7d10ad675ca9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.655 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.655 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.656 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.656 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.657 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] No waiting events found dispatching network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.657 243456 WARNING nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received unexpected event network-vif-plugged-b1941300-baa6-48a1-9264-7d10ad675ca9 for instance with vm_state active and task_state deleting.
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.658 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-unplugged-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.658 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.659 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.659 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.660 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-unplugged-91edcb1e-3191-47da-be88-2834e4a98d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.660 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-unplugged-91edcb1e-3191-47da-be88-2834e4a98d73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.661 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.661 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.662 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.662 243456 DEBUG oslo_concurrency.lockutils [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.663 243456 DEBUG nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:28 compute-0 nova_compute[243452]: 2026-02-28 10:08:28.663 243456 WARNING nova.compute.manager [req-8597c853-0a04-4b42-b22d-c6a4abe1710c req-73156bc1-1298-4a2a-9ede-16cbfe0cb693 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-91edcb1e-3191-47da-be88-2834e4a98d73 for instance with vm_state active and task_state deleting.
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:08:29
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'images', 'default.rgw.log']
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:08:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Feb 28 10:08:29 compute-0 ceph-mon[76304]: pgmap v1208: 305 pgs: 305 active+clean; 274 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 65 KiB/s wr, 262 op/s
Feb 28 10:08:29 compute-0 ceph-mon[76304]: osdmap e169: 3 total, 3 up, 3 in
Feb 28 10:08:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Feb 28 10:08:29 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.396 243456 DEBUG nova.network.neutron [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.428 243456 INFO nova.compute.manager [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Took 1.66 seconds to deallocate network for instance.
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.463 243456 DEBUG nova.network.neutron [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.490 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.490 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.495 243456 INFO nova.compute.manager [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Took 1.64 seconds to deallocate network for instance.
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.546 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.591 243456 DEBUG oslo_concurrency.processutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.691 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.692 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.693 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.693 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.694 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.694 243456 WARNING nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-2ef6c454-23c6-4a31-be49-e69f506de5bc for instance with vm_state deleted and task_state None.
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.695 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-unplugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.696 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.696 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.697 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.697 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-unplugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.698 243456 WARNING nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-unplugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 for instance with vm_state deleted and task_state None.
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.698 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.699 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.700 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.700 243456 DEBUG oslo_concurrency.lockutils [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.701 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] No waiting events found dispatching network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.701 243456 WARNING nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received unexpected event network-vif-plugged-c749c499-3ec6-4a11-894e-ec79cfbd8829 for instance with vm_state deleted and task_state None.
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.702 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-deleted-2096b485-4121-49ca-b440-f2d4580fb1cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:29 compute-0 nova_compute[243452]: 2026-02-28 10:08:29.703 243456 DEBUG nova.compute.manager [req-b2c63315-73f7-426a-bef4-4b865d4c0853 req-a9dfb502-afee-4da3-8af6-0e42d52725e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Received event network-vif-deleted-b1941300-baa6-48a1-9264-7d10ad675ca9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 272 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 2.7 MiB/s wr, 365 op/s
Feb 28 10:08:30 compute-0 ceph-mon[76304]: osdmap e170: 3 total, 3 up, 3 in
Feb 28 10:08:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499308393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.155 243456 DEBUG oslo_concurrency.processutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.163 243456 DEBUG nova.compute.provider_tree [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.180 243456 DEBUG nova.scheduler.client.report [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.204 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.206 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.225 243456 INFO nova.scheduler.client.report [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Deleted allocations for instance bbaf0344-f1d3-4629-b6ff-3395549aa84b
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.287 243456 DEBUG oslo_concurrency.lockutils [None req-70743c0f-dd55-4c22-960f-65f951e9e9ff eb52857f37ae4c42815767488316da21 9d6b720bb7334b139fc12e9faa051906 - - default default] Lock "bbaf0344-f1d3-4629-b6ff-3395549aa84b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.290 243456 DEBUG oslo_concurrency.processutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:08:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.815 243456 DEBUG nova.compute.manager [req-e3080c39-238d-4455-a6ea-c9872ac428b0 req-463bc6ac-530a-48a2-8a15-67c665544cb6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-deleted-c749c499-3ec6-4a11-894e-ec79cfbd8829 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.815 243456 DEBUG nova.compute.manager [req-e3080c39-238d-4455-a6ea-c9872ac428b0 req-463bc6ac-530a-48a2-8a15-67c665544cb6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-deleted-2ef6c454-23c6-4a31-be49-e69f506de5bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.816 243456 DEBUG nova.compute.manager [req-e3080c39-238d-4455-a6ea-c9872ac428b0 req-463bc6ac-530a-48a2-8a15-67c665544cb6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Received event network-vif-deleted-91edcb1e-3191-47da-be88-2834e4a98d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1057837403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.864 243456 DEBUG oslo_concurrency.processutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.870 243456 DEBUG nova.compute.provider_tree [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.888 243456 DEBUG nova.scheduler.client.report [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:30 compute-0 nova_compute[243452]: 2026-02-28 10:08:30.970 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:31 compute-0 nova_compute[243452]: 2026-02-28 10:08:31.082 243456 INFO nova.scheduler.client.report [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Deleted allocations for instance 5383d7b5-e11b-47e4-8cbc-4be283dd6b16
Feb 28 10:08:31 compute-0 nova_compute[243452]: 2026-02-28 10:08:31.160 243456 DEBUG oslo_concurrency.lockutils [None req-d087d125-e3af-4c3e-8def-b74445d3992a 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "5383d7b5-e11b-47e4-8cbc-4be283dd6b16" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:31 compute-0 ceph-mon[76304]: pgmap v1211: 305 pgs: 305 active+clean; 272 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 2.7 MiB/s wr, 365 op/s
Feb 28 10:08:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1499308393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1057837403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:31 compute-0 nova_compute[243452]: 2026-02-28 10:08:31.530 243456 INFO nova.virt.libvirt.driver [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Snapshot image upload complete
Feb 28 10:08:31 compute-0 nova_compute[243452]: 2026-02-28 10:08:31.532 243456 INFO nova.compute.manager [None req-5873a114-4ccd-425e-9370-2dc23d31d36c 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 5.37 seconds to snapshot the instance on the hypervisor.
Feb 28 10:08:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 260 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 3.3 MiB/s wr, 322 op/s
Feb 28 10:08:32 compute-0 ceph-mon[76304]: pgmap v1212: 305 pgs: 305 active+clean; 260 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 3.3 MiB/s wr, 322 op/s
Feb 28 10:08:32 compute-0 nova_compute[243452]: 2026-02-28 10:08:32.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:32 compute-0 ovn_controller[146846]: 2026-02-28T10:08:32Z|00412|binding|INFO|Releasing lport 34db03cc-96b4-407f-81c0-0835e29fb2af from this chassis (sb_readonly=0)
Feb 28 10:08:32 compute-0 nova_compute[243452]: 2026-02-28 10:08:32.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:33 compute-0 nova_compute[243452]: 2026-02-28 10:08:33.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Feb 28 10:08:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Feb 28 10:08:33 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Feb 28 10:08:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 200 op/s
Feb 28 10:08:34 compute-0 ceph-mon[76304]: osdmap e171: 3 total, 3 up, 3 in
Feb 28 10:08:34 compute-0 ceph-mon[76304]: pgmap v1214: 305 pgs: 305 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 200 op/s
Feb 28 10:08:35 compute-0 nova_compute[243452]: 2026-02-28 10:08:35.456 243456 DEBUG nova.compute.manager [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:35 compute-0 nova_compute[243452]: 2026-02-28 10:08:35.503 243456 INFO nova.compute.manager [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] instance snapshotting
Feb 28 10:08:35 compute-0 nova_compute[243452]: 2026-02-28 10:08:35.957 243456 INFO nova.virt.libvirt.driver [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Beginning live snapshot process
Feb 28 10:08:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 216 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Feb 28 10:08:36 compute-0 ovn_controller[146846]: 2026-02-28T10:08:36Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:bc:bd 10.100.0.7
Feb 28 10:08:36 compute-0 ovn_controller[146846]: 2026-02-28T10:08:36Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:bc:bd 10.100.0.7
Feb 28 10:08:36 compute-0 nova_compute[243452]: 2026-02-28 10:08:36.134 243456 DEBUG nova.virt.libvirt.imagebackend [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:08:36 compute-0 nova_compute[243452]: 2026-02-28 10:08:36.691 243456 DEBUG nova.storage.rbd_utils [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] creating snapshot(7897c09bf7f14de7b4332906a0168e25) on rbd image(bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:08:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Feb 28 10:08:37 compute-0 ceph-mon[76304]: pgmap v1215: 305 pgs: 305 active+clean; 216 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Feb 28 10:08:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Feb 28 10:08:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Feb 28 10:08:37 compute-0 nova_compute[243452]: 2026-02-28 10:08:37.104 243456 DEBUG nova.storage.rbd_utils [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] cloning vms/bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk@7897c09bf7f14de7b4332906a0168e25 to images/d6f9704e-a041-47c4-bed1-6ca11061c49a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:08:37 compute-0 nova_compute[243452]: 2026-02-28 10:08:37.216 243456 DEBUG nova.storage.rbd_utils [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] flattening images/d6f9704e-a041-47c4-bed1-6ca11061c49a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:08:37 compute-0 nova_compute[243452]: 2026-02-28 10:08:37.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:37 compute-0 nova_compute[243452]: 2026-02-28 10:08:37.678 243456 DEBUG nova.storage.rbd_utils [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] removing snapshot(7897c09bf7f14de7b4332906a0168e25) on rbd image(bbbba0d8-fff9-4f59-ab31-54ff03b71390_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:08:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Feb 28 10:08:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Feb 28 10:08:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Feb 28 10:08:37 compute-0 nova_compute[243452]: 2026-02-28 10:08:37.936 243456 DEBUG nova.storage.rbd_utils [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] creating snapshot(snap) on rbd image(d6f9704e-a041-47c4-bed1-6ca11061c49a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:08:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 244 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.6 MiB/s wr, 233 op/s
Feb 28 10:08:38 compute-0 ceph-mon[76304]: osdmap e172: 3 total, 3 up, 3 in
Feb 28 10:08:38 compute-0 ceph-mon[76304]: osdmap e173: 3 total, 3 up, 3 in
Feb 28 10:08:38 compute-0 nova_compute[243452]: 2026-02-28 10:08:38.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Feb 28 10:08:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Feb 28 10:08:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Feb 28 10:08:39 compute-0 ceph-mon[76304]: pgmap v1218: 305 pgs: 305 active+clean; 244 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.6 MiB/s wr, 233 op/s
Feb 28 10:08:39 compute-0 ceph-mon[76304]: osdmap e174: 3 total, 3 up, 3 in
Feb 28 10:08:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 270 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 8.1 MiB/s wr, 267 op/s
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594899987773892 of space, bias 1.0, pg target 0.22784699963321675 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0029771475460626047 of space, bias 1.0, pg target 0.8931442638187814 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.039504916451985e-07 of space, bias 4.0, pg target 0.0009647405899742381 quantized to 16 (current 16)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:08:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:08:41 compute-0 ceph-mon[76304]: pgmap v1220: 305 pgs: 305 active+clean; 270 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 8.1 MiB/s wr, 267 op/s
Feb 28 10:08:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 302 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 8.8 MiB/s wr, 205 op/s
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.072 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273307.0684443, bbaf0344-f1d3-4629-b6ff-3395549aa84b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.072 243456 INFO nova.compute.manager [-] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] VM Stopped (Lifecycle Event)
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.128 243456 DEBUG nova.compute.manager [None req-bbd39ebd-1c45-4231-9b56-88f04a7a239d - - - - - -] [instance: bbaf0344-f1d3-4629-b6ff-3395549aa84b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.337 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273307.332448, 5383d7b5-e11b-47e4-8cbc-4be283dd6b16 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.338 243456 INFO nova.compute.manager [-] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] VM Stopped (Lifecycle Event)
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.364 243456 DEBUG nova.compute.manager [None req-71c03b5f-81cc-405f-990c-2e2e797febc2 - - - - - -] [instance: 5383d7b5-e11b-47e4-8cbc-4be283dd6b16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.414 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.471 243456 INFO nova.virt.libvirt.driver [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Snapshot image upload complete
Feb 28 10:08:42 compute-0 nova_compute[243452]: 2026-02-28 10:08:42.472 243456 INFO nova.compute.manager [None req-103be3f9-a60e-4516-ae99-1901aa1f3104 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 6.97 seconds to snapshot the instance on the hypervisor.
Feb 28 10:08:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Feb 28 10:08:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Feb 28 10:08:42 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Feb 28 10:08:43 compute-0 ceph-mon[76304]: pgmap v1221: 305 pgs: 305 active+clean; 302 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 8.8 MiB/s wr, 205 op/s
Feb 28 10:08:43 compute-0 ceph-mon[76304]: osdmap e175: 3 total, 3 up, 3 in
Feb 28 10:08:43 compute-0 nova_compute[243452]: 2026-02-28 10:08:43.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 312 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.6 MiB/s wr, 168 op/s
Feb 28 10:08:45 compute-0 ceph-mon[76304]: pgmap v1223: 305 pgs: 305 active+clean; 312 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.6 MiB/s wr, 168 op/s
Feb 28 10:08:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:08:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1350798257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:08:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:08:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1350798257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:08:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 312 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 146 op/s
Feb 28 10:08:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Feb 28 10:08:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Feb 28 10:08:46 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Feb 28 10:08:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1350798257' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:08:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1350798257' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:08:46 compute-0 nova_compute[243452]: 2026-02-28 10:08:46.646 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:46 compute-0 nova_compute[243452]: 2026-02-28 10:08:46.647 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:46 compute-0 nova_compute[243452]: 2026-02-28 10:08:46.822 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:08:47 compute-0 ceph-mon[76304]: pgmap v1224: 305 pgs: 305 active+clean; 312 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 146 op/s
Feb 28 10:08:47 compute-0 ceph-mon[76304]: osdmap e176: 3 total, 3 up, 3 in
Feb 28 10:08:47 compute-0 nova_compute[243452]: 2026-02-28 10:08:47.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:47 compute-0 nova_compute[243452]: 2026-02-28 10:08:47.689 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:47 compute-0 nova_compute[243452]: 2026-02-28 10:08:47.690 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:47 compute-0 nova_compute[243452]: 2026-02-28 10:08:47.698 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:08:47 compute-0 nova_compute[243452]: 2026-02-28 10:08:47.699 243456 INFO nova.compute.claims [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:08:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Feb 28 10:08:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Feb 28 10:08:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Feb 28 10:08:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 281 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.018 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.019 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.019 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.019 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.020 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.022 243456 INFO nova.compute.manager [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Terminating instance
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.023 243456 DEBUG nova.compute.manager [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:08:48 compute-0 kernel: tap8feba913-86 (unregistering): left promiscuous mode
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 NetworkManager[49805]: <info>  [1772273328.1049] device (tap8feba913-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:08:48 compute-0 ovn_controller[146846]: 2026-02-28T10:08:48Z|00413|binding|INFO|Releasing lport 8feba913-868e-47d7-a6c6-50d33e14a69d from this chassis (sb_readonly=0)
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 ovn_controller[146846]: 2026-02-28T10:08:48Z|00414|binding|INFO|Setting lport 8feba913-868e-47d7-a6c6-50d33e14a69d down in Southbound
Feb 28 10:08:48 compute-0 ovn_controller[146846]: 2026-02-28T10:08:48Z|00415|binding|INFO|Removing iface tap8feba913-86 ovn-installed in OVS
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.122 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Feb 28 10:08:48 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 12.747s CPU time.
Feb 28 10:08:48 compute-0 systemd-machined[209480]: Machine qemu-56-instance-00000032 terminated.
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.183 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:bc:bd 10.100.0.7'], port_security=['fa:16:3e:5b:bc:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bbbba0d8-fff9-4f59-ab31-54ff03b71390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20018e60-73d5-4de7-9f8d-17031bc634d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b8b0d675b3747fd80cb2186e41d2ebf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4ca4e3a-cf16-433c-b8e2-2f626b510291', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68f93650-ecd0-430f-864a-a50c266ab3cd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8feba913-868e-47d7-a6c6-50d33e14a69d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.185 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8feba913-868e-47d7-a6c6-50d33e14a69d in datapath 20018e60-73d5-4de7-9f8d-17031bc634d7 unbound from our chassis
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.187 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20018e60-73d5-4de7-9f8d-17031bc634d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a77750b1-4226-415e-be61-a7b4f5806ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.190 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7 namespace which is not needed anymore
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.251 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.295 243456 INFO nova.virt.libvirt.driver [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Instance destroyed successfully.
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.296 243456 DEBUG nova.objects.instance [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lazy-loading 'resources' on Instance uuid bbbba0d8-fff9-4f59-ab31-54ff03b71390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:48 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [NOTICE]   (286155) : haproxy version is 2.8.14-c23fe91
Feb 28 10:08:48 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [NOTICE]   (286155) : path to executable is /usr/sbin/haproxy
Feb 28 10:08:48 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [WARNING]  (286155) : Exiting Master process...
Feb 28 10:08:48 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [ALERT]    (286155) : Current worker (286157) exited with code 143 (Terminated)
Feb 28 10:08:48 compute-0 neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7[286151]: [WARNING]  (286155) : All workers exited. Exiting... (0)
Feb 28 10:08:48 compute-0 systemd[1]: libpod-490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8.scope: Deactivated successfully.
Feb 28 10:08:48 compute-0 podman[286909]: 2026-02-28 10:08:48.340895362 +0000 UTC m=+0.052687701 container died 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8-userdata-shm.mount: Deactivated successfully.
Feb 28 10:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b09aa5b5321be847d2ed9b9a5640eaf4b84680b63ac2ca6f237a1f669871d6-merged.mount: Deactivated successfully.
Feb 28 10:08:48 compute-0 podman[286909]: 2026-02-28 10:08:48.399102006 +0000 UTC m=+0.110894315 container cleanup 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:08:48 compute-0 systemd[1]: libpod-conmon-490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8.scope: Deactivated successfully.
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.479 243456 DEBUG nova.virt.libvirt.vif [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:08:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-503378260',display_name='tempest-ImagesOneServerTestJSON-server-503378260',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-503378260',id=50,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:08:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b8b0d675b3747fd80cb2186e41d2ebf',ramdisk_id='',reservation_id='r-0rus32qo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1749734354',owner_user_name='tempest-ImagesOneServerTestJSON-1749734354-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:08:42Z,user_data=None,user_id='5db9d3a48c914c5ea9326b6a8c8c0f36',uuid=bbbba0d8-fff9-4f59-ab31-54ff03b71390,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.480 243456 DEBUG nova.network.os_vif_util [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converting VIF {"id": "8feba913-868e-47d7-a6c6-50d33e14a69d", "address": "fa:16:3e:5b:bc:bd", "network": {"id": "20018e60-73d5-4de7-9f8d-17031bc634d7", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1934928850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b8b0d675b3747fd80cb2186e41d2ebf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feba913-86", "ovs_interfaceid": "8feba913-868e-47d7-a6c6-50d33e14a69d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.482 243456 DEBUG nova.network.os_vif_util [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.483 243456 DEBUG os_vif [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.487 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8feba913-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.494 243456 INFO os_vif [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:bc:bd,bridge_name='br-int',has_traffic_filtering=True,id=8feba913-868e-47d7-a6c6-50d33e14a69d,network=Network(20018e60-73d5-4de7-9f8d-17031bc634d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feba913-86')
Feb 28 10:08:48 compute-0 podman[286959]: 2026-02-28 10:08:48.510657279 +0000 UTC m=+0.089974498 container remove 490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.517 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b08d9f1d-0056-4fe4-a698-981cfd9642e9]: (4, ('Sat Feb 28 10:08:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7 (490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8)\n490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8\nSat Feb 28 10:08:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7 (490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8)\n490e153039e3711357a601a835f56cb620e1b30148d23d819e98d0055d0ddca8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.520 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ece401-bc06-4f8f-8704-cf8a8d3df8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.521 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20018e60-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:08:48 compute-0 kernel: tap20018e60-70: left promiscuous mode
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.528 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea526d93-92f9-4fbf-84b4-3f6dd9f4cc0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00b38f74-ec07-4e23-acf7-a1078c93df30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.543 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf9f15a-5892-482e-99e3-edef0eb426a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.561 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a84b58b-e705-4e20-af15-9791f83401e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480980, 'reachable_time': 42962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286990, 'error': None, 'target': 'ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d20018e60\x2d73d5\x2d4de7\x2d9f8d\x2d17031bc634d7.mount: Deactivated successfully.
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.568 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20018e60-73d5-4de7-9f8d-17031bc634d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:08:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:48.569 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[615c9586-69ce-45bc-b597-fad00fdd66cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892497450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.834 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.842 243456 DEBUG nova.compute.provider_tree [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.850 243456 DEBUG nova.compute.manager [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-unplugged-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.851 243456 DEBUG oslo_concurrency.lockutils [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.851 243456 DEBUG oslo_concurrency.lockutils [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.851 243456 DEBUG oslo_concurrency.lockutils [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.851 243456 DEBUG nova.compute.manager [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] No waiting events found dispatching network-vif-unplugged-8feba913-868e-47d7-a6c6-50d33e14a69d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.852 243456 DEBUG nova.compute.manager [req-6e01c141-f094-4e0b-8a33-25fc56aa32e3 req-91e9a5b6-8bed-44c7-88b2-279866512579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-unplugged-8feba913-868e-47d7-a6c6-50d33e14a69d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.868 243456 DEBUG nova.scheduler.client.report [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.896 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.897 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:08:48 compute-0 ceph-mon[76304]: osdmap e177: 3 total, 3 up, 3 in
Feb 28 10:08:48 compute-0 ceph-mon[76304]: pgmap v1227: 305 pgs: 305 active+clean; 281 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Feb 28 10:08:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1892497450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.963 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.964 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:08:48 compute-0 nova_compute[243452]: 2026-02-28 10:08:48.992 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.030 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.040 243456 INFO nova.virt.libvirt.driver [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Deleting instance files /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390_del
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.042 243456 INFO nova.virt.libvirt.driver [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Deletion of /var/lib/nova/instances/bbbba0d8-fff9-4f59-ab31-54ff03b71390_del complete
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.124 243456 INFO nova.compute.manager [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 1.10 seconds to destroy the instance on the hypervisor.
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.124 243456 DEBUG oslo.service.loopingcall [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.125 243456 DEBUG nova.compute.manager [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.125 243456 DEBUG nova.network.neutron [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.152 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.154 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.155 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Creating image(s)
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.186 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.220 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.251 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.255 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.285 243456 DEBUG nova.policy [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '35aa1fe862a2437dbcc12fc7b0acbf91', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.332 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.332 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.333 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.333 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.361 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:08:49 compute-0 nova_compute[243452]: 2026-02-28 10:08:49.365 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6e81c007-3bdc-4baf-b310-775c3122cd14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 230 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 54 op/s
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.011 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Successfully created port: 2913b56a-9cff-4697-89c5-e6e3553b8002 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.014 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6e81c007-3bdc-4baf-b310-775c3122cd14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.113 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] resizing rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.431 243456 DEBUG nova.objects.instance [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e81c007-3bdc-4baf-b310-775c3122cd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.452 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.453 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Ensure instance console log exists: /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.453 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.454 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.454 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.750 243456 DEBUG nova.network.neutron [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.788 243456 INFO nova.compute.manager [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Took 1.66 seconds to deallocate network for instance.
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.818 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Successfully created port: cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.869 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.870 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:50 compute-0 nova_compute[243452]: 2026-02-28 10:08:50.939 243456 DEBUG oslo_concurrency.processutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.027 243456 DEBUG nova.compute.manager [req-e5b3e615-1545-40de-9cd3-7449fb944b2d req-27b6d312-5aed-4ee7-964c-0c01eb2a82b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-deleted-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:51 compute-0 ceph-mon[76304]: pgmap v1228: 305 pgs: 305 active+clean; 230 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 54 op/s
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.149 243456 DEBUG nova.compute.manager [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.150 243456 DEBUG oslo_concurrency.lockutils [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.150 243456 DEBUG oslo_concurrency.lockutils [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.151 243456 DEBUG oslo_concurrency.lockutils [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.151 243456 DEBUG nova.compute.manager [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] No waiting events found dispatching network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.152 243456 WARNING nova.compute.manager [req-0f26704a-af34-4ce4-82c5-7603aadb62c0 req-4dc13ea5-3fca-47f4-9993-82d38264df11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Received unexpected event network-vif-plugged-8feba913-868e-47d7-a6c6-50d33e14a69d for instance with vm_state deleted and task_state None.
Feb 28 10:08:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:08:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421904946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.511 243456 DEBUG oslo_concurrency.processutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.521 243456 DEBUG nova.compute.provider_tree [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.649 243456 DEBUG nova.scheduler.client.report [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.837 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:51 compute-0 nova_compute[243452]: 2026-02-28 10:08:51.941 243456 INFO nova.scheduler.client.report [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Deleted allocations for instance bbbba0d8-fff9-4f59-ab31-54ff03b71390
Feb 28 10:08:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 195 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.1 MiB/s wr, 80 op/s
Feb 28 10:08:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2421904946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:08:52 compute-0 nova_compute[243452]: 2026-02-28 10:08:52.279 243456 DEBUG oslo_concurrency.lockutils [None req-97641fc0-8be8-48e9-a946-a75f7265e921 5db9d3a48c914c5ea9326b6a8c8c0f36 3b8b0d675b3747fd80cb2186e41d2ebf - - default default] Lock "bbbba0d8-fff9-4f59-ab31-54ff03b71390" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb 28 10:08:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb 28 10:08:52 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb 28 10:08:53 compute-0 ceph-mon[76304]: pgmap v1229: 305 pgs: 305 active+clean; 195 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.1 MiB/s wr, 80 op/s
Feb 28 10:08:53 compute-0 ceph-mon[76304]: osdmap e178: 3 total, 3 up, 3 in
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.364 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Successfully updated port: 2913b56a-9cff-4697-89c5-e6e3553b8002 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.623 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.715 243456 DEBUG nova.compute.manager [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-changed-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.715 243456 DEBUG nova.compute.manager [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Refreshing instance network info cache due to event network-changed-2913b56a-9cff-4697-89c5-e6e3553b8002. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.716 243456 DEBUG oslo_concurrency.lockutils [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.716 243456 DEBUG oslo_concurrency.lockutils [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.716 243456 DEBUG nova.network.neutron [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Refreshing network info cache for port 2913b56a-9cff-4697-89c5-e6e3553b8002 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:08:53 compute-0 nova_compute[243452]: 2026-02-28 10:08:53.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:53.958 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:08:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:53.959 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:08:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 178 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.6 MiB/s wr, 96 op/s
Feb 28 10:08:54 compute-0 nova_compute[243452]: 2026-02-28 10:08:54.449 243456 DEBUG nova.network.neutron [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.010 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Successfully updated port: cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:08:55 compute-0 ceph-mon[76304]: pgmap v1231: 305 pgs: 305 active+clean; 178 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.6 MiB/s wr, 96 op/s
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.162 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.554 243456 DEBUG nova.network.neutron [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.576 243456 DEBUG oslo_concurrency.lockutils [req-8ef00435-6ab2-4fd1-a02c-6f40a0db6b1f req-0ee6034c-9522-4abc-9f0f-e11e6d1d18dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.577 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquired lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.578 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.874 243456 DEBUG nova.compute.manager [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-changed-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.875 243456 DEBUG nova.compute.manager [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Refreshing instance network info cache due to event network-changed-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.876 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:08:55 compute-0 nova_compute[243452]: 2026-02-28 10:08:55.956 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:08:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 10:08:56 compute-0 rsyslogd[1017]: imjournal: 13558 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 10:08:57 compute-0 ceph-mon[76304]: pgmap v1232: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 10:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.846 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.847 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:57.847 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:08:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:08:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Feb 28 10:08:58 compute-0 nova_compute[243452]: 2026-02-28 10:08:58.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:58 compute-0 nova_compute[243452]: 2026-02-28 10:08:58.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:08:59 compute-0 podman[287186]: 2026-02-28 10:08:59.135787915 +0000 UTC m=+0.066297873 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 10:08:59 compute-0 ceph-mon[76304]: pgmap v1233: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Feb 28 10:08:59 compute-0 podman[287185]: 2026-02-28 10:08:59.168382081 +0000 UTC m=+0.102715656 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 28 10:08:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:08:59.961 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.051 243456 DEBUG nova.network.neutron [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.076 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Releasing lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.077 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance network_info: |[{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.078 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.080 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Refreshing network info cache for port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.086 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start _get_guest_xml network_info=[{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.094 243456 WARNING nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.100 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.100 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.111 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.112 243456 DEBUG nova.virt.libvirt.host [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.113 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.113 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.114 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.115 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.115 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.116 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.117 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.117 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.118 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.118 243456 DEBUG nova.virt.hardware [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.123 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161285979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.682 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.721 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.726 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:00 compute-0 nova_compute[243452]: 2026-02-28 10:09:00.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 ceph-mon[76304]: pgmap v1234: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:09:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/161285979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320021054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.328 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.331 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.331 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.333 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.334 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.335 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.336 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.338 243456 DEBUG nova.objects.instance [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e81c007-3bdc-4baf-b310-775c3122cd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.358 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <uuid>6e81c007-3bdc-4baf-b310-775c3122cd14</uuid>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <name>instance-00000033</name>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestMultiNic-server-2003718246</nova:name>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:00</nova:creationTime>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:user uuid="35aa1fe862a2437dbcc12fc7b0acbf91">tempest-ServersTestMultiNic-116334619-project-member</nova:user>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:project uuid="30cb5e2d14fb4fb7a9d37cf231549329">tempest-ServersTestMultiNic-116334619</nova:project>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:port uuid="2913b56a-9cff-4697-89c5-e6e3553b8002">
Feb 28 10:09:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.207" ipVersion="4"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <nova:port uuid="cf3648de-8d18-4aa3-9cb9-b7a38ba349c7">
Feb 28 10:09:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.1.62" ipVersion="4"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="serial">6e81c007-3bdc-4baf-b310-775c3122cd14</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="uuid">6e81c007-3bdc-4baf-b310-775c3122cd14</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6e81c007-3bdc-4baf-b310-775c3122cd14_disk">
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config">
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:cb:85:32"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <target dev="tap2913b56a-9c"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:ea:a4:97"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <target dev="tapcf3648de-8d"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/console.log" append="off"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:01 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:01 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.360 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Preparing to wait for external event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.361 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.361 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Preparing to wait for external event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.362 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.363 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.363 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.365 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.365 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.366 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.367 243456 DEBUG os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.368 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.369 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.374 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2913b56a-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.375 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2913b56a-9c, col_values=(('external_ids', {'iface-id': '2913b56a-9cff-4697-89c5-e6e3553b8002', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:85:32', 'vm-uuid': '6e81c007-3bdc-4baf-b310-775c3122cd14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 NetworkManager[49805]: <info>  [1772273341.3792] manager: (tap2913b56a-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.384 243456 INFO os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c')
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.385 243456 DEBUG nova.virt.libvirt.vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:08:49Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.385 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.386 243456 DEBUG nova.network.os_vif_util [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.387 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.388 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.391 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf3648de-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.392 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf3648de-8d, col_values=(('external_ids', {'iface-id': 'cf3648de-8d18-4aa3-9cb9-b7a38ba349c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:a4:97', 'vm-uuid': '6e81c007-3bdc-4baf-b310-775c3122cd14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:01 compute-0 NetworkManager[49805]: <info>  [1772273341.3946] manager: (tapcf3648de-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.401 243456 INFO os_vif [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d')
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.604 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.605 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.606 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:cb:85:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.606 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] No VIF found with MAC fa:16:3e:ea:a4:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.607 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Using config drive
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.636 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.943 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updated VIF entry in instance network info cache for port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:09:01 compute-0 nova_compute[243452]: 2026-02-28 10:09:01.944 243456 DEBUG nova.network.neutron [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [{"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.012 243456 DEBUG oslo_concurrency.lockutils [req-7e6a0d1c-f191-4cc1-9a70-d626285b675e req-f7ea98ce-1010-456b-8802-cb496fb327b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6e81c007-3bdc-4baf-b310-775c3122cd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/320021054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.558 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Creating config drive at /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.564 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3g6je2_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.711 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3g6je2_4" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.745 243456 DEBUG nova.storage.rbd_utils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] rbd image 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.749 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.911 243456 DEBUG oslo_concurrency.processutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config 6e81c007-3bdc-4baf-b310-775c3122cd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.912 243456 INFO nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deleting local config drive /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14/disk.config because it was imported into RBD.
Feb 28 10:09:02 compute-0 kernel: tap2913b56a-9c: entered promiscuous mode
Feb 28 10:09:02 compute-0 NetworkManager[49805]: <info>  [1772273342.9714] manager: (tap2913b56a-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Feb 28 10:09:02 compute-0 ovn_controller[146846]: 2026-02-28T10:09:02Z|00416|binding|INFO|Claiming lport 2913b56a-9cff-4697-89c5-e6e3553b8002 for this chassis.
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:02 compute-0 ovn_controller[146846]: 2026-02-28T10:09:02Z|00417|binding|INFO|2913b56a-9cff-4697-89c5-e6e3553b8002: Claiming fa:16:3e:cb:85:32 10.100.0.207
Feb 28 10:09:02 compute-0 NetworkManager[49805]: <info>  [1772273342.9853] manager: (tapcf3648de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Feb 28 10:09:02 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:02 compute-0 systemd-udevd[287370]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:02 compute-0 kernel: tapcf3648de-8d: entered promiscuous mode
Feb 28 10:09:02 compute-0 systemd-udevd[287371]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:02.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00418|if_status|INFO|Not updating pb chassis for cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 now as sb is readonly
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.0130] device (tap2913b56a-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.0150] device (tap2913b56a-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.0162] device (tapcf3648de-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.0170] device (tapcf3648de-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:03 compute-0 systemd-machined[209480]: New machine qemu-57-instance-00000033.
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00419|binding|INFO|Claiming lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for this chassis.
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00420|binding|INFO|cf3648de-8d18-4aa3-9cb9-b7a38ba349c7: Claiming fa:16:3e:ea:a4:97 10.100.1.62
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.024 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:85:32 10.100.0.207'], port_security=['fa:16:3e:cb:85:32 10.100.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.207/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03dc6239-598e-4b30-827f-ab655e778931, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2913b56a-9cff-4697-89c5-e6e3553b8002) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.025 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2913b56a-9cff-4697-89c5-e6e3553b8002 in datapath ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 bound to our chassis
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00421|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 ovn-installed in OVS
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00422|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 up in Southbound
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.026 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee9d9c00-c168-42ad-9b0a-3feb4cee5b51
Feb 28 10:09:03 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.038 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4ab0c5-e8f6-4814-86a7-209742463f58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.038 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee9d9c00-c1 in ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.040 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee9d9c00-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a98391ab-dccc-4655-a51a-6b925efbcddf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11a0ce7d-5be9-4219-a41a-522689be8960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00423|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 ovn-installed in OVS
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.055 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f9676f-6c62-4276-9034-22d80b0248fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.068 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8524699c-70fe-493b-8123-539a772dac4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.091 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61889b51-a190-47be-b262-b71e996450bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.0987] manager: (tapee9d9c00-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.098 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[380bd802-538f-4648-9f8b-ce3031b2ed15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.126 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9d7fea76-435f-464d-888c-ab2f82b78231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.129 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[011b69c8-bf24-4d94-8497-11d73dd16a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.1504] device (tapee9d9c00-c0): carrier: link connected
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.157 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0a0eaf-e87d-4ff3-a537-4d5270fe11b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ceph-mon[76304]: pgmap v1235: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.174 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12d680b2-911d-45aa-99f3-5e2fc64a0199]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee9d9c00-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:05:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485043, 'reachable_time': 35618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287407, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00424|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 up in Southbound
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.187 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:a4:97 10.100.1.62'], port_security=['fa:16:3e:ea:a4:97 10.100.1.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.62/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=641263bc-b0b1-40e0-a9a7-6631a5b53fe3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48eb89c5-ebf9-43bf-89a4-78fb67fdddc8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:56a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485043, 'tstamp': 485043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287408, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.202 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[526cdef8-7328-4b31-b2c4-ebb165e63090]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee9d9c00-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:05:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485043, 'reachable_time': 35618, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287409, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15ae1078-17f8-4331-a0b1-75652ab0510b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.292 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273328.271685, bbbba0d8-fff9-4f59-ab31-54ff03b71390 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.292 243456 INFO nova.compute.manager [-] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] VM Stopped (Lifecycle Event)
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.302 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ddc0e-9eb4-4d33-b3df-1f9663d3adf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee9d9c00-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.304 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.305 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee9d9c00-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:03 compute-0 kernel: tapee9d9c00-c0: entered promiscuous mode
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.3087] manager: (tapee9d9c00-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.319 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee9d9c00-c0, col_values=(('external_ids', {'iface-id': '0492ad6c-9098-45c5-863a-7d90483ac932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:03 compute-0 ovn_controller[146846]: 2026-02-28T10:09:03Z|00425|binding|INFO|Releasing lport 0492ad6c-9098-45c5-863a-7d90483ac932 from this chassis (sb_readonly=0)
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.329 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[538227cf-ee48-499f-9183-e665c9f6356a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.332 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.pid.haproxy
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ee9d9c00-c168-42ad-9b0a-3feb4cee5b51
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.335 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'env', 'PROCESS_TAG=haproxy-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee9d9c00-c168-42ad-9b0a-3feb4cee5b51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.558 243456 DEBUG nova.compute.manager [None req-c7d49cd4-1e50-4c4c-9c47-50ca8b7f8f0e - - - - - -] [instance: bbbba0d8-fff9-4f59-ab31-54ff03b71390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:03 compute-0 nova_compute[243452]: 2026-02-28 10:09:03.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:03 compute-0 podman[287441]: 2026-02-28 10:09:03.679115905 +0000 UTC m=+0.050577881 container create d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:09:03 compute-0 systemd[1]: Started libpod-conmon-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope.
Feb 28 10:09:03 compute-0 podman[287441]: 2026-02-28 10:09:03.650612105 +0000 UTC m=+0.022074101 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45fadf675fa9eecd4223d28e0b263c5d180910a3ada48d4b575cb25eeb657e56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:03 compute-0 podman[287441]: 2026-02-28 10:09:03.767392965 +0000 UTC m=+0.138854951 container init d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:03 compute-0 podman[287441]: 2026-02-28 10:09:03.772554009 +0000 UTC m=+0.144015985 container start d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:03 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : New worker (287463) forked
Feb 28 10:09:03 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : Loading success.
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 in datapath 6393174b-f4f0-476b-a7a9-02f4c8a0425f unbound from our chassis
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.845 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6393174b-f4f0-476b-a7a9-02f4c8a0425f
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.854 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07a5bb19-6420-41d8-9b40-18371c81fb71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.855 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6393174b-f1 in ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.857 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6393174b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fbddff-a631-4f86-a498-bc6ad3f321ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.859 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d4e4f9-8a09-41c5-ac64-74f72307b273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.870 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[54a13eb7-a1b6-4808-899c-9f56975ee766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.882 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b340c61b-06a9-43f6-aa4a-00e0a1032224]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.906 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[774a7e69-fd11-4bb8-be90-e9b5747969cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 systemd-udevd[287397]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.9181] manager: (tap6393174b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.915 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8bf05d-9735-4173-9877-a7adfde9e142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.954 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1d1331-f08d-4d1c-b67a-14ade0aad2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.958 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1256b6b5-cbb3-4fb0-82d7-a8af582508d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:03 compute-0 NetworkManager[49805]: <info>  [1772273343.9854] device (tap6393174b-f0): carrier: link connected
Feb 28 10:09:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:03.993 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[533bda35-55b7-4993-89f2-28d9073830b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 823 KiB/s wr, 12 op/s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.016 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4935cca4-628b-4684-8f2a-fef00f309925]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6393174b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:25:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485127, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287493, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.039 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bd94c1-42b7-48e5-bb4d-5276abd4e4ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:25df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485127, 'tstamp': 485127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287508, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e80e40a-4a04-4851-84ba-46fd374b7350]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6393174b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:25:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485127, 'reachable_time': 19594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287517, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d161cb6d-a46d-4cab-a23f-f1d368ecb775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.153 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[439bd937-aee2-47af-bdd7-a2176fc9860b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.154 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6393174b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.155 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.155 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6393174b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:04 compute-0 kernel: tap6393174b-f0: entered promiscuous mode
Feb 28 10:09:04 compute-0 NetworkManager[49805]: <info>  [1772273344.1586] manager: (tap6393174b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.163 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6393174b-f0, col_values=(('external_ids', {'iface-id': 'ae3d677c-b1f9-4238-be1d-1c968c276024'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:04 compute-0 ovn_controller[146846]: 2026-02-28T10:09:04Z|00426|binding|INFO|Releasing lport ae3d677c-b1f9-4238-be1d-1c968c276024 from this chassis (sb_readonly=0)
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.164 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273344.1633954, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.166 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Started (Lifecycle Event)
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.166 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[67f66493-de63-4cb1-89b1-baef20b164b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.168 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-6393174b-f4f0-476b-a7a9-02f4c8a0425f
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/6393174b-f4f0-476b-a7a9-02f4c8a0425f.pid.haproxy
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 6393174b-f4f0-476b-a7a9-02f4c8a0425f
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:04.168 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'env', 'PROCESS_TAG=haproxy-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6393174b-f4f0-476b-a7a9-02f4c8a0425f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.185 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.190 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273344.1636717, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.190 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Paused (Lifecycle Event)
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.218 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:04 compute-0 nova_compute[243452]: 2026-02-28 10:09:04.336 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:04 compute-0 podman[287559]: 2026-02-28 10:09:04.577153446 +0000 UTC m=+0.063918976 container create 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:04 compute-0 systemd[1]: Started libpod-conmon-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope.
Feb 28 10:09:04 compute-0 podman[287559]: 2026-02-28 10:09:04.54632385 +0000 UTC m=+0.033089430 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28076557dba7d747e9b7cbfa4fae2599e2a1294bb9f8c658c1b2eee3c50b0bf7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:04 compute-0 podman[287559]: 2026-02-28 10:09:04.663978014 +0000 UTC m=+0.150743504 container init 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:04 compute-0 podman[287559]: 2026-02-28 10:09:04.674255693 +0000 UTC m=+0.161021173 container start 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:09:04 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : New worker (287580) forked
Feb 28 10:09:04 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : Loading success.
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.149 243456 DEBUG nova.compute.manager [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.150 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG oslo_concurrency.lockutils [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.151 243456 DEBUG nova.compute.manager [req-425086b3-6cd7-404b-8710-cee90fc56332 req-cb2d69d2-e3ca-4bd3-a3dd-f0f5b3658130 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Processing event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:05 compute-0 ceph-mon[76304]: pgmap v1236: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 823 KiB/s wr, 12 op/s
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.245 243456 DEBUG nova.compute.manager [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG oslo_concurrency.lockutils [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.246 243456 DEBUG nova.compute.manager [req-5828a295-5e8b-48f3-a70e-858695c0fa65 req-9103aeb3-06e6-4525-b599-4e5f4343ccf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Processing event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.247 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.251 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273345.2511153, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.252 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Resumed (Lifecycle Event)
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.255 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.259 243456 INFO nova.virt.libvirt.driver [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance spawned successfully.
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.260 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.294 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.304 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.310 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.311 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.312 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.313 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.313 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.314 243456 DEBUG nova.virt.libvirt.driver [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.448 243456 INFO nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 16.29 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.449 243456 DEBUG nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.519 243456 INFO nova.compute.manager [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 17.86 seconds to build instance.
Feb 28 10:09:05 compute-0 nova_compute[243452]: 2026-02-28 10:09:05.541 243456 DEBUG oslo_concurrency.lockutils [None req-e4f0e991-3775-4175-be37-95300f5bc68f 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 772 KiB/s wr, 21 op/s
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.184 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.184 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.204 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.294 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.295 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.304 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.304 243456 INFO nova.compute.claims [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.397 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.415 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.415 243456 DEBUG nova.compute.provider_tree [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.435 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.464 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:09:06 compute-0 nova_compute[243452]: 2026-02-28 10:09:06.522 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888599738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.111 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.119 243456 DEBUG nova.compute.provider_tree [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.136 243456 DEBUG nova.scheduler.client.report [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.161 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.162 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:07 compute-0 ceph-mon[76304]: pgmap v1237: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 772 KiB/s wr, 21 op/s
Feb 28 10:09:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/888599738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.214 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.214 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.215 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.215 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.216 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.217 243456 INFO nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Terminating instance
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.218 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.241 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:07 compute-0 kernel: tap2913b56a-9c (unregistering): left promiscuous mode
Feb 28 10:09:07 compute-0 NetworkManager[49805]: <info>  [1772273347.2592] device (tap2913b56a-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00427|binding|INFO|Releasing lport 2913b56a-9cff-4697-89c5-e6e3553b8002 from this chassis (sb_readonly=0)
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00428|binding|INFO|Setting lport 2913b56a-9cff-4697-89c5-e6e3553b8002 down in Southbound
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.269 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00429|binding|INFO|Removing iface tap2913b56a-9c ovn-installed in OVS
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.277 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:85:32 10.100.0.207'], port_security=['fa:16:3e:cb:85:32 10.100.0.207'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.207/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=03dc6239-598e-4b30-827f-ab655e778931, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2913b56a-9cff-4697-89c5-e6e3553b8002) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:07 compute-0 kernel: tapcf3648de-8d (unregistering): left promiscuous mode
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.279 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2913b56a-9cff-4697-89c5-e6e3553b8002 in datapath ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 unbound from our chassis
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.281 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:07 compute-0 NetworkManager[49805]: <info>  [1772273347.2823] device (tapcf3648de-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a016d9-798d-4bcc-8927-db90a299cc57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.283 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 namespace which is not needed anymore
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00430|binding|INFO|Releasing lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 from this chassis (sb_readonly=0)
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00431|binding|INFO|Setting lport cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 down in Southbound
Feb 28 10:09:07 compute-0 ovn_controller[146846]: 2026-02-28T10:09:07Z|00432|binding|INFO|Removing iface tapcf3648de-8d ovn-installed in OVS
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.294 243456 DEBUG nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG oslo_concurrency.lockutils [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 DEBUG nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.295 243456 WARNING nova.compute.manager [req-86b80a8c-6bad-4cd7-8220-3263285ca2ad req-3474af61-7cfa-4730-848c-8e4660ba9c1c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with vm_state active and task_state deleting.
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:a4:97 10.100.1.62'], port_security=['fa:16:3e:ea:a4:97 10.100.1.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.62/24', 'neutron:device_id': '6e81c007-3bdc-4baf-b310-775c3122cd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30cb5e2d14fb4fb7a9d37cf231549329', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd203f6f-2b13-4be2-b266-8607f489044a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=641263bc-b0b1-40e0-a9a7-6631a5b53fe3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.301 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Feb 28 10:09:07 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 3.156s CPU time.
Feb 28 10:09:07 compute-0 systemd-machined[209480]: Machine qemu-57-instance-00000033 terminated.
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.366 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.369 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.370 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating image(s)
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.396 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [NOTICE]   (287461) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [WARNING]  (287461) : Exiting Master process...
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [ALERT]    (287461) : Current worker (287463) exited with code 143 (Terminated)
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51[287457]: [WARNING]  (287461) : All workers exited. Exiting... (0)
Feb 28 10:09:07 compute-0 systemd[1]: libpod-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope: Deactivated successfully.
Feb 28 10:09:07 compute-0 podman[287639]: 2026-02-28 10:09:07.415431539 +0000 UTC m=+0.047716740 container died d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.422 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:07 compute-0 NetworkManager[49805]: <info>  [1772273347.4544] manager: (tapcf3648de-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Feb 28 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fadf675fa9eecd4223d28e0b263c5d180910a3ada48d4b575cb25eeb657e56-merged.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 podman[287639]: 2026-02-28 10:09:07.471873554 +0000 UTC m=+0.104158755 container cleanup d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:09:07 compute-0 systemd[1]: libpod-conmon-d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709.scope: Deactivated successfully.
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.479 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.487 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.523 243456 DEBUG nova.policy [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.530 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 DEBUG oslo_concurrency.lockutils [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 DEBUG nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.531 243456 WARNING nova.compute.manager [req-8b4e0c43-c8ba-4da9-b825-d63974f32fe2 req-160fcd7f-6bc7-4f56-b136-38a77a624ae8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state active and task_state deleting.
Feb 28 10:09:07 compute-0 podman[287747]: 2026-02-28 10:09:07.535855861 +0000 UTC m=+0.041040074 container remove d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.536 243456 INFO nova.virt.libvirt.driver [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Instance destroyed successfully.
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.536 243456 DEBUG nova.objects.instance [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lazy-loading 'resources' on Instance uuid 6e81c007-3bdc-4baf-b310-775c3122cd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d2da008a-d4f6-48c5-9c06-7738cb477ffe]: (4, ('Sat Feb 28 10:09:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 (d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709)\nd888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709\nSat Feb 28 10:09:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 (d888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709)\nd888b70aedb141859edd79f0b4738d06cf86f6e9064ae00b8f2cdc71a3784709\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.544 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f682762-bb97-4a12-9d27-ef82274f9515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.545 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee9d9c00-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:07 compute-0 kernel: tapee9d9c00-c0: left promiscuous mode
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.552 243456 DEBUG nova.virt.libvirt.vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:05Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.552 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "2913b56a-9cff-4697-89c5-e6e3553b8002", "address": "fa:16:3e:cb:85:32", "network": {"id": "ee9d9c00-c168-42ad-9b0a-3feb4cee5b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2066554005", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.207", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2913b56a-9c", "ovs_interfaceid": "2913b56a-9cff-4697-89c5-e6e3553b8002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.553 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.553 243456 DEBUG os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.556 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2913b56a-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.560 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.561 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac776044-5dc6-4669-b934-674003a89ebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04e07c7c-22a8-4a84-8f03-b544e28026df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7bb19-9eca-40ac-bc8d-3ed891cd9922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.583 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.586 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28efb751-a1c0-4c7c-8845-1571471852c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485037, 'reachable_time': 19249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287784, 'error': None, 'target': 'ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.593 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee9d9c00-c168-42ad-9b0a-3feb4cee5b51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.593 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3a8f16-91e3-4480-a7fb-d9ec56a4c419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.594 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 in datapath 6393174b-f4f0-476b-a7a9-02f4c8a0425f unbound from our chassis
Feb 28 10:09:07 compute-0 systemd[1]: run-netns-ovnmeta\x2dee9d9c00\x2dc168\x2d42ad\x2d9b0a\x2d3feb4cee5b51.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.596 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6393174b-f4f0-476b-a7a9-02f4c8a0425f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88417aac-4db3-4f6a-b853-381c4928a70a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.597 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f namespace which is not needed anymore
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.614 243456 INFO os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:85:32,bridge_name='br-int',has_traffic_filtering=True,id=2913b56a-9cff-4697-89c5-e6e3553b8002,network=Network(ee9d9c00-c168-42ad-9b0a-3feb4cee5b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2913b56a-9c')
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.615 243456 DEBUG nova.virt.libvirt.vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:08:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-2003718246',display_name='tempest-ServersTestMultiNic-server-2003718246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-2003718246',id=51,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30cb5e2d14fb4fb7a9d37cf231549329',ramdisk_id='',reservation_id='r-2f5u8tlh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-116334619',owner_user_name='tempest-ServersTestMultiNic-116334619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:05Z,user_data=None,user_id='35aa1fe862a2437dbcc12fc7b0acbf91',uuid=6e81c007-3bdc-4baf-b310-775c3122cd14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.615 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converting VIF {"id": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "address": "fa:16:3e:ea:a4:97", "network": {"id": "6393174b-f4f0-476b-a7a9-02f4c8a0425f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-40536825", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30cb5e2d14fb4fb7a9d37cf231549329", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf3648de-8d", "ovs_interfaceid": "cf3648de-8d18-4aa3-9cb9-b7a38ba349c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.616 243456 DEBUG nova.network.os_vif_util [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.616 243456 DEBUG os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.618 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf3648de-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.623 243456 INFO os_vif [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:a4:97,bridge_name='br-int',has_traffic_filtering=True,id=cf3648de-8d18-4aa3-9cb9-b7a38ba349c7,network=Network(6393174b-f4f0-476b-a7a9-02f4c8a0425f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf3648de-8d')
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [NOTICE]   (287578) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [WARNING]  (287578) : Exiting Master process...
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [ALERT]    (287578) : Current worker (287580) exited with code 143 (Terminated)
Feb 28 10:09:07 compute-0 neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f[287574]: [WARNING]  (287578) : All workers exited. Exiting... (0)
Feb 28 10:09:07 compute-0 systemd[1]: libpod-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope: Deactivated successfully.
Feb 28 10:09:07 compute-0 conmon[287574]: conmon 0f754d2119676e96a4ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope/container/memory.events
Feb 28 10:09:07 compute-0 podman[287838]: 2026-02-28 10:09:07.727819232 +0000 UTC m=+0.053351979 container died 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-28076557dba7d747e9b7cbfa4fae2599e2a1294bb9f8c658c1b2eee3c50b0bf7-merged.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 podman[287838]: 2026-02-28 10:09:07.791952513 +0000 UTC m=+0.117485250 container cleanup 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:09:07 compute-0 systemd[1]: libpod-conmon-0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145.scope: Deactivated successfully.
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.808 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:07 compute-0 podman[287870]: 2026-02-28 10:09:07.849425958 +0000 UTC m=+0.042501225 container remove 0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.855 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6deeb59a-37ff-445d-884f-46131d04a930]: (4, ('Sat Feb 28 10:09:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f (0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145)\n0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145\nSat Feb 28 10:09:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f (0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145)\n0f754d2119676e96a4ffcde5fded3b009951612e3e462a76561cf1382df07145\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.857 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb12d74c-ce6c-4ce3-b106-ecb7e9238cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.858 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6393174b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:07 compute-0 kernel: tap6393174b-f0: left promiscuous mode
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[405016b3-161c-45e8-be52-4e6f364ec571]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[87f1a413-49f7-4597-a4f9-8e7605548dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.891 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab8e1f7-0ca2-43dc-95c5-102cfab70e15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.894 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.905 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfbd7aa-1f8f-4ae8-9165-b2ba84decee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485118, 'reachable_time': 23527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.908 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6393174b-f4f0-476b-a7a9-02f4c8a0425f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:07.909 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[14c4b276-7e38-4a29-a7d7-922e01d1477d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d6393174b\x2df4f0\x2d476b\x2da7a9\x2d02f4c8a0425f.mount: Deactivated successfully.
Feb 28 10:09:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.977 243456 INFO nova.virt.libvirt.driver [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deleting instance files /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14_del
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.978 243456 INFO nova.virt.libvirt.driver [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deletion of /var/lib/nova/instances/6e81c007-3bdc-4baf-b310-775c3122cd14_del complete
Feb 28 10:09:07 compute-0 nova_compute[243452]: 2026-02-28 10:09:07.984 243456 DEBUG nova.objects.instance [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.005 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.005 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Ensure instance console log exists: /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 24 op/s
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.033 243456 INFO nova.compute.manager [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 0.82 seconds to destroy the instance on the hypervisor.
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.034 243456 DEBUG oslo.service.loopingcall [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.035 243456 DEBUG nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.036 243456 DEBUG nova.network.neutron [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:09:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 28 10:09:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb 28 10:09:08 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:08 compute-0 nova_compute[243452]: 2026-02-28 10:09:08.641 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Successfully created port: d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:09:09 compute-0 ceph-mon[76304]: pgmap v1238: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 24 op/s
Feb 28 10:09:09 compute-0 ceph-mon[76304]: osdmap e179: 3 total, 3 up, 3 in
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.874 243456 DEBUG nova.network.neutron [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.905 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.905 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.906 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.907 243456 DEBUG oslo_concurrency.lockutils [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.908 243456 DEBUG nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.908 243456 WARNING nova.compute.manager [req-605e5e24-ce6b-49ec-b4fb-51dc5a0c774c req-8d34bf1f-da09-4501-a343-06ebe10a92a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 for instance with vm_state active and task_state deleting.
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.914 243456 INFO nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Took 1.88 seconds to deallocate network for instance.
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.973 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:09 compute-0 nova_compute[243452]: 2026-02-28 10:09:09.974 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 185 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 280 KiB/s wr, 95 op/s
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.143 243456 DEBUG oslo_concurrency.processutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.209 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.210 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 WARNING nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-unplugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state deleted and task_state None.
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.211 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG oslo_concurrency.lockutils [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] No waiting events found dispatching network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 WARNING nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received unexpected event network-vif-plugged-2913b56a-9cff-4697-89c5-e6e3553b8002 for instance with vm_state deleted and task_state None.
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.212 243456 DEBUG nova.compute.manager [req-ba092d40-26a6-4c6c-9b40-90641a6abc74 req-df99f6a8-ed47-4ca7-8ff3-0ee56d2dc80d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-deleted-cf3648de-8d18-4aa3-9cb9-b7a38ba349c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.538 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Successfully updated port: d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.554 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.554 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.555 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:09:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405360413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.739 243456 DEBUG oslo_concurrency.processutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.746 243456 DEBUG nova.compute.provider_tree [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.772 243456 DEBUG nova.scheduler.client.report [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.837 243456 INFO nova.scheduler.client.report [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Deleted allocations for instance 6e81c007-3bdc-4baf-b310-775c3122cd14
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.930 243456 DEBUG oslo_concurrency.lockutils [None req-4ddd607d-73c9-4ec0-a73e-c87e82354665 35aa1fe862a2437dbcc12fc7b0acbf91 30cb5e2d14fb4fb7a9d37cf231549329 - - default default] Lock "6e81c007-3bdc-4baf-b310-775c3122cd14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:10 compute-0 nova_compute[243452]: 2026-02-28 10:09:10.933 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:09:11 compute-0 ceph-mon[76304]: pgmap v1240: 305 pgs: 305 active+clean; 185 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 280 KiB/s wr, 95 op/s
Feb 28 10:09:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/405360413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:11 compute-0 nova_compute[243452]: 2026-02-28 10:09:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 189 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 937 KiB/s wr, 147 op/s
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.138 243456 DEBUG nova.network.neutron [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.157 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.157 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance network_info: |[{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.160 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start _get_guest_xml network_info=[{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.167 243456 WARNING nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.172 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.173 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.176 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.176 243456 DEBUG nova.virt.libvirt.host [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.177 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.177 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.178 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.179 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.180 243456 DEBUG nova.virt.hardware [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.183 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 28 10:09:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb 28 10:09:12 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.421 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Received event network-vif-deleted-2913b56a-9cff-4697-89c5-e6e3553b8002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.422 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-changed-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.423 243456 DEBUG nova.compute.manager [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Refreshing instance network info cache due to event network-changed-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.424 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.424 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.425 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Refreshing network info cache for port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840125212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.723 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.752 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.758 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:12 compute-0 nova_compute[243452]: 2026-02-28 10:09:12.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:13 compute-0 ceph-mon[76304]: pgmap v1241: 305 pgs: 305 active+clean; 189 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 937 KiB/s wr, 147 op/s
Feb 28 10:09:13 compute-0 ceph-mon[76304]: osdmap e180: 3 total, 3 up, 3 in
Feb 28 10:09:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2840125212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982194424' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.343 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.345 243456 DEBUG nova.virt.libvirt.vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5
15886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.346 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.348 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.349 243456 DEBUG nova.objects.instance [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.367 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <uuid>51a0e59a-81d0-4f05-bb13-5ca025288da2</uuid>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <name>instance-00000034</name>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:name>tempest-DeleteServersTestJSON-server-1328994048</nova:name>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:12</nova:creationTime>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <nova:port uuid="d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf">
Feb 28 10:09:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="serial">51a0e59a-81d0-4f05-bb13-5ca025288da2</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="uuid">51a0e59a-81d0-4f05-bb13-5ca025288da2</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/51a0e59a-81d0-4f05-bb13-5ca025288da2_disk">
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config">
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:2d:11:10"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <target dev="tapd85f2b17-bc"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/console.log" append="off"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.368 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Preparing to wait for external event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.368 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.369 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.369 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.370 243456 DEBUG nova.virt.libvirt.vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServers
TestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.370 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.371 243456 DEBUG nova.network.os_vif_util [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.371 243456 DEBUG os_vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.372 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.373 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.376 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.376 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd85f2b17-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.377 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd85f2b17-bc, col_values=(('external_ids', {'iface-id': 'd85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:11:10', 'vm-uuid': '51a0e59a-81d0-4f05-bb13-5ca025288da2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:13 compute-0 NetworkManager[49805]: <info>  [1772273353.3795] manager: (tapd85f2b17-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.388 243456 INFO os_vif [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc')
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.454 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.455 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.455 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:2d:11:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.456 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Using config drive
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.478 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.844 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Creating config drive at /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.849 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpggyudb5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:13 compute-0 nova_compute[243452]: 2026-02-28 10:09:13.983 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpggyudb5f" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 200 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.017 243456 DEBUG nova.storage.rbd_utils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.023 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.113 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updated VIF entry in instance network info cache for port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.114 243456 DEBUG nova.network.neutron [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [{"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.130 243456 DEBUG oslo_concurrency.lockutils [req-0b9940a3-cb02-454c-af1f-ad0981dbf077 req-74831c3b-87cc-4210-8e67-f0573e0acc98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-51a0e59a-81d0-4f05-bb13-5ca025288da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.209 243456 DEBUG oslo_concurrency.processutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config 51a0e59a-81d0-4f05-bb13-5ca025288da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.210 243456 INFO nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deleting local config drive /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2/disk.config because it was imported into RBD.
Feb 28 10:09:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2982194424' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:14 compute-0 kernel: tapd85f2b17-bc: entered promiscuous mode
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.2616] manager: (tapd85f2b17-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Feb 28 10:09:14 compute-0 ovn_controller[146846]: 2026-02-28T10:09:14Z|00433|binding|INFO|Claiming lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for this chassis.
Feb 28 10:09:14 compute-0 ovn_controller[146846]: 2026-02-28T10:09:14Z|00434|binding|INFO|d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf: Claiming fa:16:3e:2d:11:10 10.100.0.12
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.280 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:10 10.100.0.12'], port_security=['fa:16:3e:2d:11:10 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51a0e59a-81d0-4f05-bb13-5ca025288da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.282 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.284 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:14 compute-0 systemd-udevd[288115]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:14 compute-0 systemd-machined[209480]: New machine qemu-58-instance-00000034.
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2961852e-4395-49e4-a7fb-b430ed64eaaf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.297 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.299 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5838e4-9fb1-4d6d-89c9-fb13308eca71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.3013] device (tapd85f2b17-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.3017] device (tapd85f2b17-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.300 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4056b401-721c-4784-a563-c18742e29e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000034.
Feb 28 10:09:14 compute-0 ovn_controller[146846]: 2026-02-28T10:09:14Z|00435|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf ovn-installed in OVS
Feb 28 10:09:14 compute-0 ovn_controller[146846]: 2026-02-28T10:09:14Z|00436|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf up in Southbound
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.315 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ede29984-d974-4424-877f-50084c5fb180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.331 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.331 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55914ecd-dee7-4737-b4e2-a38aa29512db]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.353 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.374 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5fb7b0-605e-441f-8ada-77c30abba0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.3868] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.386 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a96a852a-98da-469e-8c26-5c3c6604c64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 systemd-udevd[288119]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.421 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[061fc82b-6582-4269-94b6-bf1287c90861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.424 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a91840dc-4085-4c54-a7da-1c4417df1c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.4481] device (tap8e92100d-80): carrier: link connected
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.453 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c16c18a2-fcc6-446e-9523-c7829ac04583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.470 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7196994c-d49f-46d9-a52f-3280b49f1e86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486173, 'reachable_time': 25164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288150, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33d67bf1-60bf-4b03-81f7-65deefcce85f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486173, 'tstamp': 486173}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288152, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.505 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ef20d3-8459-40e9-9887-af9426ac02a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486173, 'reachable_time': 25164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288162, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ae68d2-3752-4a97-8d20-8a95d9fbc754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86ea4bfb-d783-4560-9052-1b0fc971cef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:14 compute-0 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 10:09:14 compute-0 NetworkManager[49805]: <info>  [1772273354.5922] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.591 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.594 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:14 compute-0 ovn_controller[146846]: 2026-02-28T10:09:14Z|00437|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.607 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5159c429-aa73-45d3-a48a-7f9f48f7d02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.609 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:14.610 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.613 243456 DEBUG nova.compute.manager [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.614 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG oslo_concurrency.lockutils [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.615 243456 DEBUG nova.compute.manager [req-94a48b55-65a8-48af-b9da-76c6a6d953c6 req-4b0e35c0-f203-4d07-a721-a9bd57fd8eed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Processing event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.819 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.820 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.818189, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.821 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Started (Lifecycle Event)
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.824 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.828 243456 INFO nova.virt.libvirt.driver [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance spawned successfully.
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.829 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.839 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.842 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.849 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.850 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.850 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.851 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.851 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.852 243456 DEBUG nova.virt.libvirt.driver [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.861 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.862 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.8185802, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.862 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Paused (Lifecycle Event)
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713440759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.900 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273354.8241525, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Resumed (Lifecycle Event)
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.918 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.933 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.941 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.948 243456 INFO nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 7.58 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.949 243456 DEBUG nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:14 compute-0 nova_compute[243452]: 2026-02-28 10:09:14.984 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:14 compute-0 podman[288247]: 2026-02-28 10:09:14.989766923 +0000 UTC m=+0.061050515 container create cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.018 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.019 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.026 243456 INFO nova.compute.manager [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 8.76 seconds to build instance.
Feb 28 10:09:15 compute-0 systemd[1]: Started libpod-conmon-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope.
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.044 243456 DEBUG oslo_concurrency.lockutils [None req-0c0182cd-66a2-4064-ba3f-5530adb8742c 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:15 compute-0 podman[288247]: 2026-02-28 10:09:14.966737457 +0000 UTC m=+0.038021069 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e20019706ae76b9455ec6b594309c35a7d333801c45502a26fa08af52be625/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:15 compute-0 podman[288247]: 2026-02-28 10:09:15.094257318 +0000 UTC m=+0.165540910 container init cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:09:15 compute-0 podman[288247]: 2026-02-28 10:09:15.101394118 +0000 UTC m=+0.172677700 container start cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:09:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : New worker (288269) forked
Feb 28 10:09:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : Loading success.
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.208 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4004MB free_disk=59.967133364640176GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.209 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 28 10:09:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb 28 10:09:15 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb 28 10:09:15 compute-0 ceph-mon[76304]: pgmap v1243: 305 pgs: 305 active+clean; 200 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Feb 28 10:09:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1713440759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.300 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 51a0e59a-81d0-4f05-bb13-5ca025288da2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.301 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.301 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.349 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262194963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.912 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.918 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.936 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.961 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:09:15 compute-0 nova_compute[243452]: 2026-02-28 10:09:15.961 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Feb 28 10:09:16 compute-0 ceph-mon[76304]: osdmap e181: 3 total, 3 up, 3 in
Feb 28 10:09:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1262194963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:16 compute-0 ceph-mon[76304]: pgmap v1245: 305 pgs: 305 active+clean; 200 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.381 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.382 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.382 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.383 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.383 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.385 243456 INFO nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Terminating instance
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.387 243456 DEBUG nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:09:16 compute-0 kernel: tapd85f2b17-bc (unregistering): left promiscuous mode
Feb 28 10:09:16 compute-0 NetworkManager[49805]: <info>  [1772273356.4277] device (tapd85f2b17-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 ovn_controller[146846]: 2026-02-28T10:09:16Z|00438|binding|INFO|Releasing lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf from this chassis (sb_readonly=0)
Feb 28 10:09:16 compute-0 ovn_controller[146846]: 2026-02-28T10:09:16Z|00439|binding|INFO|Setting lport d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf down in Southbound
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 ovn_controller[146846]: 2026-02-28T10:09:16Z|00440|binding|INFO|Removing iface tapd85f2b17-bc ovn-installed in OVS
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.450 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:11:10 10.100.0.12'], port_security=['fa:16:3e:2d:11:10 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '51a0e59a-81d0-4f05-bb13-5ca025288da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.454 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.455 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[239e17c1-b7e2-4bf0-9c25-28dd78a41c71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.456 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore
Feb 28 10:09:16 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Deactivated successfully.
Feb 28 10:09:16 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Consumed 2.158s CPU time.
Feb 28 10:09:16 compute-0 systemd-machined[209480]: Machine qemu-58-instance-00000034 terminated.
Feb 28 10:09:16 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:16 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [NOTICE]   (288267) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:16 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [WARNING]  (288267) : Exiting Master process...
Feb 28 10:09:16 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [ALERT]    (288267) : Current worker (288269) exited with code 143 (Terminated)
Feb 28 10:09:16 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[288263]: [WARNING]  (288267) : All workers exited. Exiting... (0)
Feb 28 10:09:16 compute-0 systemd[1]: libpod-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope: Deactivated successfully.
Feb 28 10:09:16 compute-0 podman[288321]: 2026-02-28 10:09:16.619820034 +0000 UTC m=+0.061009165 container died cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.633 243456 INFO nova.virt.libvirt.driver [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Instance destroyed successfully.
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.634 243456 DEBUG nova.objects.instance [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 51a0e59a-81d0-4f05-bb13-5ca025288da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.654 243456 DEBUG nova.virt.libvirt.vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1328994048',display_name='tempest-DeleteServersTestJSON-server-1328994048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1328994048',id=52,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-eeazn004',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:14Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=51a0e59a-81d0-4f05-bb13-5ca025288da2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.655 243456 DEBUG nova.network.os_vif_util [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "address": "fa:16:3e:2d:11:10", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd85f2b17-bc", "ovs_interfaceid": "d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.656 243456 DEBUG nova.network.os_vif_util [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.657 243456 DEBUG os_vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-22e20019706ae76b9455ec6b594309c35a7d333801c45502a26fa08af52be625-merged.mount: Deactivated successfully.
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.660 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd85f2b17-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:16 compute-0 podman[288321]: 2026-02-28 10:09:16.667301187 +0000 UTC m=+0.108490298 container cleanup cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.668 243456 INFO os_vif [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:11:10,bridge_name='br-int',has_traffic_filtering=True,id=d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd85f2b17-bc')
Feb 28 10:09:16 compute-0 systemd[1]: libpod-conmon-cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98.scope: Deactivated successfully.
Feb 28 10:09:16 compute-0 podman[288359]: 2026-02-28 10:09:16.745174554 +0000 UTC m=+0.056561179 container remove cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.750 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3117a86-4bcf-4940-abe6-30bb8b5d0345]: (4, ('Sat Feb 28 10:09:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98)\ncfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98\nSat Feb 28 10:09:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (cfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98)\ncfaa654dab69537ed3f8089e8c070a96afb63fa58c5009363b4ecc70ec241c98\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3affc57-d122-41f0-afff-e92169ced859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.754 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:16 compute-0 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79f689ae-66fc-4de3-bc8f-c0805ec7d0e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd28050-2ba3-4546-8df3-878ce210d2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b749dcce-b898-47c1-9fa4-204109584995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[412e8c06-4aa3-4c9a-9d9c-58d68cd76fec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486165, 'reachable_time': 30131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288392, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.810 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:16.810 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[81981dee-47e4-4f7f-bf6d-c887395b488e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.960 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.965 243456 INFO nova.virt.libvirt.driver [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deleting instance files /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2_del
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.967 243456 INFO nova.virt.libvirt.driver [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deletion of /var/lib/nova/instances/51a0e59a-81d0-4f05-bb13-5ca025288da2_del complete
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:09:16 compute-0 nova_compute[243452]: 2026-02-28 10:09:16.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.008 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.009 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.036 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.036 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.037 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.037 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 WARNING nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received unexpected event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with vm_state active and task_state deleting.
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.038 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.039 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.041 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG oslo_concurrency.lockutils [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.042 243456 DEBUG nova.compute.manager [req-1b7cf545-c6a5-48d5-a8d0-4780ad6b17ce req-73cdc020-0938-4afa-b4c1-043b93f55f16 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-unplugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.047 243456 INFO nova.compute.manager [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.048 243456 DEBUG oslo.service.loopingcall [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.049 243456 DEBUG nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:09:17 compute-0 nova_compute[243452]: 2026-02-28 10:09:17.049 243456 DEBUG nova.network.neutron [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:09:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 176 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.559 243456 DEBUG nova.network.neutron [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.588 243456 INFO nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Took 1.54 seconds to deallocate network for instance.
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.656 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.657 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.707 243456 DEBUG oslo_concurrency.processutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:18 compute-0 sudo[288395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:09:18 compute-0 sudo[288395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:18 compute-0 sudo[288395]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:18 compute-0 sudo[288420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:09:18 compute-0 sudo[288420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.984 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:18 compute-0 nova_compute[243452]: 2026-02-28 10:09:18.985 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.006 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:19 compute-0 ceph-mon[76304]: pgmap v1246: 305 pgs: 305 active+clean; 176 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.090 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.135 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.136 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.136 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 DEBUG oslo_concurrency.lockutils [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] No waiting events found dispatching network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.137 243456 WARNING nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received unexpected event network-vif-plugged-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf for instance with vm_state deleted and task_state None.
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.138 243456 DEBUG nova.compute.manager [req-e3789ec0-a9f4-4746-841d-76498ff23097 req-f80c987a-192f-4f25-9d92-591a63874439 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Received event network-vif-deleted-d85f2b17-bc6a-43b9-9bd8-b8c5ab61bedf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272923698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.257 243456 DEBUG oslo_concurrency.processutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.264 243456 DEBUG nova.compute.provider_tree [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.292 243456 DEBUG nova.scheduler.client.report [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.314 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.317 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.323 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.323 243456 INFO nova.compute.claims [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.342 243456 INFO nova.scheduler.client.report [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 51a0e59a-81d0-4f05-bb13-5ca025288da2
Feb 28 10:09:19 compute-0 sudo[288420]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.416 243456 DEBUG oslo_concurrency.lockutils [None req-c4a8d7f4-f602-4fac-b4b3-1ff441c31b38 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "51a0e59a-81d0-4f05-bb13-5ca025288da2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:09:19 compute-0 nova_compute[243452]: 2026-02-28 10:09:19.440 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:19 compute-0 sudo[288500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:09:19 compute-0 sudo[288500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:19 compute-0 sudo[288500]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:19 compute-0 sudo[288526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:09:19 compute-0 sudo[288526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:19 compute-0 podman[288582]: 2026-02-28 10:09:19.863536464 +0000 UTC m=+0.092616632 container create 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:09:19 compute-0 podman[288582]: 2026-02-28 10:09:19.794137165 +0000 UTC m=+0.023217373 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:19 compute-0 systemd[1]: Started libpod-conmon-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope.
Feb 28 10:09:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230522339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:20 compute-0 podman[288582]: 2026-02-28 10:09:20.010970775 +0000 UTC m=+0.240050983 container init 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.010 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 169 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Feb 28 10:09:20 compute-0 podman[288582]: 2026-02-28 10:09:20.0172074 +0000 UTC m=+0.246287558 container start 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.017 243456 DEBUG nova.compute.provider_tree [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:20 compute-0 peaceful_hawking[288599]: 167 167
Feb 28 10:09:20 compute-0 systemd[1]: libpod-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope: Deactivated successfully.
Feb 28 10:09:20 compute-0 conmon[288599]: conmon 9c20d4d6922ac8028d6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope/container/memory.events
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.047 243456 DEBUG nova.scheduler.client.report [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:20 compute-0 podman[288582]: 2026-02-28 10:09:20.058668844 +0000 UTC m=+0.287749012 container attach 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:09:20 compute-0 podman[288582]: 2026-02-28 10:09:20.060782274 +0000 UTC m=+0.289862442 container died 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.074 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.075 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3272923698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:09:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3230522339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.128 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.128 243456 DEBUG nova.network.neutron [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.150 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.168 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37c330b9db123b77c6717c68fb506b3ab797a847ca536d9d6fa30c77d4d978a-merged.mount: Deactivated successfully.
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.296 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.298 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.298 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating image(s)
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.324 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.355 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:20 compute-0 podman[288582]: 2026-02-28 10:09:20.365505231 +0000 UTC m=+0.594585369 container remove 9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_hawking, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.385 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.391 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:20 compute-0 systemd[1]: libpod-conmon-9c20d4d6922ac8028d6f8bd8f8dab2b94c99901b6a3cfbd64b05223a0020e907.scope: Deactivated successfully.
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.483 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.485 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.486 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.487 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.529 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.535 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:20 compute-0 podman[288679]: 2026-02-28 10:09:20.49860589 +0000 UTC m=+0.021693031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:20 compute-0 podman[288679]: 2026-02-28 10:09:20.629266939 +0000 UTC m=+0.152354110 container create 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:09:20 compute-0 systemd[1]: Started libpod-conmon-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope.
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.730 243456 DEBUG nova.network.neutron [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:09:20 compute-0 nova_compute[243452]: 2026-02-28 10:09:20.732 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:20 compute-0 podman[288679]: 2026-02-28 10:09:20.899866549 +0000 UTC m=+0.422953690 container init 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:20 compute-0 podman[288679]: 2026-02-28 10:09:20.911198717 +0000 UTC m=+0.434285848 container start 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.297 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.299 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.325 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:21 compute-0 silly_stonebraker[288736]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:09:21 compute-0 silly_stonebraker[288736]: --> All data devices are unavailable
Feb 28 10:09:21 compute-0 podman[288679]: 2026-02-28 10:09:21.373999645 +0000 UTC m=+0.897086776 container attach 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:09:21 compute-0 systemd[1]: libpod-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope: Deactivated successfully.
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.419 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.421 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.432 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.432 243456 INFO nova.compute.claims [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:21 compute-0 podman[288756]: 2026-02-28 10:09:21.452967433 +0000 UTC m=+0.027431111 container died 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.561 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:21 compute-0 ceph-mon[76304]: pgmap v1247: 305 pgs: 305 active+clean; 169 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc778953745c9af41e3bfd94cdd9eaec2f0a44eb02cee95cae2e48ba7524b998-merged.mount: Deactivated successfully.
Feb 28 10:09:21 compute-0 nova_compute[243452]: 2026-02-28 10:09:21.951 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:22 compute-0 podman[288756]: 2026-02-28 10:09:22.002448375 +0000 UTC m=+0.576912043 container remove 7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_stonebraker, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:22 compute-0 systemd[1]: libpod-conmon-7620f1a046a2003a4c999975d6352fdb79951e22bbdfa07a80aaa56720a87f0f.scope: Deactivated successfully.
Feb 28 10:09:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 153 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 174 op/s
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.045 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] resizing rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:22 compute-0 sudo[288526]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:22 compute-0 sudo[288843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:09:22 compute-0 sudo[288843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:22 compute-0 sudo[288843]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:22 compute-0 sudo[288870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:09:22 compute-0 sudo[288870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263084625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.246 243456 DEBUG nova.objects.instance [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'migration_context' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.264 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.265 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Ensure instance console log exists: /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.266 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.267 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.267 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.270 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.271 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.277 243456 WARNING nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.280 243456 DEBUG nova.compute.provider_tree [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.287 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.288 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.292 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.293 243456 DEBUG nova.virt.libvirt.host [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.294 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.294 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.295 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.296 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.296 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.297 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.297 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.298 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.298 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.299 243456 DEBUG nova.virt.hardware [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.305 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.346 243456 DEBUG nova.scheduler.client.report [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.378 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.379 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.436 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.437 243456 DEBUG nova.network.neutron [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.455 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.473 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.486043236 +0000 UTC m=+0.042897866 container create d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.520 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273347.4671392, 6e81c007-3bdc-4baf-b310-775c3122cd14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.521 243456 INFO nova.compute.manager [-] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] VM Stopped (Lifecycle Event)
Feb 28 10:09:22 compute-0 systemd[1]: Started libpod-conmon-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope.
Feb 28 10:09:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.550 243456 DEBUG nova.compute.manager [None req-53a8bd45-8f0b-42d2-b53c-69f583f4e0ff - - - - - -] [instance: 6e81c007-3bdc-4baf-b310-775c3122cd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.555213359 +0000 UTC m=+0.112067989 container init d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.555 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.556 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.557 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating image(s)
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.468108072 +0000 UTC m=+0.024962712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.562523184 +0000 UTC m=+0.119377814 container start d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.566089734 +0000 UTC m=+0.122944374 container attach d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:09:22 compute-0 eager_lamport[288962]: 167 167
Feb 28 10:09:22 compute-0 systemd[1]: libpod-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope: Deactivated successfully.
Feb 28 10:09:22 compute-0 conmon[288962]: conmon d3f6634175b23c608a12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope/container/memory.events
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.569660405 +0000 UTC m=+0.126515045 container died d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:09:22 compute-0 ceph-mon[76304]: pgmap v1248: 305 pgs: 305 active+clean; 153 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 174 op/s
Feb 28 10:09:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1263084625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6e2472751119b85618870776c5133ec4753e9745c1a6f02eebe3aa7c903366-merged.mount: Deactivated successfully.
Feb 28 10:09:22 compute-0 podman[288927]: 2026-02-28 10:09:22.605952124 +0000 UTC m=+0.162806754 container remove d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lamport, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.606 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:22 compute-0 systemd[1]: libpod-conmon-d3f6634175b23c608a1248bcb21082cd3de7f04f5c2adbb68bcd25cdb9ba57f5.scope: Deactivated successfully.
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.639 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.665 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.670 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.761 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.763 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.764 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.764 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:22 compute-0 podman[289039]: 2026-02-28 10:09:22.770181027 +0000 UTC m=+0.062224398 container create f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.796 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.802 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 226b6da4-15c9-4d10-ab4d-194b313446f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:22 compute-0 systemd[1]: Started libpod-conmon-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope.
Feb 28 10:09:22 compute-0 podman[289039]: 2026-02-28 10:09:22.73431154 +0000 UTC m=+0.026354981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236287320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:22 compute-0 podman[289039]: 2026-02-28 10:09:22.871869653 +0000 UTC m=+0.163913044 container init f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.871 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:22 compute-0 podman[289039]: 2026-02-28 10:09:22.878636423 +0000 UTC m=+0.170679784 container start f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:09:22 compute-0 podman[289039]: 2026-02-28 10:09:22.882134101 +0000 UTC m=+0.174177482 container attach f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.907 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:22 compute-0 nova_compute[243452]: 2026-02-28 10:09:22.912 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.096 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 226b6da4-15c9-4d10-ab4d-194b313446f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.161 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] resizing rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:23 compute-0 interesting_burnell[289077]: {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     "0": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "devices": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "/dev/loop3"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             ],
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_name": "ceph_lv0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_size": "21470642176",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "name": "ceph_lv0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "tags": {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_name": "ceph",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.crush_device_class": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.encrypted": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.objectstore": "bluestore",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_id": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.vdo": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.with_tpm": "0"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             },
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "vg_name": "ceph_vg0"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         }
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     ],
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     "1": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "devices": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "/dev/loop4"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             ],
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_name": "ceph_lv1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_size": "21470642176",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "name": "ceph_lv1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "tags": {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_name": "ceph",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.crush_device_class": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.encrypted": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.objectstore": "bluestore",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_id": "1",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.vdo": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.with_tpm": "0"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             },
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "vg_name": "ceph_vg1"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         }
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     ],
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     "2": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "devices": [
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "/dev/loop5"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             ],
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_name": "ceph_lv2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_size": "21470642176",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "name": "ceph_lv2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "tags": {
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.cluster_name": "ceph",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.crush_device_class": "",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.encrypted": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.objectstore": "bluestore",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osd_id": "2",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.vdo": "0",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:                 "ceph.with_tpm": "0"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             },
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "type": "block",
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:             "vg_name": "ceph_vg2"
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:         }
Feb 28 10:09:23 compute-0 interesting_burnell[289077]:     ]
Feb 28 10:09:23 compute-0 interesting_burnell[289077]: }
Feb 28 10:09:23 compute-0 systemd[1]: libpod-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope: Deactivated successfully.
Feb 28 10:09:23 compute-0 podman[289039]: 2026-02-28 10:09:23.232210193 +0000 UTC m=+0.524253594 container died f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.248 243456 DEBUG nova.objects.instance [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'migration_context' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c14004ea5412f4ddc35babb28ba3e4ee0909078ae6e2644e9f492318c221db1-merged.mount: Deactivated successfully.
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.266 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.267 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Ensure instance console log exists: /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.267 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.268 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.268 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:23 compute-0 podman[289039]: 2026-02-28 10:09:23.274888382 +0000 UTC m=+0.566931743 container remove f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:23 compute-0 systemd[1]: libpod-conmon-f3ab909f0fc2bf24798e68e66a45d4be5d3b415c976b8b26396e1c782adab51d.scope: Deactivated successfully.
Feb 28 10:09:23 compute-0 sudo[288870]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:23 compute-0 sudo[289226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:09:23 compute-0 sudo[289226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:23 compute-0 sudo[289226]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:23 compute-0 sudo[289251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:09:23 compute-0 sudo[289251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389734995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.525 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.529 243456 DEBUG nova.objects.instance [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.554 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <uuid>9098ebf3-e36c-492b-9c50-dc6f0078794d</uuid>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <name>instance-00000035</name>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1661712571</nova:name>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:22</nova:creationTime>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:user uuid="7b7f4fcc1d0d41f59aed36b3de16f8e2">tempest-ListImageFiltersTestJSON-1834030841-project-member</nova:user>
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <nova:project uuid="6a54f983c0fa466f9e11947f104ed5ca">tempest-ListImageFiltersTestJSON-1834030841</nova:project>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="serial">9098ebf3-e36c-492b-9c50-dc6f0078794d</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="uuid">9098ebf3-e36c-492b-9c50-dc6f0078794d</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk">
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config">
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/console.log" append="off"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:23 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:23 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:23 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:23 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:23 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/236287320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1389734995' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.641 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.642 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.643 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Using config drive
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.669 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.724552091 +0000 UTC m=+0.045027726 container create 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:23 compute-0 systemd[1]: Started libpod-conmon-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope.
Feb 28 10:09:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.706772151 +0000 UTC m=+0.027247816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.811883034 +0000 UTC m=+0.132358669 container init 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.819308422 +0000 UTC m=+0.139784057 container start 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.82207401 +0000 UTC m=+0.142549645 container attach 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:09:23 compute-0 mystifying_wing[289326]: 167 167
Feb 28 10:09:23 compute-0 systemd[1]: libpod-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope: Deactivated successfully.
Feb 28 10:09:23 compute-0 conmon[289326]: conmon 0ef2e1d1c10a60fa70bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope/container/memory.events
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.824908459 +0000 UTC m=+0.145384094 container died 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-15ef729bd0bd8843ce2464cc8d6126fb99b5607f41102f4da82b2d66f4499a56-merged.mount: Deactivated successfully.
Feb 28 10:09:23 compute-0 podman[289310]: 2026-02-28 10:09:23.860328934 +0000 UTC m=+0.180804569 container remove 0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:09:23 compute-0 systemd[1]: libpod-conmon-0ef2e1d1c10a60fa70bfba5ffa5ec4632630bf267617c750504635fbcae93228.scope: Deactivated successfully.
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.931 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Creating config drive at /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.936 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8_ezggb_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.983 243456 DEBUG nova.network.neutron [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.984 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.988 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:23 compute-0 nova_compute[243452]: 2026-02-28 10:09:23.996 243456 WARNING nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.007 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:24.00724426 +0000 UTC m=+0.054745738 container create 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.008 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.013 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.015 243456 DEBUG nova.virt.libvirt.host [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 172 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 180 op/s
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.016 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.016 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.017 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.018 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.019 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.020 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.020 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.021 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.021 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.022 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.023 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.023 243456 DEBUG nova.virt.hardware [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.030 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:24 compute-0 systemd[1]: Started libpod-conmon-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope.
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:23.979722708 +0000 UTC m=+0.027224246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.085 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8_ezggb_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:24 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:24.120843391 +0000 UTC m=+0.168344849 container init 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:24.131731127 +0000 UTC m=+0.179232575 container start 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.133 243456 DEBUG nova.storage.rbd_utils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.139 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:24.142906291 +0000 UTC m=+0.190407729 container attach 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.337 243456 DEBUG oslo_concurrency.processutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config 9098ebf3-e36c-492b-9c50-dc6f0078794d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.339 243456 INFO nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deleting local config drive /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d/disk.config because it was imported into RBD.
Feb 28 10:09:24 compute-0 systemd-machined[209480]: New machine qemu-59-instance-00000035.
Feb 28 10:09:24 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000035.
Feb 28 10:09:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749059947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.578 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:24 compute-0 ceph-mon[76304]: pgmap v1249: 305 pgs: 305 active+clean; 172 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 180 op/s
Feb 28 10:09:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1749059947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.607 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:24 compute-0 nova_compute[243452]: 2026-02-28 10:09:24.611 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:24 compute-0 lvm[289560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:09:24 compute-0 lvm[289559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:09:24 compute-0 lvm[289560]: VG ceph_vg1 finished
Feb 28 10:09:24 compute-0 lvm[289559]: VG ceph_vg0 finished
Feb 28 10:09:24 compute-0 lvm[289562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:09:24 compute-0 lvm[289562]: VG ceph_vg2 finished
Feb 28 10:09:24 compute-0 quizzical_ishizaka[289370]: {}
Feb 28 10:09:24 compute-0 systemd[1]: libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Deactivated successfully.
Feb 28 10:09:24 compute-0 systemd[1]: libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Consumed 1.227s CPU time.
Feb 28 10:09:24 compute-0 conmon[289370]: conmon 614a2d60d9a10275b15a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope/container/memory.events
Feb 28 10:09:24 compute-0 podman[289349]: 2026-02-28 10:09:24.991083872 +0000 UTC m=+1.038585320 container died 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-277ea15dbd84a698ea92a9c7b80efe63b2c2c28a71153a36957fea16eb9e1d0a-merged.mount: Deactivated successfully.
Feb 28 10:09:25 compute-0 podman[289349]: 2026-02-28 10:09:25.052117116 +0000 UTC m=+1.099618544 container remove 614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:09:25 compute-0 systemd[1]: libpod-conmon-614a2d60d9a10275b15a4b100c6e25ae185a22e667950023354907035561c2cb.scope: Deactivated successfully.
Feb 28 10:09:25 compute-0 sudo[289251]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:09:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.136 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273365.135103, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.141 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Resumed (Lifecycle Event)
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.147 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.149 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.154 243456 INFO nova.virt.libvirt.driver [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance spawned successfully.
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.155 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.182 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.188 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.189 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.189 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.190 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.190 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.191 243456 DEBUG nova.virt.libvirt.driver [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.196 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788229665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:25 compute-0 sudo[289618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:09:25 compute-0 sudo[289618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:09:25 compute-0 sudo[289618]: pam_unix(sudo:session): session closed for user root
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.230 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.230 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273365.13723, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.231 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Started (Lifecycle Event)
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.234 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.236 243456 DEBUG nova.objects.instance [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.295 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <uuid>226b6da4-15c9-4d10-ab4d-194b313446f9</uuid>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <name>instance-00000036</name>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1188134634</nova:name>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:23</nova:creationTime>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:user uuid="7b7f4fcc1d0d41f59aed36b3de16f8e2">tempest-ListImageFiltersTestJSON-1834030841-project-member</nova:user>
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <nova:project uuid="6a54f983c0fa466f9e11947f104ed5ca">tempest-ListImageFiltersTestJSON-1834030841</nova:project>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="serial">226b6da4-15c9-4d10-ab4d-194b313446f9</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="uuid">226b6da4-15c9-4d10-ab4d-194b313446f9</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk">
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config">
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/console.log" append="off"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.319 243456 INFO nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 5.02 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.320 243456 DEBUG nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.454 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.487 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.488 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.501 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.501 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.502 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Using config drive
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.530 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.539 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.543 243456 INFO nova.compute.manager [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 6.48 seconds to build instance.
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.600 243456 DEBUG oslo_concurrency.lockutils [None req-5b746717-b3dc-4cee-95a9-c6040b4fc7d7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.632 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.633 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.639 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.639 243456 INFO nova.compute.claims [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.781 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Creating config drive at /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.784 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp453tmx82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.827 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.937 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp453tmx82" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.976 243456 DEBUG nova.storage.rbd_utils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] rbd image 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:25 compute-0 nova_compute[243452]: 2026-02-28 10:09:25.982 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 219 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Feb 28 10:09:26 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:26 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:09:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3788229665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.160 243456 DEBUG oslo_concurrency.processutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config 226b6da4-15c9-4d10-ab4d-194b313446f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.162 243456 INFO nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deleting local config drive /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9/disk.config because it was imported into RBD.
Feb 28 10:09:26 compute-0 systemd-machined[209480]: New machine qemu-60-instance-00000036.
Feb 28 10:09:26 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000036.
Feb 28 10:09:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458384995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.451 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.459 243456 DEBUG nova.compute.provider_tree [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.485 243456 DEBUG nova.scheduler.client.report [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.513 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.514 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.566 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.566 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.591 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.618 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.702 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.704 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.704 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating image(s)
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.728 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.756 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.788 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.795 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.860 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.862 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.862 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.863 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.886 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.892 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:26 compute-0 nova_compute[243452]: 2026-02-28 10:09:26.930 243456 DEBUG nova.policy [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:09:27 compute-0 ceph-mon[76304]: pgmap v1250: 305 pgs: 305 active+clean; 219 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Feb 28 10:09:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/458384995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.195 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.253 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.350 243456 DEBUG nova.objects.instance [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.372 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.373 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Ensure instance console log exists: /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.374 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.374 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.375 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.491 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273367.4894414, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Resumed (Lifecycle Event)
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.495 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.496 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.500 243456 INFO nova.virt.libvirt.driver [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance spawned successfully.
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.500 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.548 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.551 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.552 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.553 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.553 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.554 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.554 243456 DEBUG nova.virt.libvirt.driver [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.596 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.597 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273367.4910986, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.597 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Started (Lifecycle Event)
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.616 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.628 243456 INFO nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 5.07 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.629 243456 DEBUG nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.631 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.664 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.698 243456 INFO nova.compute.manager [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 6.30 seconds to build instance.
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.717 243456 DEBUG oslo_concurrency.lockutils [None req-42df63b8-b666-45ed-ba89-7e30a0c1908a 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:27 compute-0 nova_compute[243452]: 2026-02-28 10:09:27.730 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Successfully created port: 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:09:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 259 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 202 op/s
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.730 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Successfully updated port: 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.776 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.776 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.777 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:09:28 compute-0 nova_compute[243452]: 2026-02-28 10:09:28.917 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:09:29
Feb 28 10:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control']
Feb 28 10:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.120 243456 DEBUG nova.compute.manager [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-changed-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.125 243456 DEBUG nova.compute.manager [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Refreshing instance network info cache due to event network-changed-3fb210f2-4c9d-4399-acf6-20e10d93fdd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.128 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.142 243456 DEBUG nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 28 10:09:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.195 243456 INFO nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] instance snapshotting
Feb 28 10:09:29 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb 28 10:09:29 compute-0 ceph-mon[76304]: pgmap v1251: 305 pgs: 305 active+clean; 259 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 202 op/s
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.423 243456 INFO nova.virt.libvirt.driver [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Beginning live snapshot process
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.572 243456 DEBUG nova.virt.libvirt.imagebackend [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.797 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(7bebd4cdf79048b6b104db3e428177f0) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:29 compute-0 nova_compute[243452]: 2026-02-28 10:09:29.988 243456 DEBUG nova.network.neutron [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.007 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.008 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance network_info: |[{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.009 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.009 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Refreshing network info cache for port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.012 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start _get_guest_xml network_info=[{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 288 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 234 op/s
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.019 243456 WARNING nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.028 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.029 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.032 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.032 243456 DEBUG nova.virt.libvirt.host [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.033 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.033 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.034 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.035 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.036 243456 DEBUG nova.virt.hardware [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.040 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:30 compute-0 podman[290001]: 2026-02-28 10:09:30.149810754 +0000 UTC m=+0.066490338 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:09:30 compute-0 podman[289999]: 2026-02-28 10:09:30.183191392 +0000 UTC m=+0.110478554 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:09:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 28 10:09:30 compute-0 ceph-mon[76304]: osdmap e182: 3 total, 3 up, 3 in
Feb 28 10:09:30 compute-0 ceph-mon[76304]: pgmap v1253: 305 pgs: 305 active+clean; 288 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 234 op/s
Feb 28 10:09:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb 28 10:09:30 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.415 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk@7bebd4cdf79048b6b104db3e428177f0 to images/9ce992ef-e998-43ea-9bd4-8cdaf2841ea4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.509 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/9ce992ef-e998-43ea-9bd4-8cdaf2841ea4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:09:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:09:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048446427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.659 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.684 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.689 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:30 compute-0 nova_compute[243452]: 2026-02-28 10:09:30.791 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(7bebd4cdf79048b6b104db3e428177f0) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:09:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079513857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.242 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.244 243456 DEBUG nova.virt.libvirt.vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:26Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.245 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.246 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.247 243456 DEBUG nova.objects.instance [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.263 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <uuid>8eee8376-acc6-4a01-80c3-d7f0d579f9bb</uuid>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <name>instance-00000037</name>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:name>tempest-DeleteServersTestJSON-server-1458563036</nova:name>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:30</nova:creationTime>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <nova:port uuid="3fb210f2-4c9d-4399-acf6-20e10d93fdd5">
Feb 28 10:09:31 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="serial">8eee8376-acc6-4a01-80c3-d7f0d579f9bb</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="uuid">8eee8376-acc6-4a01-80c3-d7f0d579f9bb</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk">
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config">
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:be:7e:cf"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <target dev="tap3fb210f2-4c"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/console.log" append="off"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:31 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:31 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:31 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:31 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:31 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Preparing to wait for external event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.265 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.266 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.266 243456 DEBUG nova.virt.libvirt.vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:26Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.267 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.267 243456 DEBUG nova.network.os_vif_util [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.268 243456 DEBUG os_vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.269 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.270 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.274 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fb210f2-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.274 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fb210f2-4c, col_values=(('external_ids', {'iface-id': '3fb210f2-4c9d-4399-acf6-20e10d93fdd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:7e:cf', 'vm-uuid': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:31 compute-0 NetworkManager[49805]: <info>  [1772273371.2773] manager: (tap3fb210f2-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.286 243456 INFO os_vif [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c')
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.349 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.350 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.357 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.357 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.358 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:be:7e:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.358 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Using config drive
Feb 28 10:09:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 28 10:09:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb 28 10:09:31 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb 28 10:09:31 compute-0 ceph-mon[76304]: osdmap e183: 3 total, 3 up, 3 in
Feb 28 10:09:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1048446427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1079513857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.403 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.410 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.425 243456 DEBUG nova.storage.rbd_utils [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(9ce992ef-e998-43ea-9bd4-8cdaf2841ea4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.494 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.494 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.504 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.505 243456 INFO nova.compute.claims [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.629 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273356.6282868, 51a0e59a-81d0-4f05-bb13-5ca025288da2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.629 243456 INFO nova.compute.manager [-] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] VM Stopped (Lifecycle Event)
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.655 243456 DEBUG nova.compute.manager [None req-2921a1fd-7a75-42da-8b0c-b639051058ee - - - - - -] [instance: 51a0e59a-81d0-4f05-bb13-5ca025288da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.674 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.741 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Creating config drive at /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.749 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp30kndlo1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.901 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp30kndlo1" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.931 243456 DEBUG nova.storage.rbd_utils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:31 compute-0 nova_compute[243452]: 2026-02-28 10:09:31.937 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.054 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updated VIF entry in instance network info cache for port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.055 243456 DEBUG nova.network.neutron [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [{"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.077 243456 DEBUG oslo_concurrency.lockutils [req-5a3d874c-fe32-4e60-9612-5836020c99bd req-8eb9ca00-6fd7-4afd-b063-2d16a61d56a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8eee8376-acc6-4a01-80c3-d7f0d579f9bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.094 243456 DEBUG oslo_concurrency.processutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config 8eee8376-acc6-4a01-80c3-d7f0d579f9bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.095 243456 INFO nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deleting local config drive /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb/disk.config because it was imported into RBD.
Feb 28 10:09:32 compute-0 kernel: tap3fb210f2-4c: entered promiscuous mode
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.1437] manager: (tap3fb210f2-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:32 compute-0 ovn_controller[146846]: 2026-02-28T10:09:32Z|00441|binding|INFO|Claiming lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for this chassis.
Feb 28 10:09:32 compute-0 ovn_controller[146846]: 2026-02-28T10:09:32Z|00442|binding|INFO|3fb210f2-4c9d-4399-acf6-20e10d93fdd5: Claiming fa:16:3e:be:7e:cf 10.100.0.6
Feb 28 10:09:32 compute-0 ovn_controller[146846]: 2026-02-28T10:09:32Z|00443|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 ovn-installed in OVS
Feb 28 10:09:32 compute-0 ovn_controller[146846]: 2026-02-28T10:09:32Z|00444|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 up in Southbound
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.170 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:7e:cf 10.100.0.6'], port_security=['fa:16:3e:be:7e:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3fb210f2-4c9d-4399-acf6-20e10d93fdd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.173 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:32 compute-0 systemd-machined[209480]: New machine qemu-61-instance-00000037.
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[266f00ec-6c37-4a5e-90e9-361aff99d777]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.189 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.192 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.192 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c039fa8-de7d-487e-981e-b78057c00662]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e16c7cd-3f15-4539-a4f6-c694d7711d4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000037.
Feb 28 10:09:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430434087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.209 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[dd19695c-2266-4cdd-bc45-1f949418d3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 systemd-udevd[290291]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.2381] device (tap3fb210f2-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.2388] device (tap3fb210f2-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.242 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b61b0-274e-4ef6-a2fb-896e05e63a21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.257 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.274 243456 DEBUG nova.compute.provider_tree [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.277 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a90b5ba0-a540-4351-ab54-aea1bde9d1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6139f6e-d546-4cb9-82cb-46d28192c4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.2827] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.293 243456 DEBUG nova.scheduler.client.report [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.315 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[de43787f-077f-4ce9-8e01-d86e9b0f87a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.318 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4b6e14-bf81-4292-b28f-bae73c41a3bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.323 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.324 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.3411] device (tap8e92100d-80): carrier: link connected
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.346 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[69dc6981-22ef-4efe-8823-7c02870d85bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.363 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72afc927-248f-4eec-b46d-14ca9e8f83a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487962, 'reachable_time': 32981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290323, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.369 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.370 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc12998-3793-47f1-b27c-88056bc3a1d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487962, 'tstamp': 487962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290324, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.388 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb 28 10:09:32 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb 28 10:09:32 compute-0 ceph-mon[76304]: osdmap e184: 3 total, 3 up, 3 in
Feb 28 10:09:32 compute-0 ceph-mon[76304]: pgmap v1256: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.8 MiB/s wr, 319 op/s
Feb 28 10:09:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3430434087' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.405 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d892c4a4-8d30-4d70-ab01-16cc53bd291b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487962, 'reachable_time': 32981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290325, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.459 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30d90d78-0039-4934-b015-13d7ee33829f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.492 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.494 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.495 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating image(s)
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.529 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5262736-d7f8-461d-a1f2-f5143a8ca874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.545 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:32 compute-0 NetworkManager[49805]: <info>  [1772273372.5479] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Feb 28 10:09:32 compute-0 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.550 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:32 compute-0 ovn_controller[146846]: 2026-02-28T10:09:32Z|00445|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.557 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.558 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1fd392-5e4a-4c5f-95c2-36722634292e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.559 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:32.560 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.581 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.618 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.623 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.654 243456 DEBUG nova.policy [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1341b7bab4cc4ddca989e12ab7770723', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13c8391ebb8644dea661a093a38db268', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG nova.compute.manager [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.664 243456 DEBUG oslo_concurrency.lockutils [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.665 243456 DEBUG nova.compute.manager [req-4f19ee31-002e-40c6-b0c1-6867fffb7f2b req-18401ef4-24de-4414-9ff3-9520ceafc692 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Processing event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.719 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.719 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.720 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.720 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.739 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:32 compute-0 nova_compute[243452]: 2026-02-28 10:09:32.743 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 163deb6e-49f4-4093-b0c1-98240f93c499_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:33 compute-0 podman[290474]: 2026-02-28 10:09:33.091557113 +0000 UTC m=+0.119536348 container create 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:09:33 compute-0 podman[290474]: 2026-02-28 10:09:32.994681992 +0000 UTC m=+0.022661227 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:33 compute-0 systemd[1]: Started libpod-conmon-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.143 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 163deb6e-49f4-4093-b0c1-98240f93c499_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a11d72e3c1b7abf544db6e07b97f78e6bb5dfdbed62af2a9efcbccff78207a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:33 compute-0 podman[290474]: 2026-02-28 10:09:33.187860158 +0000 UTC m=+0.215839383 container init 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.188 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.1559103, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.188 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Started (Lifecycle Event)
Feb 28 10:09:33 compute-0 podman[290474]: 2026-02-28 10:09:33.193166387 +0000 UTC m=+0.221145602 container start 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.194 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:33 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : New worker (290531) forked
Feb 28 10:09:33 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : Loading success.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.239 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.258 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] resizing rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.290 243456 INFO nova.virt.libvirt.driver [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance spawned successfully.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.290 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.350 243456 DEBUG nova.objects.instance [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'migration_context' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.391 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.394 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:33 compute-0 ceph-mon[76304]: osdmap e185: 3 total, 3 up, 3 in
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.483 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.483 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Ensure instance console log exists: /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.484 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.485 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.486 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.1560607, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.486 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Paused (Lifecycle Event)
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.492 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.493 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.494 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.494 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.495 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.495 243456 DEBUG nova.virt.libvirt.driver [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.511 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.514 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273373.201439, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.514 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Resumed (Lifecycle Event)
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.543 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.546 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.576 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.592 243456 INFO nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 6.89 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.593 243456 DEBUG nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.662 243456 INFO nova.compute.manager [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 8.05 seconds to build instance.
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.700 243456 DEBUG oslo_concurrency.lockutils [None req-3e65dd5a-9d3b-4c7a-b9f3-c5a3f0fcc869 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:33 compute-0 nova_compute[243452]: 2026-02-28 10:09:33.925 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Successfully created port: cf1a075d-084d-4b7f-afd3-5a1d130b7493 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:09:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 304 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 5.0 MiB/s wr, 409 op/s
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.101 243456 INFO nova.virt.libvirt.driver [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Snapshot image upload complete
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.103 243456 INFO nova.compute.manager [None req-66e986c6-d882-4392-8aa0-797f95202c66 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 4.90 seconds to snapshot the instance on the hypervisor.
Feb 28 10:09:34 compute-0 ceph-mon[76304]: pgmap v1258: 305 pgs: 305 active+clean; 304 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 5.0 MiB/s wr, 409 op/s
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.609 243456 DEBUG nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG oslo_concurrency.lockutils [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.610 243456 DEBUG nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.611 243456 WARNING nova.compute.manager [req-9f93617c-34a4-4cc7-a79d-b558f36246c8 req-19184e49-733c-4aad-988a-65a189ca75ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state active and task_state None.
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.814 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Successfully updated port: cf1a075d-084d-4b7f-afd3-5a1d130b7493 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.830 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:09:34 compute-0 nova_compute[243452]: 2026-02-28 10:09:34.955 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:09:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 28 10:09:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb 28 10:09:35 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb 28 10:09:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 445 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 478 op/s
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.067 243456 DEBUG nova.network.neutron [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.096 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.096 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance network_info: |[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.099 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start _get_guest_xml network_info=[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.110 243456 WARNING nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.116 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.116 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.119 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.119 243456 DEBUG nova.virt.libvirt.host [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.120 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.121 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.122 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.122 243456 DEBUG nova.virt.hardware [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.125 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.166 243456 INFO nova.compute.manager [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Pausing
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.168 243456 DEBUG nova.objects.instance [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.196 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273376.195942, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.197 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Paused (Lifecycle Event)
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.199 243456 DEBUG nova.compute.manager [None req-97004ae0-02a2-451b-87af-a3827300e816 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.212 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.234 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.321 243456 DEBUG nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.365 243456 INFO nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] instance snapshotting
Feb 28 10:09:36 compute-0 ceph-mon[76304]: osdmap e186: 3 total, 3 up, 3 in
Feb 28 10:09:36 compute-0 ceph-mon[76304]: pgmap v1260: 305 pgs: 305 active+clean; 445 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 478 op/s
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.627 243456 INFO nova.virt.libvirt.driver [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Beginning live snapshot process
Feb 28 10:09:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200886479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.680 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.707 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.711 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.795 243456 DEBUG nova.compute.manager [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-changed-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.796 243456 DEBUG nova.compute.manager [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Refreshing instance network info cache due to event network-changed-cf1a075d-084d-4b7f-afd3-5a1d130b7493. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.796 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.797 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.797 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Refreshing network info cache for port cf1a075d-084d-4b7f-afd3-5a1d130b7493 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.804 243456 DEBUG nova.virt.libvirt.imagebackend [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:09:36 compute-0 nova_compute[243452]: 2026-02-28 10:09:36.990 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(d6439769e7bb4ab49fa1dd35452060d1) on rbd image(226b6da4-15c9-4d10-ab4d-194b313446f9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609547973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.284 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.287 243456 DEBUG nova.virt.libvirt.vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:32Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.287 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.289 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.290 243456 DEBUG nova.objects.instance [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'pci_devices' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.316 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <uuid>163deb6e-49f4-4093-b0c1-98240f93c499</uuid>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <name>instance-00000038</name>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:name>tempest-InstanceActionsTestJSON-server-1645315342</nova:name>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:36</nova:creationTime>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:user uuid="1341b7bab4cc4ddca989e12ab7770723">tempest-InstanceActionsTestJSON-1464907638-project-member</nova:user>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:project uuid="13c8391ebb8644dea661a093a38db268">tempest-InstanceActionsTestJSON-1464907638</nova:project>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <nova:port uuid="cf1a075d-084d-4b7f-afd3-5a1d130b7493">
Feb 28 10:09:37 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="serial">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="uuid">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk">
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk.config">
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:37 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c1:c5:d4"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <target dev="tapcf1a075d-08"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log" append="off"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:37 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:37 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:37 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:37 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:37 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.327 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Preparing to wait for external event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.328 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.328 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.329 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.330 243456 DEBUG nova.virt.libvirt.vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-Inst
anceActionsTestJSON-1464907638-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:32Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.331 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.332 243456 DEBUG nova.network.os_vif_util [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.332 243456 DEBUG os_vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.334 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.334 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.343 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf1a075d-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.344 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf1a075d-08, col_values=(('external_ids', {'iface-id': 'cf1a075d-084d-4b7f-afd3-5a1d130b7493', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c5:d4', 'vm-uuid': '163deb6e-49f4-4093-b0c1-98240f93c499'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:37 compute-0 NetworkManager[49805]: <info>  [1772273377.3466] manager: (tapcf1a075d-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.355 243456 INFO os_vif [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.402 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.403 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.403 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] No VIF found with MAC fa:16:3e:c1:c5:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.404 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Using config drive
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.425 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb 28 10:09:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb 28 10:09:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/200886479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2609547973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:37 compute-0 ceph-mon[76304]: osdmap e187: 3 total, 3 up, 3 in
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.599 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/226b6da4-15c9-4d10-ab4d-194b313446f9_disk@d6439769e7bb4ab49fa1dd35452060d1 to images/fdddffc1-692d-46e3-8fbc-eca1a14df1a2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:09:37 compute-0 nova_compute[243452]: 2026-02-28 10:09:37.707 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/fdddffc1-692d-46e3-8fbc-eca1a14df1a2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb 28 10:09:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb 28 10:09:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb 28 10:09:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 488 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 26 MiB/s wr, 489 op/s
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.062 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(d6439769e7bb4ab49fa1dd35452060d1) on rbd image(226b6da4-15c9-4d10-ab4d-194b313446f9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.120 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Creating config drive at /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.129 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpadjypnla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.275 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpadjypnla" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.305 243456 DEBUG nova.storage.rbd_utils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] rbd image 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.309 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.414 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updated VIF entry in instance network info cache for port cf1a075d-084d-4b7f-afd3-5a1d130b7493. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.416 243456 DEBUG nova.network.neutron [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.439 243456 DEBUG oslo_concurrency.lockutils [req-7e1ff444-99f4-4f63-b0b0-33645176dd4c req-934f25e4-79f8-484e-83ac-6b2da57178b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.480 243456 DEBUG oslo_concurrency.processutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config 163deb6e-49f4-4093-b0c1-98240f93c499_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.480 243456 INFO nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deleting local config drive /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/disk.config because it was imported into RBD.
Feb 28 10:09:38 compute-0 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.5294] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 ovn_controller[146846]: 2026-02-28T10:09:38Z|00446|binding|INFO|Claiming lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 for this chassis.
Feb 28 10:09:38 compute-0 ovn_controller[146846]: 2026-02-28T10:09:38Z|00447|binding|INFO|cf1a075d-084d-4b7f-afd3-5a1d130b7493: Claiming fa:16:3e:c1:c5:d4 10.100.0.13
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.541 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.543 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 bound to our chassis
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.545 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:38 compute-0 systemd-udevd[290853]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.558 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94b90fd4-fd61-40a5-ab57-0f2bb702b854]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.559 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf471a656-31 in ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.561 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf471a656-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.561 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1161d4c3-213b-40f4-80c3-ac670d6a825f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_controller[146846]: 2026-02-28T10:09:38Z|00448|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 ovn-installed in OVS
Feb 28 10:09:38 compute-0 ovn_controller[146846]: 2026-02-28T10:09:38Z|00449|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 up in Southbound
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0da612-e823-4775-bd27-d608e4dcfcd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 systemd-machined[209480]: New machine qemu-62-instance-00000038.
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.5742] device (tapcf1a075d-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.5749] device (tapcf1a075d-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.578 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c9852497-b0dc-4760-a0f1-17c787107331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000038.
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5adc457a-441f-46b9-ac1b-872b01902709]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.625 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3c82fa59-96c6-4763-8fc6-46bf3de9b894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.6332] manager: (tapf471a656-30): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.633 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb21031-5a74-414a-9f2a-1d7db22e1c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 systemd-udevd[290856]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.640 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3646ff-dc08-46fd-95e2-923f41526514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.672 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6686c6-8459-4bc1-a004-8423e5c8092a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.6937] device (tapf471a656-30): carrier: link connected
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.704 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[da41ee3e-8a79-44ff-a851-0c38ac8460a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4abf72c2-de32-45c1-9d59-14622df4b314]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488597, 'reachable_time': 35111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290885, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[644a84a8-6231-4076-b169-69412773ddfd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:76a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488597, 'tstamp': 488597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290886, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5513ff1a-4588-4a4b-a98a-4c915e258ab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488597, 'reachable_time': 35111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290887, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e171c2d0-8fef-478a-90d0-d0563f863900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3dcb09-85e7-49f5-bded-9d93d967363e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.851 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.852 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf471a656-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 kernel: tapf471a656-30: entered promiscuous mode
Feb 28 10:09:38 compute-0 NetworkManager[49805]: <info>  [1772273378.8550] manager: (tapf471a656-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.861 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf471a656-30, col_values=(('external_ids', {'iface-id': '403ee777-cb2a-4f95-bb3a-7e871bc2a2b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:38 compute-0 ovn_controller[146846]: 2026-02-28T10:09:38Z|00450|binding|INFO|Releasing lport 403ee777-cb2a-4f95-bb3a-7e871bc2a2b0 from this chassis (sb_readonly=0)
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.866 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b260655-07ef-4909-a3bd-6b174633e151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.872 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:38.873 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'env', 'PROCESS_TAG=haproxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f471a656-3c36-4e5b-a5f2-7df3f97122e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb 28 10:09:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb 28 10:09:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb 28 10:09:38 compute-0 ceph-mon[76304]: osdmap e188: 3 total, 3 up, 3 in
Feb 28 10:09:38 compute-0 ceph-mon[76304]: pgmap v1263: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 488 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 26 MiB/s wr, 489 op/s
Feb 28 10:09:38 compute-0 nova_compute[243452]: 2026-02-28 10:09:38.969 243456 DEBUG nova.storage.rbd_utils [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(fdddffc1-692d-46e3-8fbc-eca1a14df1a2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.001 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273378.9883766, 163deb6e-49f4-4093-b0c1-98240f93c499 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.001 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Started (Lifecycle Event)
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.028 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.033 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273378.9889781, 163deb6e-49f4-4093-b0c1-98240f93c499 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.033 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Paused (Lifecycle Event)
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.056 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.059 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.084 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:39 compute-0 podman[290979]: 2026-02-28 10:09:39.28229681 +0000 UTC m=+0.063598107 container create 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:09:39 compute-0 systemd[1]: Started libpod-conmon-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope.
Feb 28 10:09:39 compute-0 podman[290979]: 2026-02-28 10:09:39.250708413 +0000 UTC m=+0.032009810 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5353cbb467188b41a538d6dc884c63774e3e2d92080e318eb49b20370d14a5ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.368 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.368 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.369 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.371 243456 INFO nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Terminating instance
Feb 28 10:09:39 compute-0 podman[290979]: 2026-02-28 10:09:39.37165748 +0000 UTC m=+0.152958807 container init 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.372 243456 DEBUG nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:09:39 compute-0 podman[290979]: 2026-02-28 10:09:39.379005066 +0000 UTC m=+0.160306383 container start 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : New worker (291000) forked
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : Loading success.
Feb 28 10:09:39 compute-0 kernel: tap3fb210f2-4c (unregistering): left promiscuous mode
Feb 28 10:09:39 compute-0 NetworkManager[49805]: <info>  [1772273379.4120] device (tap3fb210f2-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:39 compute-0 ovn_controller[146846]: 2026-02-28T10:09:39Z|00451|binding|INFO|Releasing lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 from this chassis (sb_readonly=0)
Feb 28 10:09:39 compute-0 ovn_controller[146846]: 2026-02-28T10:09:39Z|00452|binding|INFO|Setting lport 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 down in Southbound
Feb 28 10:09:39 compute-0 ovn_controller[146846]: 2026-02-28T10:09:39Z|00453|binding|INFO|Removing iface tap3fb210f2-4c ovn-installed in OVS
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.424 243456 DEBUG nova.compute.manager [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.425 243456 DEBUG oslo_concurrency.lockutils [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.426 243456 DEBUG nova.compute.manager [req-ea45cf74-ce16-4512-bb79-fe4844b10542 req-ee3c80cd-acce-4f02-94bd-06221085282e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Processing event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.428 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.433 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273379.4325452, 163deb6e-49f4-4093-b0c1-98240f93c499 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.433 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Resumed (Lifecycle Event)
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.434 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:7e:cf 10.100.0.6'], port_security=['fa:16:3e:be:7e:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8eee8376-acc6-4a01-80c3-d7f0d579f9bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3fb210f2-4c9d-4399-acf6-20e10d93fdd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.443 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.447 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance spawned successfully.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.447 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:39 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Deactivated successfully.
Feb 28 10:09:39 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Consumed 3.739s CPU time.
Feb 28 10:09:39 compute-0 systemd-machined[209480]: Machine qemu-61-instance-00000037 terminated.
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb210f2-4c9d-4399-acf6-20e10d93fdd5 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.478 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.478 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[144310d8-c7df-404f-96ff-d34cb662e778]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.480 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.485 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.489 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.490 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.491 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.491 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.492 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.492 243456 DEBUG nova.virt.libvirt.driver [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.528 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.565 243456 INFO nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 7.07 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.566 243456 DEBUG nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.613 243456 INFO nova.virt.libvirt.driver [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Instance destroyed successfully.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.613 243456 DEBUG nova.objects.instance [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 8eee8376-acc6-4a01-80c3-d7f0d579f9bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [NOTICE]   (290528) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : Exiting Master process...
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : Exiting Master process...
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [ALERT]    (290528) : Current worker (290531) exited with code 143 (Terminated)
Feb 28 10:09:39 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[290504]: [WARNING]  (290528) : All workers exited. Exiting... (0)
Feb 28 10:09:39 compute-0 systemd[1]: libpod-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope: Deactivated successfully.
Feb 28 10:09:39 compute-0 podman[291029]: 2026-02-28 10:09:39.627596637 +0000 UTC m=+0.058853543 container died 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.649 243456 DEBUG nova.virt.libvirt.vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1458563036',display_name='tempest-DeleteServersTestJSON-server-1458563036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1458563036',id=55,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-8hsyesw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:36Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=8eee8376-acc6-4a01-80c3-d7f0d579f9bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.650 243456 DEBUG nova.network.os_vif_util [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "address": "fa:16:3e:be:7e:cf", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb210f2-4c", "ovs_interfaceid": "3fb210f2-4c9d-4399-acf6-20e10d93fdd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.651 243456 DEBUG nova.network.os_vif_util [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.651 243456 DEBUG os_vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fb210f2-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a11d72e3c1b7abf544db6e07b97f78e6bb5dfdbed62af2a9efcbccff78207a6-merged.mount: Deactivated successfully.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.664 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.668 243456 INFO nova.compute.manager [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 8.20 seconds to build instance.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.670 243456 INFO os_vif [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:7e:cf,bridge_name='br-int',has_traffic_filtering=True,id=3fb210f2-4c9d-4399-acf6-20e10d93fdd5,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb210f2-4c')
Feb 28 10:09:39 compute-0 podman[291029]: 2026-02-28 10:09:39.676164601 +0000 UTC m=+0.107421507 container cleanup 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:09:39 compute-0 systemd[1]: libpod-conmon-3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116.scope: Deactivated successfully.
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.695 243456 DEBUG oslo_concurrency.lockutils [None req-44ab6260-42e3-48d7-92ec-a087a2655cf3 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:39 compute-0 podman[291077]: 2026-02-28 10:09:39.748508953 +0000 UTC m=+0.046484147 container remove 3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3c9023-9e39-4a2b-819d-cb0aa4a3597f]: (4, ('Sat Feb 28 10:09:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116)\n3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116\nSat Feb 28 10:09:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116)\n3a1417b7bd406efffcdb2179320e353a7ad9d76485268269b520ee1f1005d116\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94043fe7-787f-46db-8a74-d3877b2fb7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.758 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:39 compute-0 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.775 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[118a544d-60ad-48fd-8755-1df9fd3fefcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.797 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac78930-d466-433b-9f2f-34ce0e3f43c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.799 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b564b0dc-e5a8-47f3-a105-48d9c1377357]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93b540df-63c5-4047-929d-e6819c99a8d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487956, 'reachable_time': 38565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291105, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.821 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:39.821 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[50490164-e911-4bf1-9648-6fb65cfb2761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb 28 10:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb 28 10:09:39 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.960 243456 INFO nova.virt.libvirt.driver [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deleting instance files /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_del
Feb 28 10:09:39 compute-0 ceph-mon[76304]: osdmap e189: 3 total, 3 up, 3 in
Feb 28 10:09:39 compute-0 nova_compute[243452]: 2026-02-28 10:09:39.962 243456 INFO nova.virt.libvirt.driver [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deletion of /var/lib/nova/instances/8eee8376-acc6-4a01-80c3-d7f0d579f9bb_del complete
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.008 243456 INFO nova.compute.manager [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.009 243456 DEBUG oslo.service.loopingcall [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.009 243456 DEBUG nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.010 243456 DEBUG nova.network.neutron [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 499 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 21 MiB/s wr, 530 op/s
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.584 243456 DEBUG nova.network.neutron [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.618 243456 INFO nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Took 0.61 seconds to deallocate network for instance.
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.675 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.676 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019003105728686955 of space, bias 1.0, pg target 0.5700931718606087 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00386913443571144 of space, bias 1.0, pg target 1.1607403307134319 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.918254079518781e-07 of space, bias 4.0, pg target 0.0009470231879104462 quantized to 16 (current 16)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:09:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 10:09:40 compute-0 nova_compute[243452]: 2026-02-28 10:09:40.799 243456 DEBUG oslo_concurrency.processutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:40 compute-0 ceph-mon[76304]: osdmap e190: 3 total, 3 up, 3 in
Feb 28 10:09:40 compute-0 ceph-mon[76304]: pgmap v1266: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 499 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 21 MiB/s wr, 530 op/s
Feb 28 10:09:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640933012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.405 243456 DEBUG oslo_concurrency.processutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.423 243456 DEBUG nova.compute.provider_tree [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.433 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.435 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.435 243456 INFO nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Rebooting instance
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.444 243456 DEBUG nova.scheduler.client.report [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.460 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.461 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquired lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.462 243456 DEBUG nova.network.neutron [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.494 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.546 243456 INFO nova.scheduler.client.report [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 8eee8376-acc6-4a01-80c3-d7f0d579f9bb
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.578 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.579 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.580 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.581 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state rebooting_hard.
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.582 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.583 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.584 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.584 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-unplugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state deleted and task_state None.
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.585 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.585 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG oslo_concurrency.lockutils [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.586 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] No waiting events found dispatching network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.587 243456 WARNING nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received unexpected event network-vif-plugged-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 for instance with vm_state deleted and task_state None.
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.587 243456 DEBUG nova.compute.manager [req-451c137a-d326-4031-b10a-75d36d736533 req-387895b1-3b80-4f37-aee4-ff6745cf568c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Received event network-vif-deleted-3fb210f2-4c9d-4399-acf6-20e10d93fdd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.656 243456 DEBUG oslo_concurrency.lockutils [None req-d5237db8-8ae7-425c-bd3d-b6cf4004068b 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "8eee8376-acc6-4a01-80c3-d7f0d579f9bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.776 243456 INFO nova.virt.libvirt.driver [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Snapshot image upload complete
Feb 28 10:09:41 compute-0 nova_compute[243452]: 2026-02-28 10:09:41.777 243456 INFO nova.compute.manager [None req-a5e7e905-11b5-4ad3-b484-fe80f207ca09 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 5.41 seconds to snapshot the instance on the hypervisor.
Feb 28 10:09:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1640933012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 496 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 22 MiB/s wr, 679 op/s
Feb 28 10:09:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:42 compute-0 ceph-mon[76304]: pgmap v1267: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 496 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 22 MiB/s wr, 679 op/s
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.151 243456 DEBUG nova.network.neutron [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.174 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Releasing lock "refresh_cache-163deb6e-49f4-4093-b0c1-98240f93c499" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.177 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.297 243456 DEBUG nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.343 243456 INFO nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] instance snapshotting
Feb 28 10:09:43 compute-0 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 10:09:43 compute-0 NetworkManager[49805]: <info>  [1772273383.3542] device (tapcf1a075d-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 ovn_controller[146846]: 2026-02-28T10:09:43Z|00454|binding|INFO|Releasing lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 from this chassis (sb_readonly=0)
Feb 28 10:09:43 compute-0 ovn_controller[146846]: 2026-02-28T10:09:43Z|00455|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 down in Southbound
Feb 28 10:09:43 compute-0 ovn_controller[146846]: 2026-02-28T10:09:43Z|00456|binding|INFO|Removing iface tapcf1a075d-08 ovn-installed in OVS
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.388 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.390 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 unbound from our chassis
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.391 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f471a656-3c36-4e5b-a5f2-7df3f97122e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11167fc5-3cca-46f3-91fd-342da4a19bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.393 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace which is not needed anymore
Feb 28 10:09:43 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 28 10:09:43 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Consumed 4.339s CPU time.
Feb 28 10:09:43 compute-0 systemd-machined[209480]: Machine qemu-62-instance-00000038 terminated.
Feb 28 10:09:43 compute-0 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [NOTICE]   (290998) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:43 compute-0 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : Exiting Master process...
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : Exiting Master process...
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [ALERT]    (290998) : Current worker (291000) exited with code 143 (Terminated)
Feb 28 10:09:43 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[290994]: [WARNING]  (290998) : All workers exited. Exiting... (0)
Feb 28 10:09:43 compute-0 NetworkManager[49805]: <info>  [1772273383.5371] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Feb 28 10:09:43 compute-0 systemd[1]: libpod-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope: Deactivated successfully.
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 podman[291155]: 2026-02-28 10:09:43.547147477 +0000 UTC m=+0.055839579 container died 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.557 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance destroyed successfully.
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.558 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'resources' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.576 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.576 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.577 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.577 243456 DEBUG os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.580 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf1a075d-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.585 243456 INFO os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')
Feb 28 10:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5353cbb467188b41a538d6dc884c63774e3e2d92080e318eb49b20370d14a5ad-merged.mount: Deactivated successfully.
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.600 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start _get_guest_xml network_info=[{"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.606 243456 WARNING nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.610 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.611 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.614 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.614 243456 DEBUG nova.virt.libvirt.host [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.615 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.616 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.617 243456 DEBUG nova.virt.hardware [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.618 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:43 compute-0 podman[291155]: 2026-02-28 10:09:43.619802088 +0000 UTC m=+0.128494180 container cleanup 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:09:43 compute-0 systemd[1]: libpod-conmon-3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a.scope: Deactivated successfully.
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.649 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.704 243456 INFO nova.virt.libvirt.driver [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Beginning live snapshot process
Feb 28 10:09:43 compute-0 podman[291189]: 2026-02-28 10:09:43.706537074 +0000 UTC m=+0.066591161 container remove 3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4279ba-784e-4adb-9857-640f412f260c]: (4, ('Sat Feb 28 10:09:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a)\n3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a\nSat Feb 28 10:09:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a)\n3379808acd9d5040014deb9c2c5f4402a64baa66bdb8ad8eddc1d8c95ecbe54a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a002ec2-8c46-4ce6-835b-4590326e05e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.717 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:43 compute-0 kernel: tapf471a656-30: left promiscuous mode
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.729 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dd1665-6ab9-4d53-976b-a66319ea58ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b78c12c3-25b4-4fea-952a-445d8df84adb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.743 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27f4f603-2837-4dec-bb05-c63917f8910b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b690def9-ba1a-46fc-a7d1-8e609231bc81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488590, 'reachable_time': 30132, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291204, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 systemd[1]: run-netns-ovnmeta\x2df471a656\x2d3c36\x2d4e5b\x2da5f2\x2d7df3f97122e0.mount: Deactivated successfully.
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.761 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:43.761 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f538da-ee20-4386-8d26-e7aa68f6f2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:43 compute-0 nova_compute[243452]: 2026-02-28 10:09:43.858 243456 DEBUG nova.virt.libvirt.imagebackend [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:09:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 480 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 13 MiB/s wr, 541 op/s
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.094 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(1ef1506ee39c470ba24c9c06b3a17f6c) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918153978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.205 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.234 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.281 243456 DEBUG nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.282 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.283 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.283 243456 DEBUG oslo_concurrency.lockutils [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.284 243456 DEBUG nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.284 243456 WARNING nova.compute.manager [req-75321434-a2ea-43b6-a860-bf049ae85c7e req-9e941602-4486-4836-8511-51caa9f406a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state reboot_started_hard.
Feb 28 10:09:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177599203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.788 243456 DEBUG oslo_concurrency.processutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.791 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.791 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.793 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.796 243456 DEBUG nova.objects.instance [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'pci_devices' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.813 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <uuid>163deb6e-49f4-4093-b0c1-98240f93c499</uuid>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <name>instance-00000038</name>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:name>tempest-InstanceActionsTestJSON-server-1645315342</nova:name>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:43</nova:creationTime>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:user uuid="1341b7bab4cc4ddca989e12ab7770723">tempest-InstanceActionsTestJSON-1464907638-project-member</nova:user>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:project uuid="13c8391ebb8644dea661a093a38db268">tempest-InstanceActionsTestJSON-1464907638</nova:project>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <nova:port uuid="cf1a075d-084d-4b7f-afd3-5a1d130b7493">
Feb 28 10:09:44 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="serial">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="uuid">163deb6e-49f4-4093-b0c1-98240f93c499</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk">
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/163deb6e-49f4-4093-b0c1-98240f93c499_disk.config">
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c1:c5:d4"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <target dev="tapcf1a075d-08"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499/console.log" append="off"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:44 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:44 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:44 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:44 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:44 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.815 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.816 243456 DEBUG nova.virt.libvirt.driver [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.817 243456 DEBUG nova.virt.libvirt.vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:43Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.818 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.819 243456 DEBUG nova.network.os_vif_util [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.819 243456 DEBUG os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.821 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf1a075d-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.826 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf1a075d-08, col_values=(('external_ids', {'iface-id': 'cf1a075d-084d-4b7f-afd3-5a1d130b7493', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c5:d4', 'vm-uuid': '163deb6e-49f4-4093-b0c1-98240f93c499'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 NetworkManager[49805]: <info>  [1772273384.8294] manager: (tapcf1a075d-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.834 243456 INFO os_vif [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')
Feb 28 10:09:44 compute-0 kernel: tapcf1a075d-08: entered promiscuous mode
Feb 28 10:09:44 compute-0 NetworkManager[49805]: <info>  [1772273384.9295] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Feb 28 10:09:44 compute-0 systemd-udevd[291133]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:44 compute-0 ovn_controller[146846]: 2026-02-28T10:09:44Z|00457|binding|INFO|Claiming lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 for this chassis.
Feb 28 10:09:44 compute-0 ovn_controller[146846]: 2026-02-28T10:09:44Z|00458|binding|INFO|cf1a075d-084d-4b7f-afd3-5a1d130b7493: Claiming fa:16:3e:c1:c5:d4 10.100.0.13
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.942 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '5', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:44 compute-0 NetworkManager[49805]: <info>  [1772273384.9462] device (tapcf1a075d-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.945 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 bound to our chassis
Feb 28 10:09:44 compute-0 NetworkManager[49805]: <info>  [1772273384.9469] device (tapcf1a075d-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:44 compute-0 ovn_controller[146846]: 2026-02-28T10:09:44Z|00459|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 ovn-installed in OVS
Feb 28 10:09:44 compute-0 ovn_controller[146846]: 2026-02-28T10:09:44Z|00460|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 up in Southbound
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.947 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 nova_compute[243452]: 2026-02-28 10:09:44.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.963 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e785ffd3-a0cc-447c-a750-025af19ff01f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf471a656-31 in ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.966 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf471a656-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ff4e30-1660-424b-a145-84fa03aa991d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb52cfb1-be9d-49ff-bdf2-d30e4dfc78fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:44 compute-0 systemd-machined[209480]: New machine qemu-63-instance-00000038.
Feb 28 10:09:44 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000038.
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.982 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[38762393-76ba-4cce-aa08-1f45b934bfa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:44.998 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2310e1c8-9f6b-4b31-acca-0fb156220269]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.024 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbe51cc-5ae6-40e9-8e73-bf95552158d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 NetworkManager[49805]: <info>  [1772273385.0314] manager: (tapf471a656-30): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f036d175-4acb-469e-a5ff-d4456739918b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.059 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9106e5ee-f4fd-44da-8a1b-ba3c535e9fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.063 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[263a4817-9289-4bd1-a876-ba88b4f30bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb 28 10:09:45 compute-0 NetworkManager[49805]: <info>  [1772273385.0815] device (tapf471a656-30): carrier: link connected
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.084 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c32ef933-a76d-4527-9afb-14c79c5d2880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93b11eac-d902-469f-8c37-633893fec887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489236, 'reachable_time': 42598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291362, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.112 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[238af499-9b3a-4298-aae1-81cb55e4c388]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:76a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489236, 'tstamp': 489236}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291363, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9817943f-af04-4e9b-b164-9ec44a560b84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf471a656-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:76:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489236, 'reachable_time': 42598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291364, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ceph-mon[76304]: pgmap v1268: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 480 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 13 MiB/s wr, 541 op/s
Feb 28 10:09:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3918153978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3177599203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.158 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea36e70-e001-438d-aebb-85cb1f22727d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.179 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] cloning vms/9098ebf3-e36c-492b-9c50-dc6f0078794d_disk@1ef1506ee39c470ba24c9c06b3a17f6c to images/b452f76e-84b8-461a-8b95-21b09f41396c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.223 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.222 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3843613-f010-48c6-9ac7-26ed61c27519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.223 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.224 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.225 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.225 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf471a656-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:45 compute-0 NetworkManager[49805]: <info>  [1772273385.2277] manager: (tapf471a656-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Feb 28 10:09:45 compute-0 kernel: tapf471a656-30: entered promiscuous mode
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.231 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf471a656-30, col_values=(('external_ids', {'iface-id': '403ee777-cb2a-4f95-bb3a-7e871bc2a2b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.232 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:45 compute-0 ovn_controller[146846]: 2026-02-28T10:09:45Z|00461|binding|INFO|Releasing lport 403ee777-cb2a-4f95-bb3a-7e871bc2a2b0 from this chassis (sb_readonly=0)
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.235 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcd6d95-de52-4e6b-b9aa-97102c064dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/f471a656-3c36-4e5b-a5f2-7df3f97122e0.pid.haproxy
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID f471a656-3c36-4e5b-a5f2-7df3f97122e0
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:45.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'env', 'PROCESS_TAG=haproxy-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f471a656-3c36-4e5b-a5f2-7df3f97122e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.240 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.269 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] flattening images/b452f76e-84b8-461a-8b95-21b09f41396c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.342 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.343 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.351 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.351 243456 INFO nova.compute.claims [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:09:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:09:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.534 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:45 compute-0 podman[291466]: 2026-02-28 10:09:45.580111234 +0000 UTC m=+0.021532696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:45 compute-0 podman[291466]: 2026-02-28 10:09:45.800875734 +0000 UTC m=+0.242297166 container create efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:09:45 compute-0 systemd[1]: Started libpod-conmon-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope.
Feb 28 10:09:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b352ab757dfa9cdc7e9943d005cb3aec7dad2b98cce6ff8ebb2b10d663fcb13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.995 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.996 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 163deb6e-49f4-4093-b0c1-98240f93c499 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.996 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273385.9654663, 163deb6e-49f4-4093-b0c1-98240f93c499 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:45 compute-0 nova_compute[243452]: 2026-02-28 10:09:45.997 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Resumed (Lifecycle Event)
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.003 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance rebooted successfully.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.004 243456 DEBUG nova.compute.manager [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:46 compute-0 podman[291466]: 2026-02-28 10:09:46.010005237 +0000 UTC m=+0.451426689 container init efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:09:46 compute-0 podman[291466]: 2026-02-28 10:09:46.016114899 +0000 UTC m=+0.457536331 container start efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:09:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 455 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 12 MiB/s wr, 596 op/s
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.034 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.037 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:46 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : New worker (291550) forked
Feb 28 10:09:46 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : Loading success.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.049 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] removing snapshot(1ef1506ee39c470ba24c9c06b3a17f6c) on rbd image(9098ebf3-e36c-492b-9c50-dc6f0078794d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.074 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.075 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273385.9662154, 163deb6e-49f4-4093-b0c1-98240f93c499 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.075 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Started (Lifecycle Event)
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.081 243456 DEBUG oslo_concurrency.lockutils [None req-f7615941-ec69-4ddd-86e3-73d976c937b5 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.101 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14418466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb 28 10:09:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb 28 10:09:46 compute-0 ceph-mon[76304]: osdmap e191: 3 total, 3 up, 3 in
Feb 28 10:09:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:09:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3112937870' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:09:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/14418466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.163 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.169 243456 DEBUG nova.compute.provider_tree [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:46 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.190 243456 DEBUG nova.scheduler.client.report [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.204 243456 DEBUG nova.storage.rbd_utils [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] creating snapshot(snap) on rbd image(b452f76e-84b8-461a-8b95-21b09f41396c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.253 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.254 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.304 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.304 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.331 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.350 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.465 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.467 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.467 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating image(s)
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.488 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.512 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.538 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.543 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.597 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.598 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.599 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.600 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG oslo_concurrency.lockutils [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.601 243456 DEBUG nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.602 243456 WARNING nova.compute.manager [req-76a273a9-9695-45b5-b579-9aca74975875 req-f8951e1f-cdcf-4d11-bdd8-f24069403f45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state None.
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.617 243456 DEBUG nova.policy [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.638 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.639 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.639 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.640 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.664 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:46 compute-0 nova_compute[243452]: 2026-02-28 10:09:46.668 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.069 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.126 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:09:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb 28 10:09:47 compute-0 ceph-mon[76304]: pgmap v1270: 305 pgs: 305 active+clean; 455 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 12 MiB/s wr, 596 op/s
Feb 28 10:09:47 compute-0 ceph-mon[76304]: osdmap e192: 3 total, 3 up, 3 in
Feb 28 10:09:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb 28 10:09:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.343 243456 DEBUG nova.objects.instance [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.365 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.365 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Ensure instance console log exists: /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.366 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.538 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.541 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.541 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.542 243456 INFO nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Terminating instance
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.543 243456 DEBUG nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:09:47 compute-0 kernel: tapcf1a075d-08 (unregistering): left promiscuous mode
Feb 28 10:09:47 compute-0 NetworkManager[49805]: <info>  [1772273387.5793] device (tapcf1a075d-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.579 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Successfully created port: b74d22f2-fc92-4f30-b403-6cecd975b301 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:09:47 compute-0 ovn_controller[146846]: 2026-02-28T10:09:47Z|00462|binding|INFO|Releasing lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 from this chassis (sb_readonly=0)
Feb 28 10:09:47 compute-0 ovn_controller[146846]: 2026-02-28T10:09:47Z|00463|binding|INFO|Setting lport cf1a075d-084d-4b7f-afd3-5a1d130b7493 down in Southbound
Feb 28 10:09:47 compute-0 ovn_controller[146846]: 2026-02-28T10:09:47Z|00464|binding|INFO|Removing iface tapcf1a075d-08 ovn-installed in OVS
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.598 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c5:d4 10.100.0.13'], port_security=['fa:16:3e:c1:c5:d4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '163deb6e-49f4-4093-b0c1-98240f93c499', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13c8391ebb8644dea661a093a38db268', 'neutron:revision_number': '6', 'neutron:security_group_ids': '53eb53cd-49a8-4522-81fc-f6ca4a23f74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=510eab2e-34c0-4fd9-8330-79799ad97e91, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cf1a075d-084d-4b7f-afd3-5a1d130b7493) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.599 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cf1a075d-084d-4b7f-afd3-5a1d130b7493 in datapath f471a656-3c36-4e5b-a5f2-7df3f97122e0 unbound from our chassis
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.602 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f471a656-3c36-4e5b-a5f2-7df3f97122e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.603 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fc4d71-9d01-4b3b-bb3b-e3a2848fbe48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.604 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 namespace which is not needed anymore
Feb 28 10:09:47 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 28 10:09:47 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000038.scope: Consumed 2.185s CPU time.
Feb 28 10:09:47 compute-0 systemd-machined[209480]: Machine qemu-63-instance-00000038 terminated.
Feb 28 10:09:47 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : haproxy version is 2.8.14-c23fe91
Feb 28 10:09:47 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [NOTICE]   (291548) : path to executable is /usr/sbin/haproxy
Feb 28 10:09:47 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [WARNING]  (291548) : Exiting Master process...
Feb 28 10:09:47 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [ALERT]    (291548) : Current worker (291550) exited with code 143 (Terminated)
Feb 28 10:09:47 compute-0 neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0[291525]: [WARNING]  (291548) : All workers exited. Exiting... (0)
Feb 28 10:09:47 compute-0 systemd[1]: libpod-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope: Deactivated successfully.
Feb 28 10:09:47 compute-0 podman[291769]: 2026-02-28 10:09:47.730821866 +0000 UTC m=+0.044078189 container died efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88-userdata-shm.mount: Deactivated successfully.
Feb 28 10:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b352ab757dfa9cdc7e9943d005cb3aec7dad2b98cce6ff8ebb2b10d663fcb13-merged.mount: Deactivated successfully.
Feb 28 10:09:47 compute-0 podman[291769]: 2026-02-28 10:09:47.760651474 +0000 UTC m=+0.073907797 container cleanup efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:09:47 compute-0 NetworkManager[49805]: <info>  [1772273387.7629] manager: (tapcf1a075d-08): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.784 243456 INFO nova.virt.libvirt.driver [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Instance destroyed successfully.
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.785 243456 DEBUG nova.objects.instance [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lazy-loading 'resources' on Instance uuid 163deb6e-49f4-4093-b0c1-98240f93c499 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:47 compute-0 systemd[1]: libpod-conmon-efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88.scope: Deactivated successfully.
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.798 243456 DEBUG nova.virt.libvirt.vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1645315342',display_name='tempest-InstanceActionsTestJSON-server-1645315342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1645315342',id=56,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13c8391ebb8644dea661a093a38db268',ramdisk_id='',reservation_id='r-fir4075z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1464907638',owner_user_name='tempest-InstanceActionsTestJSON-1464907638-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='1341b7bab4cc4ddca989e12ab7770723',uuid=163deb6e-49f4-4093-b0c1-98240f93c499,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.799 243456 DEBUG nova.network.os_vif_util [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converting VIF {"id": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "address": "fa:16:3e:c1:c5:d4", "network": {"id": "f471a656-3c36-4e5b-a5f2-7df3f97122e0", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-618174197-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13c8391ebb8644dea661a093a38db268", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf1a075d-08", "ovs_interfaceid": "cf1a075d-084d-4b7f-afd3-5a1d130b7493", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.799 243456 DEBUG nova.network.os_vif_util [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.800 243456 DEBUG os_vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.802 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf1a075d-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.810 243456 INFO os_vif [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c5:d4,bridge_name='br-int',has_traffic_filtering=True,id=cf1a075d-084d-4b7f-afd3-5a1d130b7493,network=Network(f471a656-3c36-4e5b-a5f2-7df3f97122e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf1a075d-08')
Feb 28 10:09:47 compute-0 podman[291803]: 2026-02-28 10:09:47.823417546 +0000 UTC m=+0.042283538 container remove efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce23c9d-14ef-4c3b-9ee3-d73c4d6834cd]: (4, ('Sat Feb 28 10:09:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88)\nefdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88\nSat Feb 28 10:09:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 (efdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88)\nefdce033e3f9cca6919aad4b3936f3c29c1ae8feef640bcc82c450e87cbb8e88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.833 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eadf6622-5967-4ae2-b9e6-79af5d4dc27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.835 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf471a656-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 kernel: tapf471a656-30: left promiscuous mode
Feb 28 10:09:47 compute-0 nova_compute[243452]: 2026-02-28 10:09:47.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04f2a9f2-389c-45f5-b6c3-b8a932fb2da9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.862 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d894ec8-2e61-4bb0-ba58-5810306bc235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.865 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74061e2d-0edc-4d16-a97e-28b363b4cf75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89f228d4-849d-4312-a03e-c0792cb1c551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489230, 'reachable_time': 15766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291842, 'error': None, 'target': 'ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 systemd[1]: run-netns-ovnmeta\x2df471a656\x2d3c36\x2d4e5b\x2da5f2\x2d7df3f97122e0.mount: Deactivated successfully.
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.884 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f471a656-3c36-4e5b-a5f2-7df3f97122e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:09:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:47.884 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[52943730-c327-46f2-98a8-5c6d280958ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb 28 10:09:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb 28 10:09:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb 28 10:09:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 509 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 7.1 MiB/s wr, 375 op/s
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.063 243456 INFO nova.virt.libvirt.driver [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deleting instance files /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499_del
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.063 243456 INFO nova.virt.libvirt.driver [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deletion of /var/lib/nova/instances/163deb6e-49f4-4093-b0c1-98240f93c499_del complete
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.117 243456 INFO nova.compute.manager [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.117 243456 DEBUG oslo.service.loopingcall [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.118 243456 DEBUG nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.118 243456 DEBUG nova.network.neutron [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:09:48 compute-0 ceph-mon[76304]: osdmap e193: 3 total, 3 up, 3 in
Feb 28 10:09:48 compute-0 ceph-mon[76304]: osdmap e194: 3 total, 3 up, 3 in
Feb 28 10:09:48 compute-0 ceph-mon[76304]: pgmap v1274: 305 pgs: 305 active+clean; 509 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 7.1 MiB/s wr, 375 op/s
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.625 243456 DEBUG nova.network.neutron [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.648 243456 INFO nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Took 0.53 seconds to deallocate network for instance.
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.669 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Successfully updated port: b74d22f2-fc92-4f30-b403-6cecd975b301 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.686 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.687 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.716 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.717 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.718 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.719 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-unplugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.720 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.721 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.721 243456 DEBUG oslo_concurrency.lockutils [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.722 243456 DEBUG nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] No waiting events found dispatching network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.722 243456 WARNING nova.compute.manager [req-ba129f3c-266b-4720-a33d-fae6d364f7d6 req-d732e63c-5f20-4854-8f19-3a3bdf3b3e6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received unexpected event network-vif-plugged-cf1a075d-084d-4b7f-afd3-5a1d130b7493 for instance with vm_state active and task_state deleting.
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.733 243456 DEBUG nova.compute.manager [req-5a32fd09-67ff-4ef0-ab32-566023f9b3b6 req-e5740772-a121-4ac8-a024-7ad0a8ad96ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Received event network-vif-deleted-cf1a075d-084d-4b7f-afd3-5a1d130b7493 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.889 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.892 243456 DEBUG oslo_concurrency.processutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.963 243456 INFO nova.virt.libvirt.driver [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Snapshot image upload complete
Feb 28 10:09:48 compute-0 nova_compute[243452]: 2026-02-28 10:09:48.964 243456 INFO nova.compute.manager [None req-313753f3-c9fa-46ab-a1f8-ebb614454755 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 5.62 seconds to snapshot the instance on the hypervisor.
Feb 28 10:09:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:09:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041087252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:49 compute-0 nova_compute[243452]: 2026-02-28 10:09:49.468 243456 DEBUG oslo_concurrency.processutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:49 compute-0 nova_compute[243452]: 2026-02-28 10:09:49.473 243456 DEBUG nova.compute.provider_tree [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:09:49 compute-0 nova_compute[243452]: 2026-02-28 10:09:49.801 243456 DEBUG nova.scheduler.client.report [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:09:49 compute-0 nova_compute[243452]: 2026-02-28 10:09:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:49 compute-0 nova_compute[243452]: 2026-02-28 10:09:49.861 243456 INFO nova.scheduler.client.report [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Deleted allocations for instance 163deb6e-49f4-4093-b0c1-98240f93c499
Feb 28 10:09:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1041087252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:09:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 523 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 11 MiB/s wr, 372 op/s
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.156 243456 DEBUG nova.network.neutron [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.191 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.192 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance network_info: |[{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.194 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Start _get_guest_xml network_info=[{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.199 243456 WARNING nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.204 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.205 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.210 243456 DEBUG oslo_concurrency.lockutils [None req-cdb6c621-da2d-4c75-8ff3-f87d177a7e45 1341b7bab4cc4ddca989e12ab7770723 13c8391ebb8644dea661a093a38db268 - - default default] Lock "163deb6e-49f4-4093-b0c1-98240f93c499" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.211 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.212 243456 DEBUG nova.virt.libvirt.host [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.212 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.213 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.214 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.215 243456 DEBUG nova.virt.hardware [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.218 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449709121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.774 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.802 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.807 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.878 243456 DEBUG nova.compute.manager [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.879 243456 DEBUG nova.compute.manager [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing instance network info cache due to event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:09:50 compute-0 nova_compute[243452]: 2026-02-28 10:09:50.880 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:09:50 compute-0 ceph-mon[76304]: pgmap v1275: 305 pgs: 305 active+clean; 523 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 11 MiB/s wr, 372 op/s
Feb 28 10:09:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3449709121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:09:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904316254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.381 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.384 243456 DEBUG nova.virt.libvirt.vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5158
86650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.384 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.386 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.387 243456 DEBUG nova.objects.instance [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.402 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <uuid>9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</uuid>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <name>instance-00000039</name>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:name>tempest-DeleteServersTestJSON-server-980989576</nova:name>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:09:50</nova:creationTime>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <nova:port uuid="b74d22f2-fc92-4f30-b403-6cecd975b301">
Feb 28 10:09:51 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <system>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="serial">9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="uuid">9633f2a0-e94b-40f8-a1c1-6a9ebf9458be</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </system>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <os>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </os>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <features>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </features>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk">
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config">
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:09:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7f:e6:f5"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <target dev="tapb74d22f2-fc"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/console.log" append="off"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <video>
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </video>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:09:51 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:09:51 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:09:51 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:09:51 compute-0 nova_compute[243452]: </domain>
Feb 28 10:09:51 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Preparing to wait for external event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.404 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.405 243456 DEBUG nova.virt.libvirt.vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:09:46Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.406 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.407 243456 DEBUG nova.network.os_vif_util [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.407 243456 DEBUG os_vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.409 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.410 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.414 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb74d22f2-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.414 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb74d22f2-fc, col_values=(('external_ids', {'iface-id': 'b74d22f2-fc92-4f30-b403-6cecd975b301', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:e6:f5', 'vm-uuid': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:51 compute-0 NetworkManager[49805]: <info>  [1772273391.4176] manager: (tapb74d22f2-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.423 243456 INFO os_vif [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc')
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.485 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.485 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.486 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:7f:e6:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.486 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Using config drive
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.507 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.908 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Creating config drive at /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config
Feb 28 10:09:51 compute-0 nova_compute[243452]: 2026-02-28 10:09:51.915 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphb96m2be execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2904316254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:09:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 542 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 11 MiB/s wr, 381 op/s
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.069 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphb96m2be" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.109 243456 DEBUG nova.storage.rbd_utils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.115 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.162 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated VIF entry in instance network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.163 243456 DEBUG nova.network.neutron [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.187 243456 DEBUG oslo_concurrency.lockutils [req-843086f3-7f5d-4a54-a6b5-71b4e4da3ed1 req-ef39f2e9-814d-4899-b24f-8fb99115f9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.398 243456 DEBUG oslo_concurrency.processutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.399 243456 INFO nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deleting local config drive /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be/disk.config because it was imported into RBD.
Feb 28 10:09:52 compute-0 kernel: tapb74d22f2-fc: entered promiscuous mode
Feb 28 10:09:52 compute-0 NetworkManager[49805]: <info>  [1772273392.4535] manager: (tapb74d22f2-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:52 compute-0 ovn_controller[146846]: 2026-02-28T10:09:52Z|00465|binding|INFO|Claiming lport b74d22f2-fc92-4f30-b403-6cecd975b301 for this chassis.
Feb 28 10:09:52 compute-0 ovn_controller[146846]: 2026-02-28T10:09:52Z|00466|binding|INFO|b74d22f2-fc92-4f30-b403-6cecd975b301: Claiming fa:16:3e:7f:e6:f5 10.100.0.5
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.462 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:e6:f5 10.100.0.5'], port_security=['fa:16:3e:7f:e6:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b74d22f2-fc92-4f30-b403-6cecd975b301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.464 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b74d22f2-fc92-4f30-b403-6cecd975b301 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.465 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29263ca6-ae44-43ad-9f5b-f5a4c6647435]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.481 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:09:52 compute-0 ovn_controller[146846]: 2026-02-28T10:09:52Z|00467|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 up in Southbound
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.839 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[759be36f-2279-4d55-b7be-777d1e9f217e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 ovn_controller[146846]: 2026-02-28T10:09:52Z|00468|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 ovn-installed in OVS
Feb 28 10:09:52 compute-0 nova_compute[243452]: 2026-02-28 10:09:52.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a730ac69-c158-41a9-9bc8-84836fc8069c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.861 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a34e20-d603-4f18-8bbe-f2afef37bfbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 systemd-machined[209480]: New machine qemu-64-instance-00000039.
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f90a394e-8f5c-45d1-9c55-6ae4a6cb02eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000039.
Feb 28 10:09:52 compute-0 systemd-udevd[292006]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:09:52 compute-0 NetworkManager[49805]: <info>  [1772273392.9070] device (tapb74d22f2-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:09:52 compute-0 NetworkManager[49805]: <info>  [1772273392.9079] device (tapb74d22f2-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.912 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[382d9a52-7676-4942-83dd-b41e07fcd005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.919 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[880d1bf4-c7b4-4ff6-9a67-5c4940ec973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 NetworkManager[49805]: <info>  [1772273392.9211] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Feb 28 10:09:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.952 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94ef4533-442f-432d-bf03-8f4c29a558d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.956 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95b28ce9-e991-4db5-9acd-9688f72b8210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:52 compute-0 NetworkManager[49805]: <info>  [1772273392.9779] device (tap8e92100d-80): carrier: link connected
Feb 28 10:09:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:52.983 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[03b69734-7e51-4e06-a265-bb183bc9337d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.000 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4bd5e-777f-4387-a18e-67d228302ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490026, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292035, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ceph-mon[76304]: pgmap v1276: 305 pgs: 305 active+clean; 542 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 11 MiB/s wr, 381 op/s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[064988d8-1249-4141-801c-d1531dce7838]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490026, 'tstamp': 490026}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292036, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da3cd615-af54-40d4-b494-d72135e72742]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490026, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292037, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7654ae-d336-428a-b221-bd6eaf2c9ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26ab27d5-760e-4955-a72e-79547f06b3ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.158 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:53 compute-0 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:53 compute-0 NetworkManager[49805]: <info>  [1772273393.1621] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:53 compute-0 ovn_controller[146846]: 2026-02-28T10:09:53Z|00469|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.168 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f50d9ed2-3299-47d3-be93-9cf0bd27d025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.170 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:09:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:53.170 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.184 243456 DEBUG nova.compute.manager [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.185 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.185 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.186 243456 DEBUG oslo_concurrency.lockutils [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.186 243456 DEBUG nova.compute.manager [req-9f236ab7-19b9-470e-b02a-491db9ad18b8 req-799a60f9-33fe-4e78-b4ea-132afe49ed5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Processing event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.297 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.2964723, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.297 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Started (Lifecycle Event)
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.300 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.303 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:09:53 compute-0 ovn_controller[146846]: 2026-02-28T10:09:53Z|00470|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.310 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance spawned successfully.
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.310 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.318 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.339 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.297761, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.345 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Paused (Lifecycle Event)
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.350 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.350 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.351 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.352 243456 DEBUG nova.virt.libvirt.driver [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.376 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.379 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273393.3024683, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.379 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Resumed (Lifecycle Event)
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.402 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.405 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.417 243456 INFO nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Took 6.95 seconds to spawn the instance on the hypervisor.
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.418 243456 DEBUG nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.459 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.506 243456 INFO nova.compute.manager [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Took 8.20 seconds to build instance.
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.523 243456 DEBUG oslo_concurrency.lockutils [None req-00373dea-7305-429d-bb7e-d302ae6dbe1a 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:53 compute-0 podman[292111]: 2026-02-28 10:09:53.576128591 +0000 UTC m=+0.057233878 container create 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:09:53 compute-0 systemd[1]: Started libpod-conmon-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope.
Feb 28 10:09:53 compute-0 podman[292111]: 2026-02-28 10:09:53.541266792 +0000 UTC m=+0.022372119 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:09:53 compute-0 nova_compute[243452]: 2026-02-28 10:09:53.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e74bf99b9c75f9c89a25a756913a23bd14227a4210479c5fb175cf03370893/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:09:53 compute-0 podman[292111]: 2026-02-28 10:09:53.675275696 +0000 UTC m=+0.156381003 container init 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:09:53 compute-0 podman[292111]: 2026-02-28 10:09:53.68146736 +0000 UTC m=+0.162572647 container start 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:09:53 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : New worker (292131) forked
Feb 28 10:09:53 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : Loading success.
Feb 28 10:09:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 530 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.6 MiB/s wr, 301 op/s
Feb 28 10:09:54 compute-0 nova_compute[243452]: 2026-02-28 10:09:54.610 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273379.6085157, 8eee8376-acc6-4a01-80c3-d7f0d579f9bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:09:54 compute-0 nova_compute[243452]: 2026-02-28 10:09:54.610 243456 INFO nova.compute.manager [-] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] VM Stopped (Lifecycle Event)
Feb 28 10:09:54 compute-0 nova_compute[243452]: 2026-02-28 10:09:54.634 243456 DEBUG nova.compute.manager [None req-6fc0830b-eb4e-4bf1-a37f-50f2b5421def - - - - - -] [instance: 8eee8376-acc6-4a01-80c3-d7f0d579f9bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:09:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:54.934 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:09:54 compute-0 nova_compute[243452]: 2026-02-28 10:09:54.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:54.935 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:09:55 compute-0 ceph-mon[76304]: pgmap v1277: 305 pgs: 305 active+clean; 530 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 8.6 MiB/s wr, 301 op/s
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.494 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 DEBUG oslo_concurrency.lockutils [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 DEBUG nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:09:55 compute-0 nova_compute[243452]: 2026-02-28 10:09:55.495 243456 WARNING nova.compute.manager [req-970591b7-4623-480a-9e4d-7d3499be8756 req-1896fb99-8194-4520-9de5-56c8d2623d97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state None.
Feb 28 10:09:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 530 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 315 op/s
Feb 28 10:09:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb 28 10:09:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb 28 10:09:56 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb 28 10:09:56 compute-0 nova_compute[243452]: 2026-02-28 10:09:56.353 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:56 compute-0 nova_compute[243452]: 2026-02-28 10:09:56.354 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:56 compute-0 nova_compute[243452]: 2026-02-28 10:09:56.354 243456 INFO nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shelving
Feb 28 10:09:56 compute-0 nova_compute[243452]: 2026-02-28 10:09:56.382 243456 DEBUG nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:09:56 compute-0 nova_compute[243452]: 2026-02-28 10:09:56.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:56.938 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:09:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb 28 10:09:57 compute-0 ceph-mon[76304]: pgmap v1278: 305 pgs: 305 active+clean; 530 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 315 op/s
Feb 28 10:09:57 compute-0 ceph-mon[76304]: osdmap e195: 3 total, 3 up, 3 in
Feb 28 10:09:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb 28 10:09:57 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb 28 10:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.848 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:09:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:09:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:09:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 28 10:09:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb 28 10:09:57 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb 28 10:09:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 497 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 161 op/s
Feb 28 10:09:58 compute-0 ceph-mon[76304]: osdmap e196: 3 total, 3 up, 3 in
Feb 28 10:09:58 compute-0 ceph-mon[76304]: osdmap e197: 3 total, 3 up, 3 in
Feb 28 10:09:58 compute-0 nova_compute[243452]: 2026-02-28 10:09:58.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:09:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 28 10:09:59 compute-0 ceph-mon[76304]: pgmap v1282: 305 pgs: 305 active+clean; 497 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 161 op/s
Feb 28 10:09:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb 28 10:09:59 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 446 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 KiB/s wr, 136 op/s
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.080 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.082 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.082 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.083 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.083 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.085 243456 INFO nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Terminating instance
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.086 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.086 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquired lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.087 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:10:00 compute-0 ceph-mon[76304]: osdmap e198: 3 total, 3 up, 3 in
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.331 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.589 243456 DEBUG nova.network.neutron [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.605 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Releasing lock "refresh_cache-226b6da4-15c9-4d10-ab4d-194b313446f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:00 compute-0 nova_compute[243452]: 2026-02-28 10:10:00.607 243456 DEBUG nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:10:00 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Deactivated successfully.
Feb 28 10:10:00 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Consumed 13.260s CPU time.
Feb 28 10:10:00 compute-0 systemd-machined[209480]: Machine qemu-60-instance-00000036 terminated.
Feb 28 10:10:00 compute-0 podman[292141]: 2026-02-28 10:10:00.882199912 +0000 UTC m=+0.067989011 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 10:10:00 compute-0 podman[292140]: 2026-02-28 10:10:00.931179637 +0000 UTC m=+0.115641698 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.038 243456 INFO nova.virt.libvirt.driver [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance destroyed successfully.
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.039 243456 DEBUG nova.objects.instance [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'resources' on Instance uuid 226b6da4-15c9-4d10-ab4d-194b313446f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:01 compute-0 ceph-mon[76304]: pgmap v1284: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 446 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.5 KiB/s wr, 136 op/s
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.656 243456 INFO nova.virt.libvirt.driver [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deleting instance files /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9_del
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.657 243456 INFO nova.virt.libvirt.driver [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deletion of /var/lib/nova/instances/226b6da4-15c9-4d10-ab4d-194b313446f9_del complete
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.710 243456 INFO nova.compute.manager [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 1.10 seconds to destroy the instance on the hypervisor.
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.710 243456 DEBUG oslo.service.loopingcall [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.711 243456 DEBUG nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.711 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.916 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.930 243456 DEBUG nova.network.neutron [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.943 243456 INFO nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Took 0.23 seconds to deallocate network for instance.
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.996 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:01 compute-0 nova_compute[243452]: 2026-02-28 10:10:01.997 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 383 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.6 KiB/s wr, 211 op/s
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.091 243456 DEBUG oslo_concurrency.processutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:02 compute-0 ceph-mon[76304]: pgmap v1285: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 383 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.6 KiB/s wr, 211 op/s
Feb 28 10:10:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3967393191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.666 243456 DEBUG oslo_concurrency.processutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.673 243456 DEBUG nova.compute.provider_tree [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.780 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273387.7784197, 163deb6e-49f4-4093-b0c1-98240f93c499 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.781 243456 INFO nova.compute.manager [-] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] VM Stopped (Lifecycle Event)
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.796 243456 DEBUG nova.scheduler.client.report [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.812 243456 DEBUG nova.compute.manager [None req-d4e3a452-ffd9-4715-bb20-d2fc02dbf462 - - - - - -] [instance: 163deb6e-49f4-4093-b0c1-98240f93c499] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.819 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.858 243456 INFO nova.scheduler.client.report [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Deleted allocations for instance 226b6da4-15c9-4d10-ab4d-194b313446f9
Feb 28 10:10:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb 28 10:10:02 compute-0 nova_compute[243452]: 2026-02-28 10:10:02.946 243456 DEBUG oslo_concurrency.lockutils [None req-bad359d2-e3e8-4ff2-9ebd-b0ac8bef9ed7 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "226b6da4-15c9-4d10-ab4d-194b313446f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb 28 10:10:03 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb 28 10:10:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3967393191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:03 compute-0 ceph-mon[76304]: osdmap e199: 3 total, 3 up, 3 in
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.332 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.333 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.334 243456 INFO nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Terminating instance
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.335 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.335 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquired lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.336 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:03 compute-0 nova_compute[243452]: 2026-02-28 10:10:03.769 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 333 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.9 KiB/s wr, 130 op/s
Feb 28 10:10:04 compute-0 nova_compute[243452]: 2026-02-28 10:10:04.101 243456 DEBUG nova.network.neutron [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:04 compute-0 nova_compute[243452]: 2026-02-28 10:10:04.121 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Releasing lock "refresh_cache-9098ebf3-e36c-492b-9c50-dc6f0078794d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:04 compute-0 nova_compute[243452]: 2026-02-28 10:10:04.123 243456 DEBUG nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:10:04 compute-0 ceph-mon[76304]: pgmap v1287: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 333 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.9 KiB/s wr, 130 op/s
Feb 28 10:10:04 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 28 10:10:04 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 28 10:10:04 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Consumed 13.461s CPU time.
Feb 28 10:10:04 compute-0 systemd-machined[209480]: Machine qemu-59-instance-00000035 terminated.
Feb 28 10:10:04 compute-0 nova_compute[243452]: 2026-02-28 10:10:04.547 243456 INFO nova.virt.libvirt.driver [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance destroyed successfully.
Feb 28 10:10:04 compute-0 nova_compute[243452]: 2026-02-28 10:10:04.548 243456 DEBUG nova.objects.instance [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lazy-loading 'resources' on Instance uuid 9098ebf3-e36c-492b-9c50-dc6f0078794d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.469 243456 INFO nova.virt.libvirt.driver [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deleting instance files /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d_del
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.471 243456 INFO nova.virt.libvirt.driver [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deletion of /var/lib/nova/instances/9098ebf3-e36c-492b-9c50-dc6f0078794d_del complete
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.529 243456 INFO nova.compute.manager [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 1.41 seconds to destroy the instance on the hypervisor.
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.530 243456 DEBUG oslo.service.loopingcall [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.531 243456 DEBUG nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.531 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.979 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:05 compute-0 nova_compute[243452]: 2026-02-28 10:10:05.996 243456 DEBUG nova.network.neutron [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.011 243456 INFO nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Took 0.48 seconds to deallocate network for instance.
Feb 28 10:10:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 276 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.8 MiB/s wr, 169 op/s
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.098 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.099 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.200 243456 DEBUG oslo_concurrency.processutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.432 243456 DEBUG nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:10:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689457357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.797 243456 DEBUG oslo_concurrency.processutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.805 243456 DEBUG nova.compute.provider_tree [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.833 243456 DEBUG nova.scheduler.client.report [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.865 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.893 243456 INFO nova.scheduler.client.report [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Deleted allocations for instance 9098ebf3-e36c-492b-9c50-dc6f0078794d
Feb 28 10:10:06 compute-0 nova_compute[243452]: 2026-02-28 10:10:06.972 243456 DEBUG oslo_concurrency.lockutils [None req-52d6e437-b408-49a5-8ed7-7e5b74cb829f 7b7f4fcc1d0d41f59aed36b3de16f8e2 6a54f983c0fa466f9e11947f104ed5ca - - default default] Lock "9098ebf3-e36c-492b-9c50-dc6f0078794d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:07 compute-0 ceph-mon[76304]: pgmap v1288: 305 pgs: 305 active+clean; 276 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.8 MiB/s wr, 169 op/s
Feb 28 10:10:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2689457357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb 28 10:10:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb 28 10:10:08 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb 28 10:10:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 3.1 MiB/s wr, 236 op/s
Feb 28 10:10:08 compute-0 nova_compute[243452]: 2026-02-28 10:10:08.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 28 10:10:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb 28 10:10:09 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb 28 10:10:09 compute-0 ceph-mon[76304]: osdmap e200: 3 total, 3 up, 3 in
Feb 28 10:10:09 compute-0 ceph-mon[76304]: pgmap v1290: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 3.1 MiB/s wr, 236 op/s
Feb 28 10:10:09 compute-0 kernel: tapb74d22f2-fc (unregistering): left promiscuous mode
Feb 28 10:10:09 compute-0 NetworkManager[49805]: <info>  [1772273409.2295] device (tapb74d22f2-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 ovn_controller[146846]: 2026-02-28T10:10:09Z|00471|binding|INFO|Releasing lport b74d22f2-fc92-4f30-b403-6cecd975b301 from this chassis (sb_readonly=0)
Feb 28 10:10:09 compute-0 ovn_controller[146846]: 2026-02-28T10:10:09Z|00472|binding|INFO|Setting lport b74d22f2-fc92-4f30-b403-6cecd975b301 down in Southbound
Feb 28 10:10:09 compute-0 ovn_controller[146846]: 2026-02-28T10:10:09Z|00473|binding|INFO|Removing iface tapb74d22f2-fc ovn-installed in OVS
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.247 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:e6:f5 10.100.0.5'], port_security=['fa:16:3e:7f:e6:f5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9633f2a0-e94b-40f8-a1c1-6a9ebf9458be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b74d22f2-fc92-4f30-b403-6cecd975b301) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.249 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b74d22f2-fc92-4f30-b403-6cecd975b301 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.251 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.253 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cab45061-ff39-4533-a322-73d3cd2add79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.253 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore
Feb 28 10:10:09 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Deactivated successfully.
Feb 28 10:10:09 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000039.scope: Consumed 12.409s CPU time.
Feb 28 10:10:09 compute-0 systemd-machined[209480]: Machine qemu-64-instance-00000039 terminated.
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.466 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance shutdown successfully after 13 seconds.
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.471 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.472 243456 DEBUG nova.objects.instance [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:09 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : haproxy version is 2.8.14-c23fe91
Feb 28 10:10:09 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [NOTICE]   (292129) : path to executable is /usr/sbin/haproxy
Feb 28 10:10:09 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [WARNING]  (292129) : Exiting Master process...
Feb 28 10:10:09 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [ALERT]    (292129) : Current worker (292131) exited with code 143 (Terminated)
Feb 28 10:10:09 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[292125]: [WARNING]  (292129) : All workers exited. Exiting... (0)
Feb 28 10:10:09 compute-0 systemd[1]: libpod-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope: Deactivated successfully.
Feb 28 10:10:09 compute-0 podman[292296]: 2026-02-28 10:10:09.506440442 +0000 UTC m=+0.166301072 container died 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6-userdata-shm.mount: Deactivated successfully.
Feb 28 10:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e74bf99b9c75f9c89a25a756913a23bd14227a4210479c5fb175cf03370893-merged.mount: Deactivated successfully.
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.636 243456 DEBUG nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.637 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.637 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 DEBUG oslo_concurrency.lockutils [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 DEBUG nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.638 243456 WARNING nova.compute.manager [req-0048cee6-eccf-4e59-8fdd-f5457519817f req-2160a2cb-3183-4841-ba89-777cd36f1f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-unplugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state shelving.
Feb 28 10:10:09 compute-0 podman[292296]: 2026-02-28 10:10:09.653375019 +0000 UTC m=+0.313235609 container cleanup 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:10:09 compute-0 systemd[1]: libpod-conmon-4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6.scope: Deactivated successfully.
Feb 28 10:10:09 compute-0 podman[292336]: 2026-02-28 10:10:09.735299129 +0000 UTC m=+0.058905145 container remove 4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cfed5200-4be8-4b7f-a6e5-a31461254449]: (4, ('Sat Feb 28 10:10:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6)\n4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6\nSat Feb 28 10:10:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6)\n4f262631ba1337cc77e18d8bfc8f240646371774ad86c8c307d1f0237a69e0a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c67e622f-02d6-4155-9131-1b2acefc4dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.744 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 10:10:09 compute-0 nova_compute[243452]: 2026-02-28 10:10:09.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[039ea8a4-efc2-4097-9444-13359e21f3bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.770 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4dd583-9bca-44a8-800a-7aaddc280d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4db91db3-61af-4869-b597-bc41f4d16782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[978b1b53-330e-4faa-9603-cd6e2eec38ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490019, 'reachable_time': 25468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292355, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.788 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:10:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:09.788 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3da9a8-1887-44b4-9ceb-3acb581aa590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 232 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 639 KiB/s rd, 3.7 MiB/s wr, 191 op/s
Feb 28 10:10:10 compute-0 nova_compute[243452]: 2026-02-28 10:10:10.128 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Beginning cold snapshot process
Feb 28 10:10:10 compute-0 ceph-mon[76304]: osdmap e201: 3 total, 3 up, 3 in
Feb 28 10:10:10 compute-0 nova_compute[243452]: 2026-02-28 10:10:10.269 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:10:10 compute-0 nova_compute[243452]: 2026-02-28 10:10:10.733 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] creating snapshot(b9e3e9b3926241409c368dfbd5b637e4) on rbd image(9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:10:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb 28 10:10:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb 28 10:10:11 compute-0 nova_compute[243452]: 2026-02-28 10:10:11.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:11 compute-0 nova_compute[243452]: 2026-02-28 10:10:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:11 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb 28 10:10:11 compute-0 ceph-mon[76304]: pgmap v1292: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 232 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 639 KiB/s rd, 3.7 MiB/s wr, 191 op/s
Feb 28 10:10:11 compute-0 nova_compute[243452]: 2026-02-28 10:10:11.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:11 compute-0 nova_compute[243452]: 2026-02-28 10:10:11.548 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] cloning vms/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk@b9e3e9b3926241409c368dfbd5b637e4 to images/342d07d0-ce6e-40be-938e-c99ef1da978f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:10:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 28 10:10:12 compute-0 nova_compute[243452]: 2026-02-28 10:10:12.266 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] flattening images/342d07d0-ce6e-40be-938e-c99ef1da978f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:10:12 compute-0 ceph-mon[76304]: osdmap e202: 3 total, 3 up, 3 in
Feb 28 10:10:12 compute-0 ceph-mon[76304]: pgmap v1294: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 28 10:10:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.429 243456 DEBUG nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.429 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG oslo_concurrency.lockutils [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.430 243456 DEBUG nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] No waiting events found dispatching network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.431 243456 WARNING nova.compute.manager [req-d737b28b-bcb3-4d68-812e-c7942fd2f181 req-cd7ab193-97d3-4287-bc25-40e7ecae139d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received unexpected event network-vif-plugged-b74d22f2-fc92-4f30-b403-6cecd975b301 for instance with vm_state active and task_state shelving_image_uploading.
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.450 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] removing snapshot(b9e3e9b3926241409c368dfbd5b637e4) on rbd image(9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb 28 10:10:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb 28 10:10:13 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb 28 10:10:13 compute-0 nova_compute[243452]: 2026-02-28 10:10:13.989 243456 DEBUG nova.storage.rbd_utils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] creating snapshot(snap) on rbd image(342d07d0-ce6e-40be-938e-c99ef1da978f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:10:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 257 KiB/s wr, 105 op/s
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.093 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.093 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.121 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.200 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.200 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.207 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.208 243456 INFO nova.compute.claims [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.328 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 28 10:10:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048594996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.919 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.954 243456 DEBUG nova.compute.provider_tree [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb 28 10:10:14 compute-0 nova_compute[243452]: 2026-02-28 10:10:14.973 243456 DEBUG nova.scheduler.client.report [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:14 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.002 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.003 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:10:15 compute-0 ceph-mon[76304]: osdmap e203: 3 total, 3 up, 3 in
Feb 28 10:10:15 compute-0 ceph-mon[76304]: pgmap v1296: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 233 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 257 KiB/s wr, 105 op/s
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.079 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.080 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.110 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.137 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.269 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.271 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.272 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating image(s)
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.306 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.343 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.380 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.386 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.430 243456 DEBUG nova.policy [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.435 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.436 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.437 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.460 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.461 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.462 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.462 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.497 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.503 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30a5d845-ce28-490a-afe8-3b7552f02c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.541 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.542 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.542 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.543 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:10:15 compute-0 nova_compute[243452]: 2026-02-28 10:10:15.543 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.035 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273401.0344145, 226b6da4-15c9-4d10-ab4d-194b313446f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.036 243456 INFO nova.compute.manager [-] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] VM Stopped (Lifecycle Event)
Feb 28 10:10:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 304 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 254 op/s
Feb 28 10:10:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb 28 10:10:16 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.063 243456 DEBUG nova.compute.manager [None req-6da9b441-7083-400a-98b5-941a01633493 - - - - - -] [instance: 226b6da4-15c9-4d10-ab4d-194b313446f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:16 compute-0 sshd-session[292520]: Invalid user solana from 45.148.10.240 port 59340
Feb 28 10:10:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3048594996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:16 compute-0 ceph-mon[76304]: osdmap e204: 3 total, 3 up, 3 in
Feb 28 10:10:16 compute-0 ceph-mon[76304]: osdmap e205: 3 total, 3 up, 3 in
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.129 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30a5d845-ce28-490a-afe8-3b7552f02c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551922288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.210 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.216 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:10:16 compute-0 sshd-session[292520]: Connection closed by invalid user solana 45.148.10.240 port 59340 [preauth]
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.330 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.331 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.382 243456 DEBUG nova.objects.instance [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.407 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ensure instance console log exists: /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.408 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.409 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.432 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Successfully created port: 037eb744-3024-4a3d-b52c-894abe1cbac8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.547 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.548 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3903MB free_disk=59.94226722791791GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.548 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.549 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.623 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.624 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:10:16 compute-0 nova_compute[243452]: 2026-02-28 10:10:16.673 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:17 compute-0 ceph-mon[76304]: pgmap v1298: 305 pgs: 305 active+clean; 304 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 254 op/s
Feb 28 10:10:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/551922288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.156 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Snapshot image upload complete
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.157 243456 DEBUG nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894001400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.189 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.195 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.212 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.220 243456 INFO nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Shelve offloading
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.231 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.233 243456 DEBUG nova.compute.manager [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.235 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.236 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:17 compute-0 nova_compute[243452]: 2026-02-28 10:10:17.238 243456 DEBUG nova.network.neutron [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:10:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 28 10:10:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb 28 10:10:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb 28 10:10:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 317 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 12 MiB/s wr, 274 op/s
Feb 28 10:10:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2894001400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:18 compute-0 ceph-mon[76304]: osdmap e206: 3 total, 3 up, 3 in
Feb 28 10:10:18 compute-0 nova_compute[243452]: 2026-02-28 10:10:18.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.115 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.116 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.116 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.133 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.133 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:19 compute-0 ceph-mon[76304]: pgmap v1301: 305 pgs: 305 active+clean; 317 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 12 MiB/s wr, 274 op/s
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.342 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Successfully updated port: 037eb744-3024-4a3d-b52c-894abe1cbac8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.367 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.455 243456 DEBUG nova.compute.manager [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.455 243456 DEBUG nova.compute.manager [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.456 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.545 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273404.5437534, 9098ebf3-e36c-492b-9c50-dc6f0078794d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.545 243456 INFO nova.compute.manager [-] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] VM Stopped (Lifecycle Event)
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.588 243456 DEBUG nova.compute.manager [None req-405d5950-d170-46b0-a3bc-50acc69f4e5b - - - - - -] [instance: 9098ebf3-e36c-492b-9c50-dc6f0078794d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.648 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.801 243456 DEBUG nova.network.neutron [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.817 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:10:19 compute-0 nova_compute[243452]: 2026-02-28 10:10:19.822 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 346 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 233 op/s
Feb 28 10:10:21 compute-0 ceph-mon[76304]: pgmap v1302: 305 pgs: 305 active+clean; 346 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 233 op/s
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.547 243456 DEBUG nova.network.neutron [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.571 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.572 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance network_info: |[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.572 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.573 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.577 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start _get_guest_xml network_info=[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.583 243456 WARNING nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.657 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.658 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.666 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.667 243456 DEBUG nova.virt.libvirt.host [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.668 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.668 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.669 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.670 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.670 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.671 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.672 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.672 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.673 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.673 243456 DEBUG nova.virt.hardware [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.678 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.922 243456 INFO nova.virt.libvirt.driver [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Instance destroyed successfully.
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.922 243456 DEBUG nova.objects.instance [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.938 243456 DEBUG nova.virt.libvirt.vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-980989576',display_name='tempest-DeleteServersTestJSON-server-980989576',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-980989576',id=57,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:09:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-kebdyvyk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member',shelved_at='2026-02-28T10:10:17.157658',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='342d07d0-ce6e-40be-938e-c99ef1da978f'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:10:10Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9633f2a0-e94b-40f8-a1c1-6a9ebf9458be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.938 243456 DEBUG nova.network.os_vif_util [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.939 243456 DEBUG nova.network.os_vif_util [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.940 243456 DEBUG os_vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.942 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb74d22f2-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:10:21 compute-0 nova_compute[243452]: 2026-02-28 10:10:21.953 243456 INFO os_vif [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:e6:f5,bridge_name='br-int',has_traffic_filtering=True,id=b74d22f2-fc92-4f30-b403-6cecd975b301,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb74d22f2-fc')
Feb 28 10:10:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 358 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.0 MiB/s wr, 180 op/s
Feb 28 10:10:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:10:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314854447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.242 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.271 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/314854447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.274 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.312 243456 DEBUG nova.compute.manager [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Received event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.312 243456 DEBUG nova.compute.manager [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing instance network info cache due to event network-changed-b74d22f2-fc92-4f30-b403-6cecd975b301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.313 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.669 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deleting instance files /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_del
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.670 243456 INFO nova.virt.libvirt.driver [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Deletion of /var/lib/nova/instances/9633f2a0-e94b-40f8-a1c1-6a9ebf9458be_del complete
Feb 28 10:10:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:10:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881119327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.782 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.784 243456 DEBUG nova.virt.libvirt.vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.784 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.785 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.786 243456 DEBUG nova.objects.instance [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.808 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <uuid>30a5d845-ce28-490a-afe8-3b7552f02c63</uuid>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <name>instance-0000003a</name>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherB-server-655402139</nova:name>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:10:21</nova:creationTime>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <nova:port uuid="037eb744-3024-4a3d-b52c-894abe1cbac8">
Feb 28 10:10:22 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <system>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="serial">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="uuid">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </system>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <os>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </os>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <features>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </features>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk">
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config">
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </source>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:10:22 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:32:d3:6f"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <target dev="tap037eb744-30"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log" append="off"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <video>
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </video>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:10:22 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:10:22 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:10:22 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:10:22 compute-0 nova_compute[243452]: </domain>
Feb 28 10:10:22 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.810 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Preparing to wait for external event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.811 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.812 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.812 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.813 243456 DEBUG nova.virt.libvirt.vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.814 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.815 243456 DEBUG nova.network.os_vif_util [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.816 243456 DEBUG os_vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.818 243456 INFO nova.scheduler.client.report [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.823 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.824 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.828 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap037eb744-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.829 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap037eb744-30, col_values=(('external_ids', {'iface-id': '037eb744-3024-4a3d-b52c-894abe1cbac8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:d3:6f', 'vm-uuid': '30a5d845-ce28-490a-afe8-3b7552f02c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:22 compute-0 NetworkManager[49805]: <info>  [1772273422.8319] manager: (tap037eb744-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.838 243456 INFO os_vif [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.876 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": "b74d22f2-fc92-4f30-b403-6cecd975b301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.886 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.886 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.903 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.904 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Refreshing network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.906 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.907 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.907 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.912 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.912 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.913 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:32:d3:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.913 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Using config drive
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.939 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:22 compute-0 nova_compute[243452]: 2026-02-28 10:10:22.982 243456 DEBUG oslo_concurrency.processutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 28 10:10:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb 28 10:10:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb 28 10:10:23 compute-0 ceph-mon[76304]: pgmap v1303: 305 pgs: 305 active+clean; 358 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.0 MiB/s wr, 180 op/s
Feb 28 10:10:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1881119327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:23 compute-0 ceph-mon[76304]: osdmap e207: 3 total, 3 up, 3 in
Feb 28 10:10:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258300255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.532 243456 DEBUG oslo_concurrency.processutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.539 243456 DEBUG nova.compute.provider_tree [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.562 243456 DEBUG nova.scheduler.client.report [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.596 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.656 243456 DEBUG oslo_concurrency.lockutils [None req-d0c6c59f-57ba-4d04-b191-b5fe55ed0cb9 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 27.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:23 compute-0 nova_compute[243452]: 2026-02-28 10:10:23.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 318 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 3.1 MiB/s wr, 91 op/s
Feb 28 10:10:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/258300255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:24 compute-0 ceph-mon[76304]: pgmap v1305: 305 pgs: 305 active+clean; 318 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 632 KiB/s rd, 3.1 MiB/s wr, 91 op/s
Feb 28 10:10:24 compute-0 nova_compute[243452]: 2026-02-28 10:10:24.466 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273409.4650848, 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:24 compute-0 nova_compute[243452]: 2026-02-28 10:10:24.466 243456 INFO nova.compute.manager [-] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] VM Stopped (Lifecycle Event)
Feb 28 10:10:24 compute-0 nova_compute[243452]: 2026-02-28 10:10:24.490 243456 DEBUG nova.compute.manager [None req-4f03788f-3437-42ac-9e9c-bfccd643ccf4 - - - - - -] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:24 compute-0 nova_compute[243452]: 2026-02-28 10:10:24.876 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating config drive at /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config
Feb 28 10:10:24 compute-0 nova_compute[243452]: 2026-02-28 10:10:24.882 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkv9bib_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.030 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkv9bib_u" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.056 243456 DEBUG nova.storage.rbd_utils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.059 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:25 compute-0 sudo[292897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:10:25 compute-0 sudo[292897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:25 compute-0 sudo[292897]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:25 compute-0 sudo[292922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:10:25 compute-0 sudo[292922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.404 243456 DEBUG oslo_concurrency.processutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.405 243456 INFO nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting local config drive /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config because it was imported into RBD.
Feb 28 10:10:25 compute-0 kernel: tap037eb744-30: entered promiscuous mode
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.4805] manager: (tap037eb744-30): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 ovn_controller[146846]: 2026-02-28T10:10:25Z|00474|binding|INFO|Claiming lport 037eb744-3024-4a3d-b52c-894abe1cbac8 for this chassis.
Feb 28 10:10:25 compute-0 ovn_controller[146846]: 2026-02-28T10:10:25Z|00475|binding|INFO|037eb744-3024-4a3d-b52c-894abe1cbac8: Claiming fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.499 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.502 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.505 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.518 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89e7a564-86f3-4d51-99a6-91ab7ed6e029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.521 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41b22e92-d1 in ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:10:25 compute-0 ovn_controller[146846]: 2026-02-28T10:10:25Z|00476|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 ovn-installed in OVS
Feb 28 10:10:25 compute-0 ovn_controller[146846]: 2026-02-28T10:10:25Z|00477|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 up in Southbound
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.524 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41b22e92-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc1f4b1-758e-49c6-be7f-417598ddfdc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.526 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f83524de-3d25-45d9-ac23-551841e9b902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 systemd-udevd[292960]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.541 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a039e34b-cb65-4652-b7cb-e01a2178621f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.5449] device (tap037eb744-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.5460] device (tap037eb744-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:10:25 compute-0 systemd-machined[209480]: New machine qemu-65-instance-0000003a.
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a92f508-c053-46af-8ab8-07c024645a2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000003a.
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.592 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00956693-5b22-4fad-a48b-48ce67c573c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.6008] manager: (tap41b22e92-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/221)
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[739a6435-ac58-4eaf-8bc4-7f7cc7e97df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.636 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdc668c-9247-455c-adab-cda09490a6d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.641 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c704648b-0410-4df6-9f04-0e9c4b9d1826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.6621] device (tap41b22e92-d0): carrier: link connected
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.667 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[64fbc81c-58e1-479b-999f-243b4eb10f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7080ea-4152-4d0c-9ca7-d0e2da9329f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 23633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293007, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89660529-e9df-4bdc-8559-a82fc343bc29]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:1fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493294, 'tstamp': 493294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293009, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b14af751-25ee-4099-a52e-ab6a0041cf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 23633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293012, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d33879fd-e982-4895-b314-1943775557bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97e2ecbd-70b3-4c9d-b66b-d7b3d551eb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.797 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:25 compute-0 kernel: tap41b22e92-d0: entered promiscuous mode
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 NetworkManager[49805]: <info>  [1772273425.8015] manager: (tap41b22e92-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.803 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:25 compute-0 ovn_controller[146846]: 2026-02-28T10:10:25Z|00478|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.805 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d299c3-d477-48ff-a93d-9d1bc606d62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.806 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/41b22e92-d251-48dd-9bf8-8f38cbd749fa.pid.haproxy
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:10:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:25.807 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'env', 'PROCESS_TAG=haproxy-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41b22e92-d251-48dd-9bf8-8f38cbd749fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:25 compute-0 sudo[292922]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:10:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.900 243456 DEBUG nova.compute.manager [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.900 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG oslo_concurrency.lockutils [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.901 243456 DEBUG nova.compute.manager [req-9fad18f8-4ceb-41b7-93d3-902d9edd64d1 req-0ec35797-a9dc-4158-b458-a9ca619efa81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Processing event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:10:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:10:25 compute-0 sudo[293036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:10:25 compute-0 sudo[293036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:25 compute-0 sudo[293036]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.965 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.966 243456 DEBUG nova.network.neutron [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:25 compute-0 nova_compute[243452]: 2026-02-28 10:10:25.994 243456 DEBUG oslo_concurrency.lockutils [req-aff2c63a-8a56-43db-956c-778540843062 req-40ad3852-8bcc-41bb-bac9-ace059a4c0cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:26 compute-0 sudo[293061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:10:26 compute-0 sudo[293061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.138 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updated VIF entry in instance network info cache for port b74d22f2-fc92-4f30-b403-6cecd975b301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.140 243456 DEBUG nova.network.neutron [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9633f2a0-e94b-40f8-a1c1-6a9ebf9458be] Updating instance_info_cache with network_info: [{"id": "b74d22f2-fc92-4f30-b403-6cecd975b301", "address": "fa:16:3e:7f:e6:f5", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": null, "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb74d22f2-fc", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.164 243456 DEBUG oslo_concurrency.lockutils [req-a67fad85-0f4c-4ac4-9ddc-9ad692bc7007 req-ff290743-f618-48c6-8c63-67c6b87212f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9633f2a0-e94b-40f8-a1c1-6a9ebf9458be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:26 compute-0 podman[293108]: 2026-02-28 10:10:26.190808991 +0000 UTC m=+0.067341121 container create 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:10:26 compute-0 systemd[1]: Started libpod-conmon-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope.
Feb 28 10:10:26 compute-0 podman[293108]: 2026-02-28 10:10:26.161793066 +0000 UTC m=+0.038325226 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0f80b5e09dc8495c28b54de22f23404b9f324ad0e098a2a59a53d201acefe9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 podman[293108]: 2026-02-28 10:10:26.289163833 +0000 UTC m=+0.165695993 container init 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:10:26 compute-0 podman[293108]: 2026-02-28 10:10:26.29937308 +0000 UTC m=+0.175905200 container start 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:10:26 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : New worker (293190) forked
Feb 28 10:10:26 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : Loading success.
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.336495933 +0000 UTC m=+0.054244105 container create facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:10:26 compute-0 systemd[1]: Started libpod-conmon-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope.
Feb 28 10:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.31431841 +0000 UTC m=+0.032066632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.417 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.418 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.416621, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Started (Lifecycle Event)
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.421412248 +0000 UTC m=+0.139160450 container init facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.422 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.427 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance spawned successfully.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.427 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.431847701 +0000 UTC m=+0.149595893 container start facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.435611126 +0000 UTC m=+0.153359308 container attach facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:10:26 compute-0 exciting_benz[293207]: 167 167
Feb 28 10:10:26 compute-0 systemd[1]: libpod-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope: Deactivated successfully.
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.442590662 +0000 UTC m=+0.160338834 container died facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.460 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.467 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.468 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.469 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.470 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.472 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-55832f87349a388083cdb220cf182f96094fd417cef162df011da3d575b45b33-merged.mount: Deactivated successfully.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.473 243456 DEBUG nova.virt.libvirt.driver [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.482 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:10:26 compute-0 podman[293157]: 2026-02-28 10:10:26.495150849 +0000 UTC m=+0.212899021 container remove facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:10:26 compute-0 systemd[1]: libpod-conmon-facf7a643a3bf7ede1cdc0ebf809028d94b5c0f5f058831f6f5098c45d41efc6.scope: Deactivated successfully.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.530 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.530 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.417748, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.531 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Paused (Lifecycle Event)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.567 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.571 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273426.421398, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.571 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Resumed (Lifecycle Event)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.579 243456 INFO nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 11.31 seconds to spawn the instance on the hypervisor.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.580 243456 DEBUG nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.627 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.631 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:10:26 compute-0 podman[293230]: 2026-02-28 10:10:26.645333547 +0000 UTC m=+0.045228202 container create c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.666 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:10:26 compute-0 systemd[1]: Started libpod-conmon-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope.
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.692 243456 INFO nova.compute.manager [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 12.52 seconds to build instance.
Feb 28 10:10:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 podman[293230]: 2026-02-28 10:10:26.625668644 +0000 UTC m=+0.025563339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:26 compute-0 nova_compute[243452]: 2026-02-28 10:10:26.733 243456 DEBUG oslo_concurrency.lockutils [None req-a4177f37-ca2e-470f-8092-c8753d2199b0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:26 compute-0 podman[293230]: 2026-02-28 10:10:26.768817785 +0000 UTC m=+0.168712540 container init c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:10:26 compute-0 podman[293230]: 2026-02-28 10:10:26.780308027 +0000 UTC m=+0.180202682 container start c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:10:26 compute-0 podman[293230]: 2026-02-28 10:10:26.784559587 +0000 UTC m=+0.184454282 container attach c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:10:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb 28 10:10:26 compute-0 ceph-mon[76304]: pgmap v1306: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Feb 28 10:10:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb 28 10:10:26 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb 28 10:10:27 compute-0 pensive_ptolemy[293247]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:10:27 compute-0 pensive_ptolemy[293247]: --> All data devices are unavailable
Feb 28 10:10:27 compute-0 systemd[1]: libpod-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope: Deactivated successfully.
Feb 28 10:10:27 compute-0 conmon[293247]: conmon c531ea07da39221a5a63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope/container/memory.events
Feb 28 10:10:27 compute-0 podman[293230]: 2026-02-28 10:10:27.301822184 +0000 UTC m=+0.701716839 container died c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a2b17bbd0659d33c5e7afcf0e9109f72f2983de49cf4a50ec84d89fcf12b905-merged.mount: Deactivated successfully.
Feb 28 10:10:27 compute-0 podman[293230]: 2026-02-28 10:10:27.351922521 +0000 UTC m=+0.751817186 container remove c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ptolemy, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:10:27 compute-0 systemd[1]: libpod-conmon-c531ea07da39221a5a63cbad2043a310ce18d8093975ec5ac853b863b3f0051d.scope: Deactivated successfully.
Feb 28 10:10:27 compute-0 sudo[293061]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:27 compute-0 sudo[293280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:10:27 compute-0 sudo[293280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:27 compute-0 sudo[293280]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:27 compute-0 sudo[293305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:10:27 compute-0 sudo[293305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:27 compute-0 nova_compute[243452]: 2026-02-28 10:10:27.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.856265636 +0000 UTC m=+0.059682867 container create 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:10:27 compute-0 systemd[1]: Started libpod-conmon-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope.
Feb 28 10:10:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.828512706 +0000 UTC m=+0.031929987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.937845887 +0000 UTC m=+0.141263128 container init 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.948207278 +0000 UTC m=+0.151624509 container start 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.952828568 +0000 UTC m=+0.156245859 container attach 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:10:27 compute-0 ceph-mon[76304]: osdmap e208: 3 total, 3 up, 3 in
Feb 28 10:10:27 compute-0 zen_noether[293358]: 167 167
Feb 28 10:10:27 compute-0 systemd[1]: libpod-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope: Deactivated successfully.
Feb 28 10:10:27 compute-0 podman[293342]: 2026-02-28 10:10:27.959451204 +0000 UTC m=+0.162868435 container died 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 28 10:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d6d04e1b71e8a5442d850b2d21f6d4afadb866fc6b93a09a9ff7c93382e6c61-merged.mount: Deactivated successfully.
Feb 28 10:10:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:28 compute-0 podman[293342]: 2026-02-28 10:10:28.002710489 +0000 UTC m=+0.206127710 container remove 1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:10:28 compute-0 systemd[1]: libpod-conmon-1d48a191c9005b3e2d16e505064b17f3cade0d77eb03e713f0a80e06d98b734e.scope: Deactivated successfully.
Feb 28 10:10:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 271 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 425 KiB/s wr, 98 op/s
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.161285752 +0000 UTC m=+0.042367031 container create 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:10:28 compute-0 systemd[1]: Started libpod-conmon-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope.
Feb 28 10:10:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.143853273 +0000 UTC m=+0.024934622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.252984238 +0000 UTC m=+0.134065547 container init 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.263721729 +0000 UTC m=+0.144803028 container start 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.268108643 +0000 UTC m=+0.149189942 container attach 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.316 243456 DEBUG nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.318 243456 DEBUG oslo_concurrency.lockutils [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.319 243456 DEBUG nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.319 243456 WARNING nova.compute.manager [req-cd08ce76-97c1-443d-8c41-5ecf4b9acf36 req-693992c8-c065-4bff-9760-1896ee94231d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state None.
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]: {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     "0": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "devices": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "/dev/loop3"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             ],
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_name": "ceph_lv0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_size": "21470642176",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "name": "ceph_lv0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "tags": {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_name": "ceph",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.crush_device_class": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.encrypted": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.objectstore": "bluestore",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_id": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.vdo": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.with_tpm": "0"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             },
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "vg_name": "ceph_vg0"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         }
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     ],
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     "1": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "devices": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "/dev/loop4"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             ],
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_name": "ceph_lv1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_size": "21470642176",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "name": "ceph_lv1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "tags": {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_name": "ceph",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.crush_device_class": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.encrypted": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.objectstore": "bluestore",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_id": "1",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.vdo": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.with_tpm": "0"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             },
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "vg_name": "ceph_vg1"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         }
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     ],
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     "2": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "devices": [
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "/dev/loop5"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             ],
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_name": "ceph_lv2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_size": "21470642176",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "name": "ceph_lv2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "tags": {
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.cluster_name": "ceph",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.crush_device_class": "",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.encrypted": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.objectstore": "bluestore",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osd_id": "2",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.vdo": "0",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:                 "ceph.with_tpm": "0"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             },
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "type": "block",
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:             "vg_name": "ceph_vg2"
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:         }
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]:     ]
Feb 28 10:10:28 compute-0 pedantic_nightingale[293400]: }
Feb 28 10:10:28 compute-0 systemd[1]: libpod-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope: Deactivated successfully.
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.602963117 +0000 UTC m=+0.484044416 container died 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4e77349caa4feb03cd0c453c46e6c31d4af8b99f37abe1c48e97176003d3c72-merged.mount: Deactivated successfully.
Feb 28 10:10:28 compute-0 podman[293383]: 2026-02-28 10:10:28.651595643 +0000 UTC m=+0.532676932 container remove 66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:10:28 compute-0 nova_compute[243452]: 2026-02-28 10:10:28.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:28 compute-0 systemd[1]: libpod-conmon-66d0a90907d609e0df4098e82c758b5345bda5f9a70a1fcdea4cfd8ad9690ffe.scope: Deactivated successfully.
Feb 28 10:10:28 compute-0 sudo[293305]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:28 compute-0 sudo[293421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:10:28 compute-0 sudo[293421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:28 compute-0 sudo[293421]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:28 compute-0 sudo[293446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:10:28 compute-0 sudo[293446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:28 compute-0 ceph-mon[76304]: pgmap v1308: 305 pgs: 305 active+clean; 271 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 425 KiB/s wr, 98 op/s
Feb 28 10:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:10:29
Feb 28 10:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', 'images', 'vms']
Feb 28 10:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.156173594 +0000 UTC m=+0.053491643 container create 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:10:29 compute-0 systemd[1]: Started libpod-conmon-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope.
Feb 28 10:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.124511675 +0000 UTC m=+0.021829744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.387771729 +0000 UTC m=+0.285089858 container init 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.396787212 +0000 UTC m=+0.294105251 container start 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:10:29 compute-0 flamboyant_hofstadter[293500]: 167 167
Feb 28 10:10:29 compute-0 systemd[1]: libpod-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope: Deactivated successfully.
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.418415369 +0000 UTC m=+0.315733508 container attach 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.418942534 +0000 UTC m=+0.316260613 container died 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f147d2fdbb015e61c847655d9eac2c8009ba138afdbbe5168e4b1d3a93063203-merged.mount: Deactivated successfully.
Feb 28 10:10:29 compute-0 podman[293484]: 2026-02-28 10:10:29.460918833 +0000 UTC m=+0.358236892 container remove 7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:10:29 compute-0 systemd[1]: libpod-conmon-7e0695e28b7a076a824569482db11dc9215a8fe376678f7a55d652762ec9c10b.scope: Deactivated successfully.
Feb 28 10:10:29 compute-0 podman[293525]: 2026-02-28 10:10:29.61179658 +0000 UTC m=+0.041777294 container create a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:10:29 compute-0 systemd[1]: Started libpod-conmon-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope.
Feb 28 10:10:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:29 compute-0 podman[293525]: 2026-02-28 10:10:29.591457729 +0000 UTC m=+0.021438463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:10:29 compute-0 podman[293525]: 2026-02-28 10:10:29.699871594 +0000 UTC m=+0.129852338 container init a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:10:29 compute-0 podman[293525]: 2026-02-28 10:10:29.707754485 +0000 UTC m=+0.137735239 container start a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:10:29 compute-0 podman[293525]: 2026-02-28 10:10:29.712794846 +0000 UTC m=+0.142775580 container attach a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 245 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 121 op/s
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:10:30 compute-0 lvm[293622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:10:30 compute-0 lvm[293622]: VG ceph_vg0 finished
Feb 28 10:10:30 compute-0 lvm[293623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:10:30 compute-0 lvm[293623]: VG ceph_vg1 finished
Feb 28 10:10:30 compute-0 lvm[293625]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:10:30 compute-0 lvm[293625]: VG ceph_vg2 finished
Feb 28 10:10:30 compute-0 zen_gauss[293543]: {}
Feb 28 10:10:30 compute-0 systemd[1]: libpod-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Deactivated successfully.
Feb 28 10:10:30 compute-0 systemd[1]: libpod-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Consumed 1.248s CPU time.
Feb 28 10:10:30 compute-0 podman[293525]: 2026-02-28 10:10:30.566792311 +0000 UTC m=+0.996773085 container died a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8092e18d75816668f9bc4767930d3511af1ad37bb26d652f2f3e6f42ea6218a9-merged.mount: Deactivated successfully.
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:10:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:10:30 compute-0 podman[293525]: 2026-02-28 10:10:30.618684408 +0000 UTC m=+1.048665122 container remove a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_gauss, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:10:30 compute-0 systemd[1]: libpod-conmon-a1e8d58cb44a80a82163bc6ffc940e736ae282f5c9e18d41cfd12b9fdc6cf2bb.scope: Deactivated successfully.
Feb 28 10:10:30 compute-0 sudo[293446]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:10:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:10:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:30 compute-0 sudo[293639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:10:30 compute-0 sudo[293639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:10:30 compute-0 sudo[293639]: pam_unix(sudo:session): session closed for user root
Feb 28 10:10:30 compute-0 NetworkManager[49805]: <info>  [1772273430.9433] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Feb 28 10:10:30 compute-0 ovn_controller[146846]: 2026-02-28T10:10:30Z|00479|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:10:30 compute-0 NetworkManager[49805]: <info>  [1772273430.9443] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Feb 28 10:10:30 compute-0 nova_compute[243452]: 2026-02-28 10:10:30.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:30 compute-0 ovn_controller[146846]: 2026-02-28T10:10:30Z|00480|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:10:30 compute-0 nova_compute[243452]: 2026-02-28 10:10:30.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:30 compute-0 nova_compute[243452]: 2026-02-28 10:10:30.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:31 compute-0 ceph-mon[76304]: pgmap v1309: 305 pgs: 305 active+clean; 245 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 121 op/s
Feb 28 10:10:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:10:31 compute-0 podman[293666]: 2026-02-28 10:10:31.126328095 +0000 UTC m=+0.063255887 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 10:10:31 compute-0 podman[293665]: 2026-02-28 10:10:31.162324766 +0000 UTC m=+0.099655230 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:10:31 compute-0 nova_compute[243452]: 2026-02-28 10:10:31.482 243456 DEBUG nova.compute.manager [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:31 compute-0 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG nova.compute.manager [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:10:31 compute-0 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:31 compute-0 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:31 compute-0 nova_compute[243452]: 2026-02-28 10:10:31.483 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:10:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 22 KiB/s wr, 144 op/s
Feb 28 10:10:32 compute-0 nova_compute[243452]: 2026-02-28 10:10:32.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb 28 10:10:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb 28 10:10:33 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb 28 10:10:33 compute-0 ceph-mon[76304]: pgmap v1310: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 22 KiB/s wr, 144 op/s
Feb 28 10:10:33 compute-0 ceph-mon[76304]: osdmap e209: 3 total, 3 up, 3 in
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.713 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.714 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.738 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.836 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.837 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.851 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:10:33 compute-0 nova_compute[243452]: 2026-02-28 10:10:33.852 243456 INFO nova.compute.claims [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:10:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 147 op/s
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.256 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.354 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.355 243456 DEBUG nova.network.neutron [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.374 243456 DEBUG oslo_concurrency.lockutils [req-04b5d317-f409-42ef-955f-338446e094f1 req-7b6aa543-422a-4249-9188-0905080cc85e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:10:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224320205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.839 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.845 243456 DEBUG nova.compute.provider_tree [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.863 243456 DEBUG nova.scheduler.client.report [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.884 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.885 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.930 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.931 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.956 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:10:34 compute-0 nova_compute[243452]: 2026-02-28 10:10:34.978 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.075 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.077 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.077 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating image(s)
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.101 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.130 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.154 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.158 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:35 compute-0 ceph-mon[76304]: pgmap v1312: 305 pgs: 305 active+clean; 200 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 147 op/s
Feb 28 10:10:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1224320205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.229 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.231 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.232 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.232 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.256 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.261 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.482 243456 DEBUG nova.policy [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.562 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.634 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.742 243456 DEBUG nova.objects.instance [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.759 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.759 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Ensure instance console log exists: /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.760 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.760 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:35 compute-0 nova_compute[243452]: 2026-02-28 10:10:35.761 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 210 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 298 KiB/s wr, 130 op/s
Feb 28 10:10:37 compute-0 nova_compute[243452]: 2026-02-28 10:10:37.175 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Successfully created port: 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:10:37 compute-0 ceph-mon[76304]: pgmap v1313: 305 pgs: 305 active+clean; 210 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 298 KiB/s wr, 130 op/s
Feb 28 10:10:37 compute-0 nova_compute[243452]: 2026-02-28 10:10:37.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 243 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.465 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Successfully updated port: 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.490 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.491 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.492 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG nova.compute.manager [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-changed-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG nova.compute.manager [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Refreshing instance network info cache due to event network-changed-188d9948-e6ef-4c09-a2ff-5b07d0f93779. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.609 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:38 compute-0 nova_compute[243452]: 2026-02-28 10:10:38.717 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:10:39 compute-0 ovn_controller[146846]: 2026-02-28T10:10:39Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 10:10:39 compute-0 ovn_controller[146846]: 2026-02-28T10:10:39Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 10:10:39 compute-0 ceph-mon[76304]: pgmap v1314: 305 pgs: 305 active+clean; 243 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.873 243456 DEBUG nova.network.neutron [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance network_info: |[{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.900 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Refreshing network info cache for port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.904 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start _get_guest_xml network_info=[{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.909 243456 WARNING nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.921 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.921 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.927 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.928 243456 DEBUG nova.virt.libvirt.host [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.928 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.929 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.930 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.931 243456 DEBUG nova.virt.hardware [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:10:39 compute-0 nova_compute[243452]: 2026-02-28 10:10:39.934 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 266 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 115 op/s
Feb 28 10:10:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:10:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217312333' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:40 compute-0 nova_compute[243452]: 2026-02-28 10:10:40.523 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:40 compute-0 nova_compute[243452]: 2026-02-28 10:10:40.545 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:40 compute-0 nova_compute[243452]: 2026-02-28 10:10:40.550 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010363606648371541 of space, bias 1.0, pg target 0.3109081994511462 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024934987145615343 of space, bias 1.0, pg target 0.7480496143684603 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.894500714536989e-07 of space, bias 4.0, pg target 0.0009473400857444386 quantized to 16 (current 16)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:10:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:10:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:10:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41945769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.111 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.113 243456 DEBUG nova.virt.libvirt.vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-5158
86650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:35Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.114 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.115 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.117 243456 DEBUG nova.objects.instance [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.176 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <uuid>9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</uuid>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <name>instance-0000003b</name>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:name>tempest-DeleteServersTestJSON-server-389107082</nova:name>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:10:39</nova:creationTime>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <nova:port uuid="188d9948-e6ef-4c09-a2ff-5b07d0f93779">
Feb 28 10:10:41 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <system>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="serial">9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="uuid">9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </system>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <os>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </os>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <features>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </features>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk">
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config">
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:10:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:36:78:c1"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <target dev="tap188d9948-e6"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/console.log" append="off"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <video>
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </video>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:10:41 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:10:41 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:10:41 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:10:41 compute-0 nova_compute[243452]: </domain>
Feb 28 10:10:41 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.178 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Preparing to wait for external event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.178 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.179 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.179 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.180 243456 DEBUG nova.virt.libvirt.vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:10:35Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.181 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.181 243456 DEBUG nova.network.os_vif_util [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.182 243456 DEBUG os_vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.188 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap188d9948-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.188 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap188d9948-e6, col_values=(('external_ids', {'iface-id': '188d9948-e6ef-4c09-a2ff-5b07d0f93779', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:78:c1', 'vm-uuid': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:41 compute-0 NetworkManager[49805]: <info>  [1772273441.1913] manager: (tap188d9948-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.199 243456 INFO os_vif [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6')
Feb 28 10:10:41 compute-0 ceph-mon[76304]: pgmap v1315: 305 pgs: 305 active+clean; 266 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 115 op/s
Feb 28 10:10:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3217312333' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/41945769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.252 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.253 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.253 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:36:78:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.254 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Using config drive
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.277 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.936 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Creating config drive at /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config
Feb 28 10:10:41 compute-0 nova_compute[243452]: 2026-02-28 10:10:41.943 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjwcx03v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 279 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 4.7 MiB/s wr, 107 op/s
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.091 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjwcx03v" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.128 243456 DEBUG nova.storage.rbd_utils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.133 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.257 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updated VIF entry in instance network info cache for port 188d9948-e6ef-4c09-a2ff-5b07d0f93779. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.258 243456 DEBUG nova.network.neutron [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [{"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.274 243456 DEBUG oslo_concurrency.lockutils [req-c67e4584-8891-4ce0-8c7b-3af5375d64d3 req-9e67664c-2908-40f2-a697-1bd94d2fedde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.293 243456 DEBUG oslo_concurrency.processutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.294 243456 INFO nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deleting local config drive /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab/disk.config because it was imported into RBD.
Feb 28 10:10:42 compute-0 ceph-mon[76304]: pgmap v1316: 305 pgs: 305 active+clean; 279 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 4.7 MiB/s wr, 107 op/s
Feb 28 10:10:42 compute-0 kernel: tap188d9948-e6: entered promiscuous mode
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.3456] manager: (tap188d9948-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Feb 28 10:10:42 compute-0 ovn_controller[146846]: 2026-02-28T10:10:42Z|00481|binding|INFO|Claiming lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 for this chassis.
Feb 28 10:10:42 compute-0 ovn_controller[146846]: 2026-02-28T10:10:42Z|00482|binding|INFO|188d9948-e6ef-4c09-a2ff-5b07d0f93779: Claiming fa:16:3e:36:78:c1 10.100.0.10
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 ovn_controller[146846]: 2026-02-28T10:10:42Z|00483|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 ovn-installed in OVS
Feb 28 10:10:42 compute-0 ovn_controller[146846]: 2026-02-28T10:10:42Z|00484|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 up in Southbound
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.357 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:78:c1 10.100.0.10'], port_security=['fa:16:3e:36:78:c1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=188d9948-e6ef-4c09-a2ff-5b07d0f93779) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.360 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.360 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.362 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.376 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ee2a54-c667-461f-a37c-8f07cc6738c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.377 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.380 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2115bdb4-6eb7-4c08-8da8-5774689b0a61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.381 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ef553-cb13-4243-b154-14577aabe715]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 systemd-machined[209480]: New machine qemu-66-instance-0000003b.
Feb 28 10:10:42 compute-0 systemd-udevd[294038]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.3965] device (tap188d9948-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.3972] device (tap188d9948-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.396 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[95476e4a-d88d-4af2-945b-9d6496247a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34c4444f-c7e8-491e-af3b-99e0b95bc768]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.454 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7c72b6-3a9d-4881-abd4-3cca3fd4fe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.4625] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.463 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd759ce-8879-49d3-be5f-79482f461feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.496 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c51d37a5-c976-447b-8ec2-8f9a67ff4aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.499 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[241a126d-adc9-4c8c-af08-89d9f47f00ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.5282] device (tap8e92100d-80): carrier: link connected
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.536 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb475c7-ae4a-4194-b168-c8d1a22b8ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8634bd9-f85f-4b06-86b5-08b3ed8eb56a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494981, 'reachable_time': 27357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294070, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.574 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88f53669-0d1a-4cc0-898e-8eb7b1812aea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494981, 'tstamp': 494981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294071, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3204a05-db1d-49d6-a8b4-afe1eecfdf81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494981, 'reachable_time': 27357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294072, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.617 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4593e698-3fe6-4d66-846e-cc7d0e20efd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9eab880f-e357-47bf-a4d6-b7e3bbb1461b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.675 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.675 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 10:10:42 compute-0 NetworkManager[49805]: <info>  [1772273442.6781] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.688 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 ovn_controller[146846]: 2026-02-28T10:10:42Z|00485|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.693 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f566d028-8e4c-406e-951b-832b774f8057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.695 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:10:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:42.695 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.721 243456 DEBUG nova.compute.manager [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG oslo_concurrency.lockutils [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:42 compute-0 nova_compute[243452]: 2026-02-28 10:10:42.722 243456 DEBUG nova.compute.manager [req-0d2e3ba7-8183-44d7-80cb-9fced15822fa req-0207ba0d-9d58-41b3-8d4f-d37fccb1a5f1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Processing event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:10:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:43 compute-0 podman[294104]: 2026-02-28 10:10:43.10798511 +0000 UTC m=+0.061496089 container create 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:10:43 compute-0 systemd[1]: Started libpod-conmon-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope.
Feb 28 10:10:43 compute-0 podman[294104]: 2026-02-28 10:10:43.074999123 +0000 UTC m=+0.028510152 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:10:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7eda6696ef10c33dbfdcc6c094076f11bfa23cb58683d0b70ea823785b7457/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:10:43 compute-0 podman[294104]: 2026-02-28 10:10:43.198336587 +0000 UTC m=+0.151847546 container init 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:10:43 compute-0 podman[294104]: 2026-02-28 10:10:43.204463639 +0000 UTC m=+0.157974608 container start 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:10:43 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : New worker (294166) forked
Feb 28 10:10:43 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : Loading success.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.275 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.276 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2749145, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.276 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Started (Lifecycle Event)
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.280 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.284 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance spawned successfully.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.285 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.309 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.315 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.318 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.319 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.320 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.320 243456 DEBUG nova.virt.libvirt.driver [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.366 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.366 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2791846, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.367 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Paused (Lifecycle Event)
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.393 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.397 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273443.2797794, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.397 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Resumed (Lifecycle Event)
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.401 243456 INFO nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 8.33 seconds to spawn the instance on the hypervisor.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.401 243456 DEBUG nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.430 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.433 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.458 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.474 243456 INFO nova.compute.manager [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 9.67 seconds to build instance.
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.522 243456 DEBUG oslo_concurrency.lockutils [None req-c02ec95f-5ed0-424d-9833-62e821f0c338 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:43 compute-0 nova_compute[243452]: 2026-02-28 10:10:43.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.830 243456 DEBUG nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG oslo_concurrency.lockutils [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.831 243456 DEBUG nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.832 243456 WARNING nova.compute.manager [req-ba61098d-c4c5-4837-a085-cd0a8d41c9b8 req-a2231f53-a69e-430f-8128-43c2932934ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state active and task_state None.
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.859 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.860 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.860 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.864 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.865 243456 DEBUG nova.objects.instance [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'flavor' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:44 compute-0 nova_compute[243452]: 2026-02-28 10:10:44.903 243456 DEBUG nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:10:45 compute-0 ceph-mon[76304]: pgmap v1317: 305 pgs: 305 active+clean; 279 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Feb 28 10:10:45 compute-0 nova_compute[243452]: 2026-02-28 10:10:45.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:10:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Feb 28 10:10:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:10:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1450302044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:10:46 compute-0 nova_compute[243452]: 2026-02-28 10:10:46.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:47 compute-0 ceph-mon[76304]: pgmap v1318: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Feb 28 10:10:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 164 op/s
Feb 28 10:10:48 compute-0 nova_compute[243452]: 2026-02-28 10:10:48.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.057 243456 DEBUG nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.102 243456 INFO nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.103 243456 DEBUG nova.objects.instance [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:49 compute-0 ceph-mon[76304]: pgmap v1319: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 164 op/s
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.364 243456 INFO nova.virt.libvirt.driver [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.485 243456 DEBUG nova.virt.libvirt.imagebackend [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:10:49 compute-0 nova_compute[243452]: 2026-02-28 10:10:49.730 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(7f8bac8210f6497cb3b3fd7287b74f64) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:10:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 145 op/s
Feb 28 10:10:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb 28 10:10:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb 28 10:10:50 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb 28 10:10:50 compute-0 nova_compute[243452]: 2026-02-28 10:10:50.183 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@7f8bac8210f6497cb3b3fd7287b74f64 to images/566c962b-ab07-4ea4-8c4a-daa71c23c042 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:10:50 compute-0 nova_compute[243452]: 2026-02-28 10:10:50.274 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/566c962b-ab07-4ea4-8c4a-daa71c23c042 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:10:50 compute-0 nova_compute[243452]: 2026-02-28 10:10:50.663 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(7f8bac8210f6497cb3b3fd7287b74f64) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:10:51 compute-0 ceph-mon[76304]: pgmap v1320: 305 pgs: 305 active+clean; 279 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 145 op/s
Feb 28 10:10:51 compute-0 ceph-mon[76304]: osdmap e210: 3 total, 3 up, 3 in
Feb 28 10:10:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb 28 10:10:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb 28 10:10:51 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb 28 10:10:51 compute-0 nova_compute[243452]: 2026-02-28 10:10:51.193 243456 DEBUG nova.storage.rbd_utils [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(566c962b-ab07-4ea4-8c4a-daa71c23c042) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:10:51 compute-0 nova_compute[243452]: 2026-02-28 10:10:51.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 291 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Feb 28 10:10:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb 28 10:10:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb 28 10:10:52 compute-0 ceph-mon[76304]: osdmap e211: 3 total, 3 up, 3 in
Feb 28 10:10:52 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb 28 10:10:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:53 compute-0 ceph-mon[76304]: pgmap v1323: 305 pgs: 305 active+clean; 291 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Feb 28 10:10:53 compute-0 ceph-mon[76304]: osdmap e212: 3 total, 3 up, 3 in
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.194892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453194936, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2310, "num_deletes": 263, "total_data_size": 3420643, "memory_usage": 3474160, "flush_reason": "Manual Compaction"}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453219614, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3344023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25895, "largest_seqno": 28204, "table_properties": {"data_size": 3333210, "index_size": 7051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22482, "raw_average_key_size": 21, "raw_value_size": 3311568, "raw_average_value_size": 3106, "num_data_blocks": 307, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273283, "oldest_key_time": 1772273283, "file_creation_time": 1772273453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24809 microseconds, and 6947 cpu microseconds.
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.219690) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3344023 bytes OK
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.219726) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221580) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221610) EVENT_LOG_v1 {"time_micros": 1772273453221602, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.221647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3410774, prev total WAL file size 3410774, number of live WAL files 2.
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.223864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3265KB)], [59(6902KB)]
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453223929, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10411853, "oldest_snapshot_seqno": -1}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5429 keys, 8757292 bytes, temperature: kUnknown
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453322645, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8757292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719381, "index_size": 23209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 134990, "raw_average_key_size": 24, "raw_value_size": 8620268, "raw_average_value_size": 1587, "num_data_blocks": 951, "num_entries": 5429, "num_filter_entries": 5429, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.323102) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8757292 bytes
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.330702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.3 rd, 88.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5961, records dropped: 532 output_compression: NoCompression
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.330756) EVENT_LOG_v1 {"time_micros": 1772273453330739, "job": 32, "event": "compaction_finished", "compaction_time_micros": 98834, "compaction_time_cpu_micros": 19499, "output_level": 6, "num_output_files": 1, "total_output_size": 8757292, "num_input_records": 5961, "num_output_records": 5429, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453331962, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273453333100, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.223676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:10:53.333220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:10:53 compute-0 nova_compute[243452]: 2026-02-28 10:10:53.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:53 compute-0 nova_compute[243452]: 2026-02-28 10:10:53.971 243456 INFO nova.virt.libvirt.driver [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete
Feb 28 10:10:53 compute-0 nova_compute[243452]: 2026-02-28 10:10:53.971 243456 INFO nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 4.85 seconds to snapshot the instance on the hypervisor.
Feb 28 10:10:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 311 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 10:10:54 compute-0 nova_compute[243452]: 2026-02-28 10:10:54.291 243456 DEBUG nova.compute.manager [None req-25e57c54-eaaf-4bde-b34e-43ef37c5f5b7 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 28 10:10:54 compute-0 nova_compute[243452]: 2026-02-28 10:10:54.950 243456 DEBUG nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:10:55 compute-0 ceph-mon[76304]: pgmap v1325: 305 pgs: 305 active+clean; 311 MiB data, 633 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 10:10:56 compute-0 nova_compute[243452]: 2026-02-28 10:10:56.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:56.047 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:10:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:56.048 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:10:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 381 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 259 op/s
Feb 28 10:10:56 compute-0 nova_compute[243452]: 2026-02-28 10:10:56.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 ceph-mon[76304]: pgmap v1326: 305 pgs: 305 active+clean; 381 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 259 op/s
Feb 28 10:10:57 compute-0 kernel: tap188d9948-e6 (unregistering): left promiscuous mode
Feb 28 10:10:57 compute-0 NetworkManager[49805]: <info>  [1772273457.2556] device (tap188d9948-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:10:57 compute-0 ovn_controller[146846]: 2026-02-28T10:10:57Z|00486|binding|INFO|Releasing lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 from this chassis (sb_readonly=0)
Feb 28 10:10:57 compute-0 ovn_controller[146846]: 2026-02-28T10:10:57Z|00487|binding|INFO|Setting lport 188d9948-e6ef-4c09-a2ff-5b07d0f93779 down in Southbound
Feb 28 10:10:57 compute-0 ovn_controller[146846]: 2026-02-28T10:10:57Z|00488|binding|INFO|Removing iface tap188d9948-e6 ovn-installed in OVS
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.275 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.278 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:78:c1 10.100.0.10'], port_security=['fa:16:3e:36:78:c1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=188d9948-e6ef-4c09-a2ff-5b07d0f93779) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.279 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 188d9948-e6ef-4c09-a2ff-5b07d0f93779 in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.281 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[569bccc8-fc15-42db-a107-634a59970946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.282 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore
Feb 28 10:10:57 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Feb 28 10:10:57 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Consumed 12.248s CPU time.
Feb 28 10:10:57 compute-0 systemd-machined[209480]: Machine qemu-66-instance-0000003b terminated.
Feb 28 10:10:57 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : haproxy version is 2.8.14-c23fe91
Feb 28 10:10:57 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [NOTICE]   (294164) : path to executable is /usr/sbin/haproxy
Feb 28 10:10:57 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [WARNING]  (294164) : Exiting Master process...
Feb 28 10:10:57 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [ALERT]    (294164) : Current worker (294166) exited with code 143 (Terminated)
Feb 28 10:10:57 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[294144]: [WARNING]  (294164) : All workers exited. Exiting... (0)
Feb 28 10:10:57 compute-0 systemd[1]: libpod-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope: Deactivated successfully.
Feb 28 10:10:57 compute-0 conmon[294144]: conmon 65cb934f7051612ccce1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope/container/memory.events
Feb 28 10:10:57 compute-0 podman[294340]: 2026-02-28 10:10:57.415422584 +0000 UTC m=+0.051195309 container died 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7-userdata-shm.mount: Deactivated successfully.
Feb 28 10:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e7eda6696ef10c33dbfdcc6c094076f11bfa23cb58683d0b70ea823785b7457-merged.mount: Deactivated successfully.
Feb 28 10:10:57 compute-0 podman[294340]: 2026-02-28 10:10:57.454014248 +0000 UTC m=+0.089786973 container cleanup 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:10:57 compute-0 systemd[1]: libpod-conmon-65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7.scope: Deactivated successfully.
Feb 28 10:10:57 compute-0 podman[294369]: 2026-02-28 10:10:57.524140767 +0000 UTC m=+0.049200503 container remove 65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2319094f-789b-4eaa-95ba-cec7eadcfc5d]: (4, ('Sat Feb 28 10:10:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7)\n65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7\nSat Feb 28 10:10:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7)\n65cb934f7051612ccce1a27027c9d9c2fdd1f713911a5fcbfb7936b89e4019e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.532 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91bd963c-27ef-41b7-81b5-2b5ec546fc0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.533 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.535 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.549 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76d7615d-ea33-4e77-897e-7d2d836d4c76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.567 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[299a1d6b-5bb9-44a9-ba6a-96c8218222ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.569 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec40e64d-2e93-4f8a-b4d2-c88be4941441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.590 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[892d3980-98f9-4fc2-bbd8-c02b8ccf9594]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494973, 'reachable_time': 36485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294398, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.593 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.593 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[058e3bec-23ea-498a-8da1-b103af2baca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:10:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.849 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.850 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:10:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.969 243456 INFO nova.virt.libvirt.driver [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance shutdown successfully after 13 seconds.
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.975 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance destroyed successfully.
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.975 243456 DEBUG nova.objects.instance [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:10:57 compute-0 nova_compute[243452]: 2026-02-28 10:10:57.990 243456 DEBUG nova.compute.manager [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:10:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:10:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb 28 10:10:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb 28 10:10:58 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.042 243456 DEBUG oslo_concurrency.lockutils [None req-e9f85009-e662-400f-8c01-36e7e6eeea40 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 388 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.1 MiB/s wr, 206 op/s
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.279 243456 DEBUG nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG oslo_concurrency.lockutils [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 DEBUG nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.280 243456 WARNING nova.compute.manager [req-660511db-4d59-4cff-a7b8-dd512b52da63 req-9c6bb3b8-6bb5-4fbc-bd04-036208b65c8a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-unplugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state stopped and task_state None.
Feb 28 10:10:58 compute-0 nova_compute[243452]: 2026-02-28 10:10:58.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:10:59 compute-0 ceph-mon[76304]: osdmap e213: 3 total, 3 up, 3 in
Feb 28 10:10:59 compute-0 ceph-mon[76304]: pgmap v1328: 305 pgs: 305 active+clean; 388 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.1 MiB/s wr, 206 op/s
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 7.9 MiB/s wr, 192 op/s
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.523 243456 DEBUG nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.846 243456 DEBUG nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.847 243456 DEBUG oslo_concurrency.lockutils [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.848 243456 DEBUG nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] No waiting events found dispatching network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.848 243456 WARNING nova.compute.manager [req-0cf0ee96-230f-4150-adcf-5c208c3fd83d req-2908bb04-1c66-4869-8559-2f0ac1a8e635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received unexpected event network-vif-plugged-188d9948-e6ef-4c09-a2ff-5b07d0f93779 for instance with vm_state stopped and task_state deleting.
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.876 243456 INFO nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.877 243456 DEBUG nova.objects.instance [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.879 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.880 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.880 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.881 243456 INFO nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Terminating instance
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.882 243456 DEBUG nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.889 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Instance destroyed successfully.
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.890 243456 DEBUG nova.objects.instance [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.908 243456 DEBUG nova.virt.libvirt.vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:10:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-389107082',display_name='tempest-DeleteServersTestJSON-server-389107082',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-389107082',id=59,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-p13pmd0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:10:58Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.909 243456 DEBUG nova.network.os_vif_util [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "address": "fa:16:3e:36:78:c1", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap188d9948-e6", "ovs_interfaceid": "188d9948-e6ef-4c09-a2ff-5b07d0f93779", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.910 243456 DEBUG nova.network.os_vif_util [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.910 243456 DEBUG os_vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.914 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.914 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap188d9948-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.917 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:00 compute-0 nova_compute[243452]: 2026-02-28 10:11:00.924 243456 INFO os_vif [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:78:c1,bridge_name='br-int',has_traffic_filtering=True,id=188d9948-e6ef-4c09-a2ff-5b07d0f93779,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap188d9948-e6')
Feb 28 10:11:01 compute-0 ceph-mon[76304]: pgmap v1329: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 7.9 MiB/s wr, 192 op/s
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.179 243456 INFO nova.virt.libvirt.driver [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deleting instance files /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_del
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.180 243456 INFO nova.virt.libvirt.driver [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deletion of /var/lib/nova/instances/9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab_del complete
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.226 243456 INFO nova.compute.manager [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 0.34 seconds to destroy the instance on the hypervisor.
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG oslo.service.loopingcall [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.227 243456 DEBUG nova.network.neutron [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.295 243456 INFO nova.virt.libvirt.driver [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.457 243456 DEBUG nova.virt.libvirt.imagebackend [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.876 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(0d10d9664e754370883f1e992dc7293a) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.918 243456 DEBUG nova.network.neutron [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.933 243456 INFO nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Took 0.71 seconds to deallocate network for instance.
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.955 243456 DEBUG nova.compute.manager [req-9b016cf7-9658-4a26-a1c8-ae13cd52b86e req-bb34fbc4-d4ef-4f65-85af-197a59412670 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Received event network-vif-deleted-188d9948-e6ef-4c09-a2ff-5b07d0f93779 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.972 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:01 compute-0 nova_compute[243452]: 2026-02-28 10:11:01.973 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.046 243456 DEBUG oslo_concurrency.processutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.4 MiB/s wr, 156 op/s
Feb 28 10:11:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb 28 10:11:02 compute-0 podman[294470]: 2026-02-28 10:11:02.124094104 +0000 UTC m=+0.056397904 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true)
Feb 28 10:11:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb 28 10:11:02 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.173 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@0d10d9664e754370883f1e992dc7293a to images/31c8918a-4cbd-4459-b04d-e55d79f71575 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:11:02 compute-0 podman[294469]: 2026-02-28 10:11:02.175023865 +0000 UTC m=+0.105630488 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.267 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/31c8918a-4cbd-4459-b04d-e55d79f71575 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.638 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(0d10d9664e754370883f1e992dc7293a) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:11:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268796098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.680 243456 DEBUG oslo_concurrency.processutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.686 243456 DEBUG nova.compute.provider_tree [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.707 243456 DEBUG nova.scheduler.client.report [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.732 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.765 243456 INFO nova.scheduler.client.report [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab
Feb 28 10:11:02 compute-0 ovn_controller[146846]: 2026-02-28T10:11:02Z|00489|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:02 compute-0 nova_compute[243452]: 2026-02-28 10:11:02.840 243456 DEBUG oslo_concurrency.lockutils [None req-c6506249-cbea-4f35-b81e-a291bc65aa7e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:03 compute-0 ceph-mon[76304]: pgmap v1330: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.4 MiB/s wr, 156 op/s
Feb 28 10:11:03 compute-0 ceph-mon[76304]: osdmap e214: 3 total, 3 up, 3 in
Feb 28 10:11:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/268796098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb 28 10:11:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb 28 10:11:03 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb 28 10:11:03 compute-0 nova_compute[243452]: 2026-02-28 10:11:03.179 243456 DEBUG nova.storage.rbd_utils [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(31c8918a-4cbd-4459-b04d-e55d79f71575) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:11:03 compute-0 nova_compute[243452]: 2026-02-28 10:11:03.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 390 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Feb 28 10:11:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb 28 10:11:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb 28 10:11:04 compute-0 ceph-mon[76304]: osdmap e215: 3 total, 3 up, 3 in
Feb 28 10:11:04 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb 28 10:11:05 compute-0 ceph-mon[76304]: pgmap v1333: 305 pgs: 305 active+clean; 390 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Feb 28 10:11:05 compute-0 ceph-mon[76304]: osdmap e216: 3 total, 3 up, 3 in
Feb 28 10:11:05 compute-0 nova_compute[243452]: 2026-02-28 10:11:05.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:05 compute-0 nova_compute[243452]: 2026-02-28 10:11:05.945 243456 INFO nova.virt.libvirt.driver [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete
Feb 28 10:11:05 compute-0 nova_compute[243452]: 2026-02-28 10:11:05.946 243456 INFO nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 5.03 seconds to snapshot the instance on the hypervisor.
Feb 28 10:11:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:06.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 391 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 209 op/s
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.337 243456 DEBUG nova.compute.manager [None req-d8e93841-fd24-48d8-9803-1f1a2d17c604 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.588 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.589 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.614 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.779 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.779 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.787 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:11:06 compute-0 nova_compute[243452]: 2026-02-28 10:11:06.788 243456 INFO nova.compute.claims [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.066 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:07 compute-0 ceph-mon[76304]: pgmap v1335: 305 pgs: 305 active+clean; 391 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 209 op/s
Feb 28 10:11:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960429364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.622 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.629 243456 DEBUG nova.compute.provider_tree [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.644 243456 DEBUG nova.scheduler.client.report [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.665 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.666 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.731 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.732 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.754 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.765 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.774 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.840 243456 INFO nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] instance snapshotting
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.841 243456 DEBUG nova.objects.instance [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.897 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.899 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.899 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating image(s)
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.924 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.951 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.976 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:07 compute-0 nova_compute[243452]: 2026-02-28 10:11:07.981 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.017 243456 DEBUG nova.policy [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33855957e5e3480b850c2ddef62a5f89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a5095f810f0d431788237ae1da262bf6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.052 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.053 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.054 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.054 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 212 op/s
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.077 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.082 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e4c0569-eb76-4874-89e2-751c8237a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1960429364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.216 243456 INFO nova.virt.libvirt.driver [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning live snapshot process
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.365 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9e4c0569-eb76-4874-89e2-751c8237a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.399 243456 DEBUG nova.virt.libvirt.imagebackend [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.443 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] resizing rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.541 243456 DEBUG nova.objects.instance [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.563 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.564 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Ensure instance console log exists: /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.565 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.565 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.566 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.663 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(58a67ab8e75c40eaa402f557cd0cffe6) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:11:08 compute-0 nova_compute[243452]: 2026-02-28 10:11:08.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:09 compute-0 nova_compute[243452]: 2026-02-28 10:11:09.131 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Successfully created port: 92050f95-5357-4df4-bb17-7553685a3edb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:11:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb 28 10:11:09 compute-0 ceph-mon[76304]: pgmap v1336: 305 pgs: 305 active+clean; 391 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 212 op/s
Feb 28 10:11:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb 28 10:11:09 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb 28 10:11:09 compute-0 nova_compute[243452]: 2026-02-28 10:11:09.459 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@58a67ab8e75c40eaa402f557cd0cffe6 to images/a3ff9934-08db-4bb8-903b-7c981dea1b00 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:11:09 compute-0 nova_compute[243452]: 2026-02-28 10:11:09.607 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/a3ff9934-08db-4bb8-903b-7c981dea1b00 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:11:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 406 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 184 op/s
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.187 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Successfully updated port: 92050f95-5357-4df4-bb17-7553685a3edb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.210 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.211 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquired lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.211 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.281 243456 DEBUG nova.compute.manager [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-changed-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.282 243456 DEBUG nova.compute.manager [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Refreshing instance network info cache due to event network-changed-92050f95-5357-4df4-bb17-7553685a3edb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.283 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.360 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.420 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:11:10 compute-0 ceph-mon[76304]: osdmap e217: 3 total, 3 up, 3 in
Feb 28 10:11:10 compute-0 ceph-mon[76304]: pgmap v1338: 305 pgs: 305 active+clean; 406 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 184 op/s
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.728 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(58a67ab8e75c40eaa402f557cd0cffe6) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:11:10 compute-0 nova_compute[243452]: 2026-02-28 10:11:10.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 28 10:11:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb 28 10:11:11 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.550 243456 DEBUG nova.storage.rbd_utils [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(a3ff9934-08db-4bb8-903b-7c981dea1b00) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.874 243456 DEBUG nova.network.neutron [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.895 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Releasing lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.896 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance network_info: |[{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.896 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.897 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Refreshing network info cache for port 92050f95-5357-4df4-bb17-7553685a3edb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.899 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start _get_guest_xml network_info=[{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.905 243456 WARNING nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.910 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.911 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.921 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.922 243456 DEBUG nova.virt.libvirt.host [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.922 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.923 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.923 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.924 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.925 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.926 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.926 243456 DEBUG nova.virt.hardware [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:11:11 compute-0 nova_compute[243452]: 2026-02-28 10:11:11.929 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 453 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.7 MiB/s wr, 220 op/s
Feb 28 10:11:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980146945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.469 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.505 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.509 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.575 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273457.49659, 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.576 243456 INFO nova.compute.manager [-] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] VM Stopped (Lifecycle Event)
Feb 28 10:11:12 compute-0 nova_compute[243452]: 2026-02-28 10:11:12.607 243456 DEBUG nova.compute.manager [None req-f107b39f-15b2-4c7a-8c00-ae63e5fbd145 - - - - - -] [instance: 9b0bdcbe-2922-46dc-a2d3-6c4042cc85ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb 28 10:11:12 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb 28 10:11:12 compute-0 ceph-mon[76304]: osdmap e218: 3 total, 3 up, 3 in
Feb 28 10:11:12 compute-0 ceph-mon[76304]: pgmap v1340: 305 pgs: 305 active+clean; 453 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.7 MiB/s wr, 220 op/s
Feb 28 10:11:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2980146945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb 28 10:11:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb 28 10:11:13 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb 28 10:11:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300921922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.203 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.205 243456 DEBUG nova.virt.libvirt.vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.206 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.207 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.208 243456 DEBUG nova.objects.instance [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.229 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <uuid>9e4c0569-eb76-4874-89e2-751c8237a762</uuid>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <name>instance-0000003c</name>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:name>tempest-DeleteServersTestJSON-server-837461685</nova:name>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:11:11</nova:creationTime>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:user uuid="33855957e5e3480b850c2ddef62a5f89">tempest-DeleteServersTestJSON-515886650-project-member</nova:user>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:project uuid="a5095f810f0d431788237ae1da262bf6">tempest-DeleteServersTestJSON-515886650</nova:project>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <nova:port uuid="92050f95-5357-4df4-bb17-7553685a3edb">
Feb 28 10:11:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="serial">9e4c0569-eb76-4874-89e2-751c8237a762</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="uuid">9e4c0569-eb76-4874-89e2-751c8237a762</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9e4c0569-eb76-4874-89e2-751c8237a762_disk">
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9e4c0569-eb76-4874-89e2-751c8237a762_disk.config">
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b3:0b:82"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <target dev="tap92050f95-53"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/console.log" append="off"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:11:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:11:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:11:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:11:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:11:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.231 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Preparing to wait for external event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.232 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.233 243456 DEBUG nova.virt.libvirt.vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:07Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.234 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.234 243456 DEBUG nova.network.os_vif_util [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.235 243456 DEBUG os_vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.241 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92050f95-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.242 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92050f95-53, col_values=(('external_ids', {'iface-id': '92050f95-5357-4df4-bb17-7553685a3edb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:0b:82', 'vm-uuid': '9e4c0569-eb76-4874-89e2-751c8237a762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.244 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:13 compute-0 NetworkManager[49805]: <info>  [1772273473.2453] manager: (tap92050f95-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.250 243456 INFO os_vif [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53')
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.330 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.332 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.332 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] No VIF found with MAC fa:16:3e:b3:0b:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.333 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Using config drive
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.371 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.454 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updated VIF entry in instance network info cache for port 92050f95-5357-4df4-bb17-7553685a3edb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.455 243456 DEBUG nova.network.neutron [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [{"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.471 243456 DEBUG oslo_concurrency.lockutils [req-29067d24-295f-4c2b-a519-fae98e30cca4 req-c47f7bc0-1bd8-43f4-8a9b-cd1b83487a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9e4c0569-eb76-4874-89e2-751c8237a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.727 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Creating config drive at /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.731 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuwxce3sc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:13 compute-0 ceph-mon[76304]: osdmap e219: 3 total, 3 up, 3 in
Feb 28 10:11:13 compute-0 ceph-mon[76304]: osdmap e220: 3 total, 3 up, 3 in
Feb 28 10:11:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2300921922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.878 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuwxce3sc" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.902 243456 DEBUG nova.storage.rbd_utils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] rbd image 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:13 compute-0 nova_compute[243452]: 2026-02-28 10:11:13.907 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.059 243456 DEBUG oslo_concurrency.processutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config 9e4c0569-eb76-4874-89e2-751c8237a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.060 243456 INFO nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deleting local config drive /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762/disk.config because it was imported into RBD.
Feb 28 10:11:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 494 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 231 op/s
Feb 28 10:11:14 compute-0 kernel: tap92050f95-53: entered promiscuous mode
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.0929] manager: (tap92050f95-53): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 ovn_controller[146846]: 2026-02-28T10:11:14Z|00490|binding|INFO|Claiming lport 92050f95-5357-4df4-bb17-7553685a3edb for this chassis.
Feb 28 10:11:14 compute-0 ovn_controller[146846]: 2026-02-28T10:11:14Z|00491|binding|INFO|92050f95-5357-4df4-bb17-7553685a3edb: Claiming fa:16:3e:b3:0b:82 10.100.0.6
Feb 28 10:11:14 compute-0 ovn_controller[146846]: 2026-02-28T10:11:14Z|00492|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb ovn-installed in OVS
Feb 28 10:11:14 compute-0 ovn_controller[146846]: 2026-02-28T10:11:14Z|00493|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb up in Southbound
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:0b:82 10.100.0.6'], port_security=['fa:16:3e:b3:0b:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e4c0569-eb76-4874-89e2-751c8237a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92050f95-5357-4df4-bb17-7553685a3edb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.111 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92050f95-5357-4df4-bb17-7553685a3edb in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 bound to our chassis
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.113 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:11:14 compute-0 systemd-udevd[295089]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff60738-1d2a-47aa-99df-a8d34345da01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.128 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e92100d-81 in ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:11:14 compute-0 systemd-machined[209480]: New machine qemu-67-instance-0000003c.
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.131 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e92100d-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.131 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe5361d-ddc1-4f93-9781-6b434d1489c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.132 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5664f976-0e25-4432-8e4e-2321e6e80ca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.1359] device (tap92050f95-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.1367] device (tap92050f95-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:11:14 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.147 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c1211fb8-b627-4110-a66a-221a7fc20c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.159 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5735c02b-da90-44ab-8d1a-ab77806ef373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6965186a-dde4-41b5-9938-4196154e7dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.191 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7dafd514-5674-4903-b901-e6575c18d269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.1923] manager: (tap8e92100d-80): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.220 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5160dd0b-7301-42bf-8128-724cb5434124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.224 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[75272fa4-e3a2-4947-8452-5a83c0c06c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.2495] device (tap8e92100d-80): carrier: link connected
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.255 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b11a5ed3-1486-430f-80e3-7a1fbab50187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.273 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d954335f-3902-46ff-ac69-103f333522d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 42312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295122, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.288 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4841681-3c3b-4fc5-9025-34ad6dc33cf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498153, 'tstamp': 498153}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295123, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fce0335-2c30-4e32-af15-17558e785414]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e92100d-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498153, 'reachable_time': 42312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295124, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39cf85db-e7a9-4cd6-8904-f6e85c1f7e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6001ed0-bad2-4714-81ab-9430211f23ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.393 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.393 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.394 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e92100d-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 kernel: tap8e92100d-80: entered promiscuous mode
Feb 28 10:11:14 compute-0 NetworkManager[49805]: <info>  [1772273474.3970] manager: (tap8e92100d-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e92100d-80, col_values=(('external_ids', {'iface-id': 'df60d363-b5aa-4c1b-a9a1-997dfff36799'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:14 compute-0 ovn_controller[146846]: 2026-02-28T10:11:14Z|00494|binding|INFO|Releasing lport df60d363-b5aa-4c1b-a9a1-997dfff36799 from this chassis (sb_readonly=0)
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG nova.compute.manager [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.412 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.413 243456 DEBUG oslo_concurrency.lockutils [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.413 243456 DEBUG nova.compute.manager [req-b18af631-2033-4380-8ef1-73153732aaa8 req-a580fbd0-7af5-4c4d-9384-19346fb0bcc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Processing event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.420 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.423 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f07f8a4c-03cd-4777-8661-e0876fdb36f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.426 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8e92100d-850d-4567-9a5d-269bb15701d5.pid.haproxy
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8e92100d-850d-4567-9a5d-269bb15701d5
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:11:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:14.426 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'env', 'PROCESS_TAG=haproxy-8e92100d-850d-4567-9a5d-269bb15701d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e92100d-850d-4567-9a5d-269bb15701d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.596 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.597 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.595994, 9e4c0569-eb76-4874-89e2-751c8237a762 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.597 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Started (Lifecycle Event)
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.602 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.608 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance spawned successfully.
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.608 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.621 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.635 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.636 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.636 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.637 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.637 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.638 243456 DEBUG nova.virt.libvirt.driver [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.646 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.6002932, 9e4c0569-eb76-4874-89e2-751c8237a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Paused (Lifecycle Event)
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.672 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.677 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273474.6023083, 9e4c0569-eb76-4874-89e2-751c8237a762 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.677 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Resumed (Lifecycle Event)
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.700 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.703 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.714 243456 INFO nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 6.82 seconds to spawn the instance on the hypervisor.
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.715 243456 DEBUG nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.730 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.746 243456 INFO nova.virt.libvirt.driver [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.747 243456 INFO nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 6.87 seconds to snapshot the instance on the hypervisor.
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.787 243456 INFO nova.compute.manager [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 8.12 seconds to build instance.
Feb 28 10:11:14 compute-0 ceph-mon[76304]: pgmap v1343: 305 pgs: 305 active+clean; 494 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 231 op/s
Feb 28 10:11:14 compute-0 nova_compute[243452]: 2026-02-28 10:11:14.813 243456 DEBUG oslo_concurrency.lockutils [None req-50f33b0e-0995-42fd-9c11-6149e2d30f4e 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:14 compute-0 podman[295198]: 2026-02-28 10:11:14.801203209 +0000 UTC m=+0.027172694 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:11:14 compute-0 podman[295198]: 2026-02-28 10:11:14.902642328 +0000 UTC m=+0.128611783 container create 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:11:14 compute-0 systemd[1]: Started libpod-conmon-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope.
Feb 28 10:11:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6fa8301760fba518821551e121f076081ba5a3390aa504e7ff77efc7e192a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:14 compute-0 podman[295198]: 2026-02-28 10:11:14.993833329 +0000 UTC m=+0.219802814 container init 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:11:14 compute-0 podman[295198]: 2026-02-28 10:11:14.999283173 +0000 UTC m=+0.225252628 container start 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.013 243456 DEBUG nova.compute.manager [None req-c23246cb-79d9-42d5-a9e5-f953e06b2c81 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting image 566c962b-ab07-4ea4-8c4a-daa71c23c042 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : New worker (295218) forked
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : Loading success.
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.397 243456 DEBUG nova.objects.instance [None req-7a1cffa1-95f1-408e-9347-d7147770fd10 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.419 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273475.4189055, 9e4c0569-eb76-4874-89e2-751c8237a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Paused (Lifecycle Event)
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.445 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.450 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.479 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:11:15 compute-0 kernel: tap92050f95-53 (unregistering): left promiscuous mode
Feb 28 10:11:15 compute-0 NetworkManager[49805]: <info>  [1772273475.5489] device (tap92050f95-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:15 compute-0 ovn_controller[146846]: 2026-02-28T10:11:15Z|00495|binding|INFO|Releasing lport 92050f95-5357-4df4-bb17-7553685a3edb from this chassis (sb_readonly=0)
Feb 28 10:11:15 compute-0 ovn_controller[146846]: 2026-02-28T10:11:15Z|00496|binding|INFO|Setting lport 92050f95-5357-4df4-bb17-7553685a3edb down in Southbound
Feb 28 10:11:15 compute-0 ovn_controller[146846]: 2026-02-28T10:11:15Z|00497|binding|INFO|Removing iface tap92050f95-53 ovn-installed in OVS
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.567 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:0b:82 10.100.0.6'], port_security=['fa:16:3e:b3:0b:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9e4c0569-eb76-4874-89e2-751c8237a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e92100d-850d-4567-9a5d-269bb15701d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5095f810f0d431788237ae1da262bf6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05124cc2-701f-43d1-96d7-3db27e805ae2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4409f277-3260-44c5-97fd-d8b2121f3d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92050f95-5357-4df4-bb17-7553685a3edb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.569 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92050f95-5357-4df4-bb17-7553685a3edb in datapath 8e92100d-850d-4567-9a5d-269bb15701d5 unbound from our chassis
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.570 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e92100d-850d-4567-9a5d-269bb15701d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c01bc85a-465c-4532-81e6-5fdc724e9275]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.572 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 namespace which is not needed anymore
Feb 28 10:11:15 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Feb 28 10:11:15 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 1.306s CPU time.
Feb 28 10:11:15 compute-0 systemd-machined[209480]: Machine qemu-67-instance-0000003c terminated.
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : haproxy version is 2.8.14-c23fe91
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [NOTICE]   (295216) : path to executable is /usr/sbin/haproxy
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : Exiting Master process...
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : Exiting Master process...
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [ALERT]    (295216) : Current worker (295218) exited with code 143 (Terminated)
Feb 28 10:11:15 compute-0 neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5[295212]: [WARNING]  (295216) : All workers exited. Exiting... (0)
Feb 28 10:11:15 compute-0 systemd[1]: libpod-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope: Deactivated successfully.
Feb 28 10:11:15 compute-0 podman[295251]: 2026-02-28 10:11:15.690190837 +0000 UTC m=+0.041800675 container died 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd6fa8301760fba518821551e121f076081ba5a3390aa504e7ff77efc7e192a-merged.mount: Deactivated successfully.
Feb 28 10:11:15 compute-0 podman[295251]: 2026-02-28 10:11:15.728427831 +0000 UTC m=+0.080037659 container cleanup 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.743 243456 DEBUG nova.compute.manager [None req-7a1cffa1-95f1-408e-9347-d7147770fd10 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:15 compute-0 systemd[1]: libpod-conmon-26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e.scope: Deactivated successfully.
Feb 28 10:11:15 compute-0 podman[295284]: 2026-02-28 10:11:15.783518738 +0000 UTC m=+0.034235143 container remove 26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43732717-a143-4ef9-b5c6-cf485ad12a81]: (4, ('Sat Feb 28 10:11:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e)\n26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e\nSat Feb 28 10:11:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 (26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e)\n26e8a063d07caf5f7ad3bcf495901cc4bd6323eaaeef4d804036be0156de4b0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9db69f-251e-481a-a2a1-b1f0a49de817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e92100d-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:15 compute-0 kernel: tap8e92100d-80: left promiscuous mode
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:15 compute-0 nova_compute[243452]: 2026-02-28 10:11:15.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89c97877-985c-4459-8553-3e99a4944fbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.823 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12660fed-70c0-4e43-b1e0-81011715cdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f1e4da-9882-455f-8fd9-a4eafcfa5eda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[765fdc55-082c-4259-92d1-04851d5ed434]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498146, 'reachable_time': 16156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295310, 'error': None, 'target': 'ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e92100d\x2d850d\x2d4567\x2d9a5d\x2d269bb15701d5.mount: Deactivated successfully.
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.840 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e92100d-850d-4567-9a5d-269bb15701d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:11:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:15.840 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[83d47b80-a01b-45e2-9d08-d860822412a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb 28 10:11:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb 28 10:11:15 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb 28 10:11:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 517 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 212 op/s
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.534 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.535 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.535 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.536 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.537 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.537 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.538 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-unplugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.539 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.540 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.540 243456 DEBUG oslo_concurrency.lockutils [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.541 243456 DEBUG nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] No waiting events found dispatching network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.541 243456 WARNING nova.compute.manager [req-f8c14cdf-65b6-4c18-9a4a-c724d35e93a6 req-adfb952c-a79c-4e8e-9929-0e5d62922316 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received unexpected event network-vif-plugged-92050f95-5357-4df4-bb17-7553685a3edb for instance with vm_state suspended and task_state None.
Feb 28 10:11:16 compute-0 ceph-mon[76304]: osdmap e221: 3 total, 3 up, 3 in
Feb 28 10:11:16 compute-0 ceph-mon[76304]: pgmap v1345: 305 pgs: 305 active+clean; 517 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 212 op/s
Feb 28 10:11:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068071505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:16 compute-0 nova_compute[243452]: 2026-02-28 10:11:16.914 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.002 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.002 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.122 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.123 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3773MB free_disk=59.92144317924976GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.123 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.218 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9e4c0569-eb76-4874-89e2-751c8237a762 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.220 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.325 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.638 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.639 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.640 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.640 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.641 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.642 243456 INFO nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Terminating instance
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.644 243456 DEBUG nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.652 243456 INFO nova.virt.libvirt.driver [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Instance destroyed successfully.
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.652 243456 DEBUG nova.objects.instance [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lazy-loading 'resources' on Instance uuid 9e4c0569-eb76-4874-89e2-751c8237a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.667 243456 DEBUG nova.virt.libvirt.vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-837461685',display_name='tempest-DeleteServersTestJSON-server-837461685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-837461685',id=60,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a5095f810f0d431788237ae1da262bf6',ramdisk_id='',reservation_id='r-4o8foh91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-515886650',owner_user_name='tempest-DeleteServersTestJSON-515886650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:15Z,user_data=None,user_id='33855957e5e3480b850c2ddef62a5f89',uuid=9e4c0569-eb76-4874-89e2-751c8237a762,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.668 243456 DEBUG nova.network.os_vif_util [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converting VIF {"id": "92050f95-5357-4df4-bb17-7553685a3edb", "address": "fa:16:3e:b3:0b:82", "network": {"id": "8e92100d-850d-4567-9a5d-269bb15701d5", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-959057373-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a5095f810f0d431788237ae1da262bf6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92050f95-53", "ovs_interfaceid": "92050f95-5357-4df4-bb17-7553685a3edb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.669 243456 DEBUG nova.network.os_vif_util [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.669 243456 DEBUG os_vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.672 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92050f95-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.678 243456 INFO os_vif [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:0b:82,bridge_name='br-int',has_traffic_filtering=True,id=92050f95-5357-4df4-bb17-7553685a3edb,network=Network(8e92100d-850d-4567-9a5d-269bb15701d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92050f95-53')
Feb 28 10:11:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446873241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4068071505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb 28 10:11:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb 28 10:11:17 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.912 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.918 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.944 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.970 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:11:17 compute-0 nova_compute[243452]: 2026-02-28 10:11:17.971 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.004 243456 INFO nova.virt.libvirt.driver [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deleting instance files /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762_del
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.005 243456 INFO nova.virt.libvirt.driver [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deletion of /var/lib/nova/instances/9e4c0569-eb76-4874-89e2-751c8237a762_del complete
Feb 28 10:11:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb 28 10:11:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb 28 10:11:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.066 243456 INFO nova.compute.manager [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 0.42 seconds to destroy the instance on the hypervisor.
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.066 243456 DEBUG oslo.service.loopingcall [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.067 243456 DEBUG nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.067 243456 DEBUG nova.network.neutron [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:11:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 497 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2446873241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:18 compute-0 ceph-mon[76304]: osdmap e222: 3 total, 3 up, 3 in
Feb 28 10:11:18 compute-0 ceph-mon[76304]: osdmap e223: 3 total, 3 up, 3 in
Feb 28 10:11:18 compute-0 ceph-mon[76304]: pgmap v1348: 305 pgs: 305 active+clean; 497 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.971 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.972 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.995 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 28 10:11:18 compute-0 nova_compute[243452]: 2026-02-28 10:11:18.998 243456 DEBUG nova.network.neutron [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.039 243456 INFO nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Took 0.97 seconds to deallocate network for instance.
Feb 28 10:11:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb 28 10:11:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb 28 10:11:19 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.082 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.093 243456 DEBUG nova.compute.manager [req-5b0f2c84-6c92-4844-b210-e98012590642 req-a0da3bfa-677c-4e14-a877-ea1d05f2afc6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Received event network-vif-deleted-92050f95-5357-4df4-bb17-7553685a3edb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.143 243456 DEBUG oslo_concurrency.processutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.330 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.331 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.332 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.332 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.694 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.721 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:11:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417357746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.746 243456 DEBUG oslo_concurrency.processutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.753 243456 DEBUG nova.compute.provider_tree [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.796 243456 DEBUG nova.scheduler.client.report [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.826 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.830 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.830 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.839 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.839 243456 INFO nova.compute.claims [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.872 243456 INFO nova.scheduler.client.report [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Deleted allocations for instance 9e4c0569-eb76-4874-89e2-751c8237a762
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.936 243456 DEBUG oslo_concurrency.lockutils [None req-5bc871de-e07f-42ed-8606-ab3ce86cdb65 33855957e5e3480b850c2ddef62a5f89 a5095f810f0d431788237ae1da262bf6 - - default default] Lock "9e4c0569-eb76-4874-89e2-751c8237a762" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:19 compute-0 nova_compute[243452]: 2026-02-28 10:11:19.962 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:20 compute-0 ceph-mon[76304]: osdmap e224: 3 total, 3 up, 3 in
Feb 28 10:11:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/417357746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 411 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.4 KiB/s wr, 262 op/s
Feb 28 10:11:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582915146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.542 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.549 243456 DEBUG nova.compute.provider_tree [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.572 243456 DEBUG nova.scheduler.client.report [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.599 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.600 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.665 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.666 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.718 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.736 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.828 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.830 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.831 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating image(s)
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.870 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.905 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.931 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.934 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.977 243456 DEBUG nova.policy [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d03368d5ddc403db8a8315dabe88681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '676657ed4ac447c580a9480d26bd7f87', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:11:20 compute-0 nova_compute[243452]: 2026-02-28 10:11:20.981 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.007 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.008 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.019 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.019 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.020 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.020 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.043 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.047 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:21 compute-0 ceph-mon[76304]: pgmap v1350: 305 pgs: 305 active+clean; 411 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 8.4 KiB/s wr, 262 op/s
Feb 28 10:11:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3582915146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.097 243456 WARNING nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.098 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.099 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.099 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.100 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.137 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.322 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.345 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.376 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.376 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.380 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] resizing rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.475 243456 DEBUG nova.objects.instance [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'migration_context' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.505 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Ensure instance console log exists: /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.506 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.507 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:21 compute-0 nova_compute[243452]: 2026-02-28 10:11:21.928 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Successfully created port: 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:11:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 298 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.5 KiB/s wr, 226 op/s
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.572 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Successfully updated port: 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.590 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.591 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.591 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.666 243456 DEBUG nova.compute.manager [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.666 243456 DEBUG nova.compute.manager [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.667 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:22 compute-0 nova_compute[243452]: 2026-02-28 10:11:22.794 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:11:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb 28 10:11:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb 28 10:11:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb 28 10:11:23 compute-0 ceph-mon[76304]: pgmap v1351: 305 pgs: 305 active+clean; 298 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.5 KiB/s wr, 226 op/s
Feb 28 10:11:23 compute-0 nova_compute[243452]: 2026-02-28 10:11:23.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:23 compute-0 nova_compute[243452]: 2026-02-28 10:11:23.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:11:23 compute-0 nova_compute[243452]: 2026-02-28 10:11:23.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 258 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 1.7 MiB/s wr, 229 op/s
Feb 28 10:11:24 compute-0 ceph-mon[76304]: osdmap e225: 3 total, 3 up, 3 in
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.883 243456 DEBUG nova.network.neutron [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance network_info: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.903 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.904 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.907 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start _get_guest_xml network_info=[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.910 243456 WARNING nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.914 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.915 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.920 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.921 243456 DEBUG nova.virt.libvirt.host [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.922 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.923 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.924 243456 DEBUG nova.virt.hardware [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:11:24 compute-0 nova_compute[243452]: 2026-02-28 10:11:24.927 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:25 compute-0 ceph-mon[76304]: pgmap v1353: 305 pgs: 305 active+clean; 258 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 989 KiB/s rd, 1.7 MiB/s wr, 229 op/s
Feb 28 10:11:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060572511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:25 compute-0 nova_compute[243452]: 2026-02-28 10:11:25.467 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:25 compute-0 nova_compute[243452]: 2026-02-28 10:11:25.498 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:25 compute-0 nova_compute[243452]: 2026-02-28 10:11:25.503 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025441163' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 279 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 2.7 MiB/s wr, 198 op/s
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.077 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.079 243456 DEBUG nova.virt.libvirt.vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.080 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.081 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.083 243456 DEBUG nova.objects.instance [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.108 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <uuid>fc527bc2-3cc2-4ce2-b99e-24d252793d06</uuid>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <name>instance-0000003d</name>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1011485011</nova:name>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:11:24</nova:creationTime>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:user uuid="4d03368d5ddc403db8a8315dabe88681">tempest-AttachInterfacesUnderV243Test-809208084-project-member</nova:user>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:project uuid="676657ed4ac447c580a9480d26bd7f87">tempest-AttachInterfacesUnderV243Test-809208084</nova:project>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <nova:port uuid="52c9c534-2d55-4b9a-bd70-a8114ac975c6">
Feb 28 10:11:26 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <system>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="serial">fc527bc2-3cc2-4ce2-b99e-24d252793d06</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="uuid">fc527bc2-3cc2-4ce2-b99e-24d252793d06</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </system>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <os>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </os>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <features>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </features>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk">
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config">
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:ec:55:cf"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <target dev="tap52c9c534-2d"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/console.log" append="off"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <video>
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </video>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:11:26 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:11:26 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:11:26 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:11:26 compute-0 nova_compute[243452]: </domain>
Feb 28 10:11:26 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.121 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Preparing to wait for external event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.121 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.122 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.122 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.123 243456 DEBUG nova.virt.libvirt.vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.123 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.124 243456 DEBUG nova.network.os_vif_util [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.124 243456 DEBUG os_vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.126 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.126 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.131 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52c9c534-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.132 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52c9c534-2d, col_values=(('external_ids', {'iface-id': '52c9c534-2d55-4b9a-bd70-a8114ac975c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:55:cf', 'vm-uuid': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4060572511' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2025441163' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:26 compute-0 NetworkManager[49805]: <info>  [1772273486.1351] manager: (tap52c9c534-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.141 243456 INFO os_vif [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d')
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.207 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.208 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.209 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] No VIF found with MAC fa:16:3e:ec:55:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.210 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Using config drive
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.242 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.710 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.710 243456 DEBUG nova.network.neutron [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.736 243456 DEBUG oslo_concurrency.lockutils [req-77623f0c-fe49-4d7f-9998-45eeccec05a0 req-fa39db7d-d00a-4d1e-87fa-42231b0035a1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.749 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Creating config drive at /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.755 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphzwezgjo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:26 compute-0 ovn_controller[146846]: 2026-02-28T10:11:26Z|00498|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.882 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.910 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphzwezgjo" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.935 243456 DEBUG nova.storage.rbd_utils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] rbd image fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:26 compute-0 nova_compute[243452]: 2026-02-28 10:11:26.940 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.087 243456 DEBUG oslo_concurrency.processutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config fc527bc2-3cc2-4ce2-b99e-24d252793d06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.088 243456 INFO nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deleting local config drive /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06/disk.config because it was imported into RBD.
Feb 28 10:11:27 compute-0 kernel: tap52c9c534-2d: entered promiscuous mode
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.1374] manager: (tap52c9c534-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Feb 28 10:11:27 compute-0 ovn_controller[146846]: 2026-02-28T10:11:27Z|00499|binding|INFO|Claiming lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 for this chassis.
Feb 28 10:11:27 compute-0 ovn_controller[146846]: 2026-02-28T10:11:27Z|00500|binding|INFO|52c9c534-2d55-4b9a-bd70-a8114ac975c6: Claiming fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:27 compute-0 ovn_controller[146846]: 2026-02-28T10:11:27Z|00501|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 ovn-installed in OVS
Feb 28 10:11:27 compute-0 ovn_controller[146846]: 2026-02-28T10:11:27Z|00502|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 up in Southbound
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.146 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:55:cf 10.100.0.8'], port_security=['fa:16:3e:ec:55:cf 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676657ed4ac447c580a9480d26bd7f87', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27cca83f-7b50-4cdc-b0c5-9a7ed57a0cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=527b0cac-8159-448a-a976-d464a8e38db9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52c9c534-2d55-4b9a-bd70-a8114ac975c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:27 compute-0 ceph-mon[76304]: pgmap v1354: 305 pgs: 305 active+clean; 279 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 757 KiB/s rd, 2.7 MiB/s wr, 198 op/s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 in datapath 0192f192-ffd8-4bb7-b267-d74d97cc6cf5 bound to our chassis
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.151 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0192f192-ffd8-4bb7-b267-d74d97cc6cf5
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66e10842-993b-4b2d-b973-580b3d4db385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.166 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0192f192-f1 in ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.168 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0192f192-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.168 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc6d1b3-77aa-4b79-8091-713550db1567]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9299d201-55b9-471f-866c-f3b56a29fc09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 systemd-udevd[295721]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:27 compute-0 systemd-machined[209480]: New machine qemu-68-instance-0000003d.
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.1890] device (tap52c9c534-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.1896] device (tap52c9c534-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.186 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5307e642-e71b-4d81-9bbc-cbc7ac746865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.198 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d9a670-df1c-447e-898f-53c65eca209c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd368bce-cddc-4610-a770-cbc245e06eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.2345] manager: (tap0192f192-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Feb 28 10:11:27 compute-0 systemd-udevd[295725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.233 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3aeab9a3-e0c8-4a09-88bf-56775b69b12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.252 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.252 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.264 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c298c527-6e27-4c6c-841b-9b0598131ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.268 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba92a5-b36a-451a-be0c-6fe9adba3979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.271 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.2918] device (tap0192f192-f0): carrier: link connected
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dc98b3a1-eaa9-47c8-b553-c99bc4abf112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9af073-73c4-414f-82f3-00638c1a030a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0192f192-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c0:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499457, 'reachable_time': 17927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295754, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3243740b-58a1-47b6-98e5-535a4e451954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c094'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499457, 'tstamp': 499457}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295755, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.354 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ba1d65-6c12-42e2-8b25-a9d8c2e9355b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0192f192-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c0:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499457, 'reachable_time': 17927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295756, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.359 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.360 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.369 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.370 243456 INFO nova.compute.claims [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.389 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2eb2d5f-2117-478a-9de1-24c9d52854b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.420 243456 DEBUG nova.compute.manager [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.421 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.421 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.422 243456 DEBUG oslo_concurrency.lockutils [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.423 243456 DEBUG nova.compute.manager [req-e7e63c1b-a6b3-4963-82e4-be76d6c7eb68 req-8fabb2ba-d072-4114-a3c6-4fe49ead07a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Processing event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6becbbf1-227f-4d4e-bbea-0cb6a5608c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0192f192-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.447 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.447 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0192f192-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:27 compute-0 NetworkManager[49805]: <info>  [1772273487.4506] manager: (tap0192f192-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Feb 28 10:11:27 compute-0 kernel: tap0192f192-f0: entered promiscuous mode
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0192f192-f0, col_values=(('external_ids', {'iface-id': '3526e0c3-6eaf-40f3-a85d-8dfbd6551585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:27 compute-0 ovn_controller[146846]: 2026-02-28T10:11:27Z|00503|binding|INFO|Releasing lport 3526e0c3-6eaf-40f3-a85d-8dfbd6551585 from this chassis (sb_readonly=0)
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.471 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[87dd73c9-5f11-4363-b9e7-39da660f49b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.474 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-0192f192-ffd8-4bb7-b267-d74d97cc6cf5
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.pid.haproxy
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 0192f192-ffd8-4bb7-b267-d74d97cc6cf5
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:11:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:27.475 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'env', 'PROCESS_TAG=haproxy-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0192f192-ffd8-4bb7-b267-d74d97cc6cf5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.579 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.755 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7545323, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.756 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Started (Lifecycle Event)
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.759 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.769 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.792 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.796 243456 INFO nova.virt.libvirt.driver [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance spawned successfully.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.797 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.799 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:27 compute-0 podman[295850]: 2026-02-28 10:11:27.817682586 +0000 UTC m=+0.049325566 container create 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.826 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.827 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7549033, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.828 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Paused (Lifecycle Event)
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.834 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.834 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.835 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.835 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.836 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.836 243456 DEBUG nova.virt.libvirt.driver [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.846 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.850 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273487.7633803, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.850 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Resumed (Lifecycle Event)
Feb 28 10:11:27 compute-0 systemd[1]: Started libpod-conmon-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.879 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.883 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:27 compute-0 podman[295850]: 2026-02-28 10:11:27.791971074 +0000 UTC m=+0.023614064 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd06fea17760e87046dfe6bdf5d2603ded7e2ad74c164232a00dda0bdcd2728/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:27 compute-0 podman[295850]: 2026-02-28 10:11:27.901791938 +0000 UTC m=+0.133434958 container init 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:11:27 compute-0 podman[295850]: 2026-02-28 10:11:27.906697506 +0000 UTC m=+0.138340486 container start 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.909 243456 INFO nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 7.08 seconds to spawn the instance on the hypervisor.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.909 243456 DEBUG nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.911 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:27 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : New worker (295871) forked
Feb 28 10:11:27 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : Loading success.
Feb 28 10:11:27 compute-0 nova_compute[243452]: 2026-02-28 10:11:27.977 243456 INFO nova.compute.manager [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 8.17 seconds to build instance.
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.001 243456 DEBUG oslo_concurrency.lockutils [None req-0dfa37cc-4824-4029-ba1a-b200070f9389 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.001 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.002 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.002 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb 28 10:11:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 279 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 2.4 MiB/s wr, 178 op/s
Feb 28 10:11:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb 28 10:11:28 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb 28 10:11:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849421748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.164 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.170 243456 DEBUG nova.compute.provider_tree [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.207 243456 DEBUG nova.scheduler.client.report [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.228 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.228 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.283 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.283 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.305 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.322 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.374 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.437 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.439 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.439 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating image(s)
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.463 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.492 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.520 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.525 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.574 243456 DEBUG nova.policy [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.609 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.610 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.611 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.611 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.640 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.645 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:28 compute-0 nova_compute[243452]: 2026-02-28 10:11:28.949 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.029 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:11:29
Feb 28 10:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', 'images']
Feb 28 10:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:11:29 compute-0 ceph-mon[76304]: pgmap v1355: 305 pgs: 305 active+clean; 279 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 2.4 MiB/s wr, 178 op/s
Feb 28 10:11:29 compute-0 ceph-mon[76304]: osdmap e226: 3 total, 3 up, 3 in
Feb 28 10:11:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1849421748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.124 243456 DEBUG nova.objects.instance [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.157 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.158 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Ensure instance console log exists: /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.158 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.159 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.546 243456 DEBUG nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.547 243456 DEBUG oslo_concurrency.lockutils [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.548 243456 DEBUG nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.548 243456 WARNING nova.compute.manager [req-64fef0b9-ec3c-4b62-baec-904cd839fbab req-2e4b87ee-ce2a-4ad9-a990-c326f0af680a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received unexpected event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with vm_state active and task_state None.
Feb 28 10:11:29 compute-0 nova_compute[243452]: 2026-02-28 10:11:29.770 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Successfully created port: 2c3f8e94-025d-4aea-97be-c325f6366e0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 298 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:11:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:11:30 compute-0 nova_compute[243452]: 2026-02-28 10:11:30.745 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273475.742595, 9e4c0569-eb76-4874-89e2-751c8237a762 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:30 compute-0 nova_compute[243452]: 2026-02-28 10:11:30.746 243456 INFO nova.compute.manager [-] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] VM Stopped (Lifecycle Event)
Feb 28 10:11:30 compute-0 nova_compute[243452]: 2026-02-28 10:11:30.766 243456 DEBUG nova.compute.manager [None req-b3ba19fe-00b7-4fca-bd5e-c3a2a09639d3 - - - - - -] [instance: 9e4c0569-eb76-4874-89e2-751c8237a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:30 compute-0 sudo[296048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:11:30 compute-0 sudo[296048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:30 compute-0 sudo[296048]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:30 compute-0 sudo[296073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:11:30 compute-0 sudo[296073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: pgmap v1357: 305 pgs: 305 active+clean; 298 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.145 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Successfully updated port: 2c3f8e94-025d-4aea-97be-c325f6366e0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.167 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.167 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.168 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:31 compute-0 sudo[296073]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:11:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:11:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.457 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:11:31 compute-0 sudo[296128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:11:31 compute-0 sudo[296128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:31 compute-0 sudo[296128]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:31 compute-0 sudo[296153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:11:31 compute-0 sudo[296153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.654 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.654 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:31 compute-0 nova_compute[243452]: 2026-02-28 10:11:31.655 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.817552622 +0000 UTC m=+0.043224945 container create 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:11:31 compute-0 systemd[1]: Started libpod-conmon-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope.
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.799503765 +0000 UTC m=+0.025176108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.919842295 +0000 UTC m=+0.145514628 container init 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.927546631 +0000 UTC m=+0.153218944 container start 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.931229645 +0000 UTC m=+0.156901978 container attach 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:11:31 compute-0 systemd[1]: libpod-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope: Deactivated successfully.
Feb 28 10:11:31 compute-0 nostalgic_davinci[296208]: 167 167
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.938409527 +0000 UTC m=+0.164081840 container died 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:11:31 compute-0 conmon[296208]: conmon 282d302fb1928ea268bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope/container/memory.events
Feb 28 10:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d395c1a78465fab8e5ea3fc6b165fd403227ae009991a043febbffb083d7936-merged.mount: Deactivated successfully.
Feb 28 10:11:31 compute-0 podman[296191]: 2026-02-28 10:11:31.974162521 +0000 UTC m=+0.199834834 container remove 282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:11:31 compute-0 systemd[1]: libpod-conmon-282d302fb1928ea268bc8ad9fc69141221501ee48eea77d76ad994452488be2c.scope: Deactivated successfully.
Feb 28 10:11:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 310 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:11:32 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:11:32 compute-0 podman[296231]: 2026-02-28 10:11:32.133108325 +0000 UTC m=+0.040500319 container create fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:11:32 compute-0 systemd[1]: Started libpod-conmon-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope.
Feb 28 10:11:32 compute-0 podman[296231]: 2026-02-28 10:11:32.114393839 +0000 UTC m=+0.021785883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:32 compute-0 podman[296231]: 2026-02-28 10:11:32.244202785 +0000 UTC m=+0.151594779 container init fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:11:32 compute-0 podman[296245]: 2026-02-28 10:11:32.252173099 +0000 UTC m=+0.084322630 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:11:32 compute-0 podman[296231]: 2026-02-28 10:11:32.253493406 +0000 UTC m=+0.160885400 container start fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:11:32 compute-0 podman[296231]: 2026-02-28 10:11:32.256639514 +0000 UTC m=+0.164031508 container attach fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:11:32 compute-0 podman[296263]: 2026-02-28 10:11:32.312985647 +0000 UTC m=+0.092536600 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.377 243456 DEBUG nova.network.neutron [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.399 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.399 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance network_info: |[{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.402 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start _get_guest_xml network_info=[{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.408 243456 WARNING nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.413 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.414 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.417 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.418 243456 DEBUG nova.virt.libvirt.host [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.418 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.419 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.419 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.420 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.421 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.422 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.422 243456 DEBUG nova.virt.hardware [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:11:32 compute-0 nova_compute[243452]: 2026-02-28 10:11:32.428 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:32 compute-0 jolly_lovelace[296261]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:11:32 compute-0 jolly_lovelace[296261]: --> All data devices are unavailable
Feb 28 10:11:32 compute-0 systemd[1]: libpod-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope: Deactivated successfully.
Feb 28 10:11:32 compute-0 podman[296331]: 2026-02-28 10:11:32.787292248 +0000 UTC m=+0.023340457 container died fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6531d7cd8a20913c1be29cbb3616f4391bd1f05b7b843c4454e6638421d5ea80-merged.mount: Deactivated successfully.
Feb 28 10:11:32 compute-0 podman[296331]: 2026-02-28 10:11:32.822609699 +0000 UTC m=+0.058657898 container remove fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:11:32 compute-0 systemd[1]: libpod-conmon-fb174ed62b1ee324f88e21d7e25087b8a58197358548437c8fe68e4431fdc70f.scope: Deactivated successfully.
Feb 28 10:11:32 compute-0 sudo[296153]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:32 compute-0 sudo[296347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:11:32 compute-0 sudo[296347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:32 compute-0 sudo[296347]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012521364' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:32 compute-0 sudo[296372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:11:32 compute-0 sudo[296372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.008 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.036 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.042 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.087 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.088 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:33 compute-0 ceph-mon[76304]: pgmap v1358: 305 pgs: 305 active+clean; 310 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Feb 28 10:11:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4012521364' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-changed-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.107 243456 DEBUG nova.compute.manager [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Refreshing instance network info cache due to event network-changed-2c3f8e94-025d-4aea-97be-c325f6366e0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.108 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Refreshing network info cache for port 2c3f8e94-025d-4aea-97be-c325f6366e0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.269379517 +0000 UTC m=+0.043819012 container create 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:11:33 compute-0 systemd[1]: Started libpod-conmon-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope.
Feb 28 10:11:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.248585573 +0000 UTC m=+0.023025038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.360667291 +0000 UTC m=+0.135106676 container init 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.367633517 +0000 UTC m=+0.142072882 container start 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.370458856 +0000 UTC m=+0.144898241 container attach 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:11:33 compute-0 wonderful_bose[296467]: 167 167
Feb 28 10:11:33 compute-0 systemd[1]: libpod-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope: Deactivated successfully.
Feb 28 10:11:33 compute-0 conmon[296467]: conmon 99266dd6f136a4fff448 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope/container/memory.events
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.375308372 +0000 UTC m=+0.149747747 container died 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b5c2dd933ff798dba6b965134bef652df487cc213b23a65b245849a9af173ae-merged.mount: Deactivated successfully.
Feb 28 10:11:33 compute-0 podman[296450]: 2026-02-28 10:11:33.412292131 +0000 UTC m=+0.186731516 container remove 99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:11:33 compute-0 systemd[1]: libpod-conmon-99266dd6f136a4fff4489e48e3173942b636a7516cf6f04995a11f17307a5cf4.scope: Deactivated successfully.
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.559263969 +0000 UTC m=+0.040872379 container create 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:11:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975269225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:33 compute-0 systemd[1]: Started libpod-conmon-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope.
Feb 28 10:11:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.611 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.614 243456 DEBUG nova.virt.libvirt.vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:28Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.614 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.615 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.617 243456 DEBUG nova.objects.instance [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.625688724 +0000 UTC m=+0.107297144 container init 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.632140825 +0000 UTC m=+0.113749235 container start 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.635 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <uuid>51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</uuid>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <name>instance-0000003e</name>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherB-server-1290428002</nova:name>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:11:32</nova:creationTime>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <nova:port uuid="2c3f8e94-025d-4aea-97be-c325f6366e0d">
Feb 28 10:11:33 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <system>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="serial">51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="uuid">51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </system>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <os>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </os>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <features>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </features>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk">
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config">
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:33 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:81:e3:c3"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <target dev="tap2c3f8e94-02"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/console.log" append="off"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <video>
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </video>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:11:33 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:11:33 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:11:33 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:11:33 compute-0 nova_compute[243452]: </domain>
Feb 28 10:11:33 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.541099738 +0000 UTC m=+0.022708168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.637 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Preparing to wait for external event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.637 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.638 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.638440812 +0000 UTC m=+0.120049302 container attach 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.638 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.639 243456 DEBUG nova.virt.libvirt.vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:28Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.639 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.640 243456 DEBUG nova.network.os_vif_util [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.640 243456 DEBUG os_vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.641 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.641 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.642 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.647 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c3f8e94-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.648 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c3f8e94-02, col_values=(('external_ids', {'iface-id': '2c3f8e94-025d-4aea-97be-c325f6366e0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:e3:c3', 'vm-uuid': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:33 compute-0 NetworkManager[49805]: <info>  [1772273493.6505] manager: (tap2c3f8e94-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.659 243456 INFO os_vif [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02')
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.713 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.715 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.715 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:81:e3:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.716 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Using config drive
Feb 28 10:11:33 compute-0 nova_compute[243452]: 2026-02-28 10:11:33.739 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]: {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     "0": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "devices": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "/dev/loop3"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             ],
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_name": "ceph_lv0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_size": "21470642176",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "name": "ceph_lv0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "tags": {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_name": "ceph",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.crush_device_class": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.encrypted": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.objectstore": "bluestore",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_id": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.vdo": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.with_tpm": "0"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             },
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "vg_name": "ceph_vg0"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         }
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     ],
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     "1": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "devices": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "/dev/loop4"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             ],
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_name": "ceph_lv1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_size": "21470642176",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "name": "ceph_lv1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "tags": {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_name": "ceph",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.crush_device_class": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.encrypted": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.objectstore": "bluestore",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_id": "1",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.vdo": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.with_tpm": "0"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             },
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "vg_name": "ceph_vg1"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         }
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     ],
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     "2": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "devices": [
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "/dev/loop5"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             ],
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_name": "ceph_lv2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_size": "21470642176",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "name": "ceph_lv2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "tags": {
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.cluster_name": "ceph",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.crush_device_class": "",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.encrypted": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.objectstore": "bluestore",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osd_id": "2",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.vdo": "0",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:                 "ceph.with_tpm": "0"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             },
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "type": "block",
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:             "vg_name": "ceph_vg2"
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:         }
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]:     ]
Feb 28 10:11:33 compute-0 xenodochial_taussig[296508]: }
Feb 28 10:11:33 compute-0 systemd[1]: libpod-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope: Deactivated successfully.
Feb 28 10:11:33 compute-0 conmon[296508]: conmon 45d68a8cbe44c23868d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope/container/memory.events
Feb 28 10:11:33 compute-0 podman[296490]: 2026-02-28 10:11:33.956727481 +0000 UTC m=+0.438335921 container died 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:11:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18b65f04999f8950064df8a21662870f4213db0bf5d6ae3437a28fd5d97486c-merged.mount: Deactivated successfully.
Feb 28 10:11:34 compute-0 podman[296490]: 2026-02-28 10:11:34.007821996 +0000 UTC m=+0.489430436 container remove 45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_taussig, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:11:34 compute-0 systemd[1]: libpod-conmon-45d68a8cbe44c23868d2bb02100a54977a3f835dc1a0e086deb5e60bf5f68b40.scope: Deactivated successfully.
Feb 28 10:11:34 compute-0 sudo[296372]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Feb 28 10:11:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/975269225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:34 compute-0 sudo[296551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:11:34 compute-0 sudo[296551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:34 compute-0 sudo[296551]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:34 compute-0 sudo[296576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:11:34 compute-0 sudo[296576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.253 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Creating config drive at /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.261 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsweco1mg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.407 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsweco1mg" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.435 243456 DEBUG nova.storage.rbd_utils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.442 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.462360761 +0000 UTC m=+0.050671864 container create a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:11:34 compute-0 systemd[1]: Started libpod-conmon-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope.
Feb 28 10:11:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.443280215 +0000 UTC m=+0.031591338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.555664782 +0000 UTC m=+0.143975905 container init a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.563851052 +0000 UTC m=+0.152162145 container start a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.567681119 +0000 UTC m=+0.155992202 container attach a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:11:34 compute-0 awesome_khorana[296652]: 167 167
Feb 28 10:11:34 compute-0 systemd[1]: libpod-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope: Deactivated successfully.
Feb 28 10:11:34 compute-0 conmon[296652]: conmon a6072a97313189c1ea2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope/container/memory.events
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.572301929 +0000 UTC m=+0.160613022 container died a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b26095004ee561d7f1bd8c94279f6a6aecfbfbb2bfdcfc1a0d78b473de936fc-merged.mount: Deactivated successfully.
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.601 243456 DEBUG oslo_concurrency.processutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.602 243456 INFO nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deleting local config drive /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3/disk.config because it was imported into RBD.
Feb 28 10:11:34 compute-0 podman[296616]: 2026-02-28 10:11:34.608432074 +0000 UTC m=+0.196743167 container remove a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:11:34 compute-0 systemd[1]: libpod-conmon-a6072a97313189c1ea2ec1814ef5ed3deb8f75c32926908aa248f2e0a5d74a61.scope: Deactivated successfully.
Feb 28 10:11:34 compute-0 kernel: tap2c3f8e94-02: entered promiscuous mode
Feb 28 10:11:34 compute-0 NetworkManager[49805]: <info>  [1772273494.6573] manager: (tap2c3f8e94-02): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Feb 28 10:11:34 compute-0 ovn_controller[146846]: 2026-02-28T10:11:34Z|00504|binding|INFO|Claiming lport 2c3f8e94-025d-4aea-97be-c325f6366e0d for this chassis.
Feb 28 10:11:34 compute-0 ovn_controller[146846]: 2026-02-28T10:11:34Z|00505|binding|INFO|2c3f8e94-025d-4aea-97be-c325f6366e0d: Claiming fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.669 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updated VIF entry in instance network info cache for port 2c3f8e94-025d-4aea-97be-c325f6366e0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.672 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:e3:c3 10.100.0.7'], port_security=['fa:16:3e:81:e3:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2c3f8e94-025d-4aea-97be-c325f6366e0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.673 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2c3f8e94-025d-4aea-97be-c325f6366e0d in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.675 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.676 243456 DEBUG nova.network.neutron [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [{"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:34 compute-0 ovn_controller[146846]: 2026-02-28T10:11:34Z|00506|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d ovn-installed in OVS
Feb 28 10:11:34 compute-0 ovn_controller[146846]: 2026-02-28T10:11:34Z|00507|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d up in Southbound
Feb 28 10:11:34 compute-0 systemd-machined[209480]: New machine qemu-69-instance-0000003e.
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.697 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8338103-e402-40d0-933d-0c0dd86f1ed0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.700 243456 DEBUG oslo_concurrency.lockutils [req-b08bdc0d-8a8f-47bf-ad92-255bb72b011f req-2214d759-d5c6-4c05-be12-bdeda6cd4bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:34 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Feb 28 10:11:34 compute-0 systemd-udevd[296700]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:34 compute-0 NetworkManager[49805]: <info>  [1772273494.7268] device (tap2c3f8e94-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:11:34 compute-0 NetworkManager[49805]: <info>  [1772273494.7276] device (tap2c3f8e94-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.730 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3e65121c-250f-40fe-81e8-4dced3cca489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.735 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[02021e42-9f26-464f-b0bb-1b4265097a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.777 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec99d611-a998-4f59-9a2a-496413c4233c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7de45a39-0d75-4419-ae18-522c147b5689]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296728, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 podman[296714]: 2026-02-28 10:11:34.804575342 +0000 UTC m=+0.046784755 container create 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4e87d6-09c9-4b1c-b615-f6e999ff84ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296729, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296729, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:34 compute-0 nova_compute[243452]: 2026-02-28 10:11:34.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:34.822 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:34 compute-0 systemd[1]: Started libpod-conmon-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope.
Feb 28 10:11:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:34 compute-0 podman[296714]: 2026-02-28 10:11:34.788325376 +0000 UTC m=+0.030534819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:11:34 compute-0 podman[296714]: 2026-02-28 10:11:34.88565753 +0000 UTC m=+0.127867033 container init 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:11:34 compute-0 podman[296714]: 2026-02-28 10:11:34.896304949 +0000 UTC m=+0.138514362 container start 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:11:34 compute-0 podman[296714]: 2026-02-28 10:11:34.89920536 +0000 UTC m=+0.141414793 container attach 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.071 243456 DEBUG nova.compute.manager [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG oslo_concurrency.lockutils [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.072 243456 DEBUG nova.compute.manager [req-24adf661-5d6d-4a83-b287-45cf4a7cfe5d req-981c9aef-83b4-4ed8-adf8-64249dfd342d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Processing event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:11:35 compute-0 ceph-mon[76304]: pgmap v1359: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.489 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.4892802, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.491 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Started (Lifecycle Event)
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.493 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.497 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.501 243456 INFO nova.virt.libvirt.driver [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance spawned successfully.
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.501 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.566 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.570 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:35 compute-0 lvm[296854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:11:35 compute-0 lvm[296854]: VG ceph_vg0 finished
Feb 28 10:11:35 compute-0 lvm[296856]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:11:35 compute-0 lvm[296856]: VG ceph_vg1 finished
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.597 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.598 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.599 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.599 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.600 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.600 243456 DEBUG nova.virt.libvirt.driver [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.610 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.611 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.490647, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.611 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Paused (Lifecycle Event)
Feb 28 10:11:35 compute-0 lvm[296858]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:11:35 compute-0 lvm[296858]: VG ceph_vg2 finished
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.648 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.655 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273495.4971533, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.656 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Resumed (Lifecycle Event)
Feb 28 10:11:35 compute-0 admiring_brown[296737]: {}
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.687 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.691 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.698 243456 INFO nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 7.26 seconds to spawn the instance on the hypervisor.
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.698 243456 DEBUG nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.709 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:35 compute-0 systemd[1]: libpod-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Deactivated successfully.
Feb 28 10:11:35 compute-0 systemd[1]: libpod-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Consumed 1.254s CPU time.
Feb 28 10:11:35 compute-0 podman[296714]: 2026-02-28 10:11:35.731589038 +0000 UTC m=+0.973798451 container died 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:11:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0250d3bdf11b6fcd9af5181b737ff7c470aa22cc7d8d4952b4e746718f32408-merged.mount: Deactivated successfully.
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.770 243456 INFO nova.compute.manager [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 8.45 seconds to build instance.
Feb 28 10:11:35 compute-0 podman[296714]: 2026-02-28 10:11:35.77724747 +0000 UTC m=+1.019456893 container remove 655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:11:35 compute-0 nova_compute[243452]: 2026-02-28 10:11:35.784 243456 DEBUG oslo_concurrency.lockutils [None req-d65e6b5f-f586-48f2-8b5a-def1a902a904 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:35 compute-0 systemd[1]: libpod-conmon-655eb667657bbe507029c3643321176256b99c9e3b314b896790bb57d428b02d.scope: Deactivated successfully.
Feb 28 10:11:35 compute-0 sudo[296576]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:11:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:11:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:35 compute-0 sudo[296871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:11:35 compute-0 sudo[296871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:11:35 compute-0 sudo[296871]: pam_unix(sudo:session): session closed for user root
Feb 28 10:11:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Feb 28 10:11:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:11:36 compute-0 ceph-mon[76304]: pgmap v1360: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.191 243456 DEBUG nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG oslo_concurrency.lockutils [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 DEBUG nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.192 243456 WARNING nova.compute.manager [req-a495ea97-5711-4e59-a7a6-dc2d13ea971b req-49d3b45e-adf8-49b8-9a8e-29ea279bd1b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received unexpected event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with vm_state active and task_state None.
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.564 243456 INFO nova.compute.manager [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Get console output
Feb 28 10:11:37 compute-0 nova_compute[243452]: 2026-02-28 10:11:37.571 243456 INFO oslo.privsep.daemon [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpi731famn/privsep.sock']
Feb 28 10:11:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.687 243456 INFO oslo.privsep.daemon [None req-18536f81-6840-417e-906b-35015774754e 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Spawned new privsep daemon via rootwrap
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.526 296900 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.530 296900 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.533 296900 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.533 296900 INFO oslo.privsep.daemon [-] privsep daemon running as pid 296900
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:38 compute-0 nova_compute[243452]: 2026-02-28 10:11:38.920 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:11:39 compute-0 ceph-mon[76304]: pgmap v1361: 305 pgs: 305 active+clean; 325 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 329 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Feb 28 10:11:40 compute-0 ceph-mon[76304]: pgmap v1362: 305 pgs: 305 active+clean; 329 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 160 op/s
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014983719041231532 of space, bias 1.0, pg target 0.44951157123694596 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024935186177078785 of space, bias 1.0, pg target 0.7480555853123635 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.028637363845935e-07 of space, bias 4.0, pg target 0.0009634364836615122 quantized to 16 (current 16)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:11:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:11:40 compute-0 ovn_controller[146846]: 2026-02-28T10:11:40Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 10:11:40 compute-0 ovn_controller[146846]: 2026-02-28T10:11:40Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:55:cf 10.100.0.8
Feb 28 10:11:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 334 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 171 op/s
Feb 28 10:11:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:43 compute-0 ceph-mon[76304]: pgmap v1363: 305 pgs: 305 active+clean; 334 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.0 MiB/s wr, 171 op/s
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.410 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.411 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.439 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.529 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.529 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.537 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.537 243456 INFO nova.compute.claims [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.679 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:43 compute-0 nova_compute[243452]: 2026-02-28 10:11:43.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 347 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 154 op/s
Feb 28 10:11:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401319329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.295 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.304 243456 DEBUG nova.compute.provider_tree [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.324 243456 DEBUG nova.scheduler.client.report [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.351 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.352 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.398 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.398 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.404 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.405 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.413 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.421 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.447 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:11:44 compute-0 ceph-mon[76304]: pgmap v1364: 305 pgs: 305 active+clean; 347 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 154 op/s
Feb 28 10:11:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2401319329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.504 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.504 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.513 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.513 243456 INFO nova.compute.claims [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.544 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.546 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.546 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating image(s)
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.573 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.605 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.634 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.638 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.665 243456 DEBUG nova.policy [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05f7daf505a349dcb8574e9ef6f061fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af2c91609b444c458a32203261ac88d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.700 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.700 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.701 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.701 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.722 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.726 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:44 compute-0 nova_compute[243452]: 2026-02-28 10:11:44.879 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.280 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.356 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] resizing rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.389 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Successfully created port: 52a3be20-6165-41ea-9677-2b0575c65db6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:11:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162066550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:11:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:11:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:11:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.487 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.492 243456 DEBUG nova.compute.provider_tree [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4162066550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:11:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3874446015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.542 243456 DEBUG nova.scheduler.client.report [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.553 243456 DEBUG nova.objects.instance [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.566 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.567 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Ensure instance console log exists: /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.568 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.569 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.570 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.621 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.621 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.652 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.673 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.761 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.763 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.764 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating image(s)
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.789 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.819 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.846 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.851 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.898 243456 DEBUG nova.policy [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48e14a77ec8842f98a0d2efc6d5e167f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cffbbb9857954b188c363e9565817bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.933 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.934 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.935 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.935 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.970 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:45 compute-0 nova_compute[243452]: 2026-02-28 10:11:45.976 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 374 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.202 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Successfully updated port: 52a3be20-6165-41ea-9677-2b0575c65db6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.223 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.224 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquired lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.224 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.303 243456 DEBUG nova.compute.manager [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-changed-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.304 243456 DEBUG nova.compute.manager [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Refreshing instance network info cache due to event network-changed-52a3be20-6165-41ea-9677-2b0575c65db6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.304 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.365 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:11:46 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.470 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Successfully created port: 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:11:46 compute-0 ceph-mon[76304]: pgmap v1365: 305 pgs: 305 active+clean; 374 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.685 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:46 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.764 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] resizing rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.872 243456 DEBUG nova.objects.instance [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.892 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.893 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Ensure instance console log exists: /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.893 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.894 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:46 compute-0 nova_compute[243452]: 2026-02-28 10:11:46.894 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.168 243456 DEBUG nova.network.neutron [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.197 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Releasing lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance network_info: |[{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.198 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Refreshing network info cache for port 52a3be20-6165-41ea-9677-2b0575c65db6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.203 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start _get_guest_xml network_info=[{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.208 243456 WARNING nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.214 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.215 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.218 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.218 243456 DEBUG nova.virt.libvirt.host [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.219 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.220 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.221 243456 DEBUG nova.virt.hardware [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.224 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.558 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Successfully updated port: 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.586 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.708 243456 DEBUG nova.compute.manager [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.709 243456 DEBUG nova.compute.manager [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing instance network info cache due to event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.710 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969045207' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.821 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.848 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.853 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:47 compute-0 ovn_controller[146846]: 2026-02-28T10:11:47Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 10:11:47 compute-0 ovn_controller[146846]: 2026-02-28T10:11:47Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:e3:c3 10.100.0.7
Feb 28 10:11:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/969045207' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:47 compute-0 nova_compute[243452]: 2026-02-28 10:11:47.900 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:11:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 420 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.245 243456 DEBUG nova.objects.instance [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'flavor' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.273 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.273 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516329336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.438 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.440 243456 DEBUG nova.virt.libvirt.vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:44Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.440 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.441 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.442 243456 DEBUG nova.objects.instance [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.460 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <uuid>59d0cb01-5644-425e-82b1-b79cf4265dfb</uuid>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <name>instance-0000003f</name>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-536333370</nova:name>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:11:47</nova:creationTime>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:user uuid="05f7daf505a349dcb8574e9ef6f061fb">tempest-InstanceActionsV221TestJSON-644416342-project-member</nova:user>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:project uuid="af2c91609b444c458a32203261ac88d3">tempest-InstanceActionsV221TestJSON-644416342</nova:project>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <nova:port uuid="52a3be20-6165-41ea-9677-2b0575c65db6">
Feb 28 10:11:48 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <system>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="serial">59d0cb01-5644-425e-82b1-b79cf4265dfb</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="uuid">59d0cb01-5644-425e-82b1-b79cf4265dfb</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </system>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <os>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </os>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <features>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </features>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/59d0cb01-5644-425e-82b1-b79cf4265dfb_disk">
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config">
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7d:57:92"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <target dev="tap52a3be20-61"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/console.log" append="off"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <video>
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </video>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:11:48 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:11:48 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:11:48 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:11:48 compute-0 nova_compute[243452]: </domain>
Feb 28 10:11:48 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Preparing to wait for external event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.461 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.462 243456 DEBUG nova.virt.libvirt.vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:44Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.462 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG nova.network.os_vif_util [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG os_vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.467 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.467 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a3be20-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.468 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52a3be20-61, col_values=(('external_ids', {'iface-id': '52a3be20-6165-41ea-9677-2b0575c65db6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:57:92', 'vm-uuid': '59d0cb01-5644-425e-82b1-b79cf4265dfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:48 compute-0 NetworkManager[49805]: <info>  [1772273508.4713] manager: (tap52a3be20-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.477 243456 INFO os_vif [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61')
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.540 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] No VIF found with MAC fa:16:3e:7d:57:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.541 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Using config drive
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.564 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:48 compute-0 nova_compute[243452]: 2026-02-28 10:11:48.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:48 compute-0 ceph-mon[76304]: pgmap v1366: 305 pgs: 305 active+clean; 420 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Feb 28 10:11:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3516329336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.355 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Creating config drive at /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.359 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpljztfuaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.396 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updated VIF entry in instance network info cache for port 52a3be20-6165-41ea-9677-2b0575c65db6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.397 243456 DEBUG nova.network.neutron [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [{"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.414 243456 DEBUG oslo_concurrency.lockutils [req-dad6faa5-2c82-4bcb-b60e-7d0cda41e0c0 req-34a5c7a0-5b57-4b7c-a60b-965a658f48c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-59d0cb01-5644-425e-82b1-b79cf4265dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.500 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpljztfuaj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.527 243456 DEBUG nova.storage.rbd_utils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] rbd image 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.531 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.680 243456 DEBUG oslo_concurrency.processutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config 59d0cb01-5644-425e-82b1-b79cf4265dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.681 243456 INFO nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deleting local config drive /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb/disk.config because it was imported into RBD.
Feb 28 10:11:49 compute-0 kernel: tap52a3be20-61: entered promiscuous mode
Feb 28 10:11:49 compute-0 NetworkManager[49805]: <info>  [1772273509.7357] manager: (tap52a3be20-61): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:49 compute-0 ovn_controller[146846]: 2026-02-28T10:11:49Z|00508|binding|INFO|Claiming lport 52a3be20-6165-41ea-9677-2b0575c65db6 for this chassis.
Feb 28 10:11:49 compute-0 ovn_controller[146846]: 2026-02-28T10:11:49Z|00509|binding|INFO|52a3be20-6165-41ea-9677-2b0575c65db6: Claiming fa:16:3e:7d:57:92 10.100.0.4
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.746 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:57:92 10.100.0.4'], port_security=['fa:16:3e:7d:57:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '59d0cb01-5644-425e-82b1-b79cf4265dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c61437-7ddd-45da-a105-90d1e0bc7134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2c91609b444c458a32203261ac88d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a493f80-3664-4942-99b2-7d547baf2c48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8acc948-8b91-464a-8bb5-c73edb1af1bf, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52a3be20-6165-41ea-9677-2b0575c65db6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:49 compute-0 ovn_controller[146846]: 2026-02-28T10:11:49Z|00510|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 ovn-installed in OVS
Feb 28 10:11:49 compute-0 ovn_controller[146846]: 2026-02-28T10:11:49Z|00511|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 up in Southbound
Feb 28 10:11:49 compute-0 nova_compute[243452]: 2026-02-28 10:11:49.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.749 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52a3be20-6165-41ea-9677-2b0575c65db6 in datapath 81c61437-7ddd-45da-a105-90d1e0bc7134 bound to our chassis
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.752 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81c61437-7ddd-45da-a105-90d1e0bc7134
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.765 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e139b133-d8b6-4a8a-9a61-12ee3807422c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.767 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81c61437-71 in ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.771 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81c61437-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aeabf7-0a75-4dbf-8b68-36c025724b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 systemd-udevd[297415]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bace6d2b-773f-4ef9-be96-db365402b590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 systemd-machined[209480]: New machine qemu-70-instance-0000003f.
Feb 28 10:11:49 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.783 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[429eff71-d6ec-44ed-a2b9-dec7444de1ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 NetworkManager[49805]: <info>  [1772273509.7882] device (tap52a3be20-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:11:49 compute-0 NetworkManager[49805]: <info>  [1772273509.7889] device (tap52a3be20-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.798 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cea6a5b6-4479-47d0-a1ba-cd53592a5e47]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6d131e4e-b0e8-495c-93e0-f144e292276f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 NetworkManager[49805]: <info>  [1772273509.8346] manager: (tap81c61437-70): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.834 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53bb25e2-3696-4285-8b4b-9f2370e6e637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.862 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[51f65961-f0ca-499b-bd73-2a12cd8e5759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97bd8f73-406e-40ee-bc5f-f999f6abef8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 NetworkManager[49805]: <info>  [1772273509.8847] device (tap81c61437-70): carrier: link connected
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.889 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[03a856d3-8864-468a-ba52-189f70e1c728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.908 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5d38d-095a-4385-b651-433c6df8b51b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c61437-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:40:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501717, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297447, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.925 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[129233ab-05d0-48d8-8884-bb4503070b85]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:4037'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501717, 'tstamp': 501717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297448, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e45ae643-bc9f-430e-a056-798a1fdf612c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81c61437-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:40:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501717, 'reachable_time': 35470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297449, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:49.983 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b823912b-f528-4583-9603-99e3a382c27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e914c7d1-e559-402c-bb98-f85adc4f0efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c61437-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81c61437-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:50 compute-0 kernel: tap81c61437-70: entered promiscuous mode
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:50 compute-0 NetworkManager[49805]: <info>  [1772273510.0531] manager: (tap81c61437-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81c61437-70, col_values=(('external_ids', {'iface-id': 'db035a80-54f4-4899-9603-57d2e9388511'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.058 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:50 compute-0 ovn_controller[146846]: 2026-02-28T10:11:50Z|00512|binding|INFO|Releasing lport db035a80-54f4-4899-9603-57d2e9388511 from this chassis (sb_readonly=0)
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.060 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c12f472-b6d3-445b-b3e4-e67a52d414ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.062 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-81c61437-7ddd-45da-a105-90d1e0bc7134
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/81c61437-7ddd-45da-a105-90d1e0bc7134.pid.haproxy
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 81c61437-7ddd-45da-a105-90d1e0bc7134
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:11:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:50.063 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'env', 'PROCESS_TAG=haproxy-81c61437-7ddd-45da-a105-90d1e0bc7134', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81c61437-7ddd-45da-a105-90d1e0bc7134.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 459 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.9 MiB/s wr, 173 op/s
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.111 243456 DEBUG nova.network.neutron [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance network_info: |[{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.139 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.140 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.143 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Start _get_guest_xml network_info=[{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.144 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.1433833, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.144 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Started (Lifecycle Event)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.149 243456 WARNING nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.154 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.154 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.host [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.157 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.158 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.159 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.160 243456 DEBUG nova.virt.hardware [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.163 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.210 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.216 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.1458104, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.217 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Paused (Lifecycle Event)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.243 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.246 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.271 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.320 243456 DEBUG nova.compute.manager [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.321 243456 DEBUG oslo_concurrency.lockutils [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.322 243456 DEBUG nova.compute.manager [req-21fea942-65af-41a3-895d-c97f45441bb4 req-ff403c9f-0f49-4b8d-a3b5-6bbac951be10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Processing event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.322 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.328 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273510.328399, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.329 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Resumed (Lifecycle Event)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.336 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.348 243456 INFO nova.virt.libvirt.driver [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance spawned successfully.
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.349 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.375 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.389 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.389 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.392 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.393 243456 DEBUG nova.virt.libvirt.driver [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.402 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:50 compute-0 podman[297543]: 2026-02-28 10:11:50.451590187 +0000 UTC m=+0.064657987 container create ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.455 243456 INFO nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 5.91 seconds to spawn the instance on the hypervisor.
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.455 243456 DEBUG nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.472 243456 DEBUG nova.network.neutron [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:50 compute-0 systemd[1]: Started libpod-conmon-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope.
Feb 28 10:11:50 compute-0 podman[297543]: 2026-02-28 10:11:50.418841947 +0000 UTC m=+0.031909757 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:11:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d333aed14cb07f78bcef36484a3c058b6aad07f16329dc547e47f3c304a89cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.548 243456 INFO nova.compute.manager [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 7.05 seconds to build instance.
Feb 28 10:11:50 compute-0 podman[297543]: 2026-02-28 10:11:50.550794253 +0000 UTC m=+0.163862123 container init ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 10:11:50 compute-0 podman[297543]: 2026-02-28 10:11:50.55601249 +0000 UTC m=+0.169080320 container start ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.570 243456 DEBUG oslo_concurrency.lockutils [None req-3e70f508-b6a8-4623-abae-d2c682d2a984 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:50 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : New worker (297565) forked
Feb 28 10:11:50 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : Loading success.
Feb 28 10:11:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484140731' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.770 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.799 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:50 compute-0 nova_compute[243452]: 2026-02-28 10:11:50.805 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:51 compute-0 ceph-mon[76304]: pgmap v1367: 305 pgs: 305 active+clean; 459 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.9 MiB/s wr, 173 op/s
Feb 28 10:11:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/484140731' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.146 243456 DEBUG nova.compute.manager [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.149 243456 DEBUG nova.compute.manager [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.150 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912729658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.432 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.436 243456 DEBUG nova.virt.libvirt.vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:45Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.437 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.439 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.441 243456 DEBUG nova.objects.instance [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.460 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <uuid>56badf5b-d05a-4123-b43c-087a91e0e3b6</uuid>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <name>instance-00000040</name>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherB-server-440852658</nova:name>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:11:50</nova:creationTime>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <nova:port uuid="2a10bc2f-52a7-47e7-a308-eaa218f335b6">
Feb 28 10:11:51 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <system>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="serial">56badf5b-d05a-4123-b43c-087a91e0e3b6</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="uuid">56badf5b-d05a-4123-b43c-087a91e0e3b6</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </system>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <os>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </os>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <features>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </features>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk">
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config">
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:11:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:25:29:e4"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <target dev="tap2a10bc2f-52"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/console.log" append="off"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <video>
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </video>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:11:51 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:11:51 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:11:51 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:11:51 compute-0 nova_compute[243452]: </domain>
Feb 28 10:11:51 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.470 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Preparing to wait for external event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.471 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.472 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.472 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.474 243456 DEBUG nova.virt.libvirt.vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:11:45Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.475 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.476 243456 DEBUG nova.network.os_vif_util [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.477 243456 DEBUG os_vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.479 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.480 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.484 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a10bc2f-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.485 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a10bc2f-52, col_values=(('external_ids', {'iface-id': '2a10bc2f-52a7-47e7-a308-eaa218f335b6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:29:e4', 'vm-uuid': '56badf5b-d05a-4123-b43c-087a91e0e3b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:51 compute-0 NetworkManager[49805]: <info>  [1772273511.4884] manager: (tap2a10bc2f-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.496 243456 INFO os_vif [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52')
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.551 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.552 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.553 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:25:29:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.553 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Using config drive
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.581 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.926 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updated VIF entry in instance network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.927 243456 DEBUG nova.network.neutron [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:51 compute-0 nova_compute[243452]: 2026-02-28 10:11:51.946 243456 DEBUG oslo_concurrency.lockutils [req-da25178a-6360-4ac8-9f53-1fd239519ef1 req-b47503f1-7136-46c6-b850-14457202d001 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.025 243456 DEBUG nova.network.neutron [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 481 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.6 MiB/s wr, 193 op/s
Feb 28 10:11:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1912729658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.205 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Creating config drive at /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.209 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppgkcws94 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.358 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppgkcws94" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.478 243456 DEBUG nova.storage.rbd_utils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.494 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.556 243456 DEBUG oslo_concurrency.lockutils [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.557 243456 DEBUG nova.compute.manager [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.557 243456 DEBUG nova.compute.manager [None req-d8a08f2d-e2b4-47ac-b98e-e31a721fe8dd 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] network_info to inject: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.561 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.561 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.669 243456 DEBUG oslo_concurrency.processutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config 56badf5b-d05a-4123-b43c-087a91e0e3b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.670 243456 INFO nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deleting local config drive /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6/disk.config because it was imported into RBD.
Feb 28 10:11:52 compute-0 NetworkManager[49805]: <info>  [1772273512.7248] manager: (tap2a10bc2f-52): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Feb 28 10:11:52 compute-0 kernel: tap2a10bc2f-52: entered promiscuous mode
Feb 28 10:11:52 compute-0 systemd-udevd[297436]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:52 compute-0 ovn_controller[146846]: 2026-02-28T10:11:52Z|00513|binding|INFO|Claiming lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 for this chassis.
Feb 28 10:11:52 compute-0 ovn_controller[146846]: 2026-02-28T10:11:52Z|00514|binding|INFO|2a10bc2f-52a7-47e7-a308-eaa218f335b6: Claiming fa:16:3e:25:29:e4 10.100.0.6
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.738 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:29:e4 10.100.0.6'], port_security=['fa:16:3e:25:29:e4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56badf5b-d05a-4123-b43c-087a91e0e3b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2a10bc2f-52a7-47e7-a308-eaa218f335b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.739 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:11:52 compute-0 ovn_controller[146846]: 2026-02-28T10:11:52Z|00515|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 ovn-installed in OVS
Feb 28 10:11:52 compute-0 ovn_controller[146846]: 2026-02-28T10:11:52Z|00516|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 up in Southbound
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:52 compute-0 NetworkManager[49805]: <info>  [1772273512.7462] device (tap2a10bc2f-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:11:52 compute-0 NetworkManager[49805]: <info>  [1772273512.7467] device (tap2a10bc2f-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.763 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7254812-8ecb-4e62-8127-d090df8fb515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 systemd-machined[209480]: New machine qemu-71-instance-00000040.
Feb 28 10:11:52 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.795 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[126afbeb-e426-4bfe-8811-530e2ddea327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.800 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ced0625-3003-4cee-84ea-d35b53f9e664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c83af92-f82b-404b-bca6-32ad5c072185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee488bf8-4a00-4b5d-ae5e-040fae3674d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297704, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f14cd5-cafc-4c9e-bfeb-f9866c58a2ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297705, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297705, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.883 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:52 compute-0 nova_compute[243452]: 2026-02-28 10:11:52.887 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.888 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:52.888 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:11:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:53 compute-0 ceph-mon[76304]: pgmap v1368: 305 pgs: 305 active+clean; 481 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.6 MiB/s wr, 193 op/s
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.329 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273513.3291252, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Started (Lifecycle Event)
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.350 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.356 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273513.333007, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.357 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Paused (Lifecycle Event)
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.379 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.409 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.429 243456 DEBUG nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.429 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG oslo_concurrency.lockutils [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 DEBUG nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.430 243456 WARNING nova.compute.manager [req-76aa7c76-7771-4b89-a00b-c6ebf93b6b6c req-f7a6ed5a-a6e2-4ecf-ba55-6ee76289f6e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state active and task_state None.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.577 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.579 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.579 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.580 243456 INFO nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Terminating instance
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.581 243456 DEBUG nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:11:53 compute-0 kernel: tap52a3be20-61 (unregistering): left promiscuous mode
Feb 28 10:11:53 compute-0 NetworkManager[49805]: <info>  [1772273513.6524] device (tap52a3be20-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 ovn_controller[146846]: 2026-02-28T10:11:53Z|00517|binding|INFO|Releasing lport 52a3be20-6165-41ea-9677-2b0575c65db6 from this chassis (sb_readonly=0)
Feb 28 10:11:53 compute-0 ovn_controller[146846]: 2026-02-28T10:11:53Z|00518|binding|INFO|Setting lport 52a3be20-6165-41ea-9677-2b0575c65db6 down in Southbound
Feb 28 10:11:53 compute-0 ovn_controller[146846]: 2026-02-28T10:11:53Z|00519|binding|INFO|Removing iface tap52a3be20-61 ovn-installed in OVS
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.668 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:57:92 10.100.0.4'], port_security=['fa:16:3e:7d:57:92 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '59d0cb01-5644-425e-82b1-b79cf4265dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81c61437-7ddd-45da-a105-90d1e0bc7134', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af2c91609b444c458a32203261ac88d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a493f80-3664-4942-99b2-7d547baf2c48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8acc948-8b91-464a-8bb5-c73edb1af1bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52a3be20-6165-41ea-9677-2b0575c65db6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.669 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52a3be20-6165-41ea-9677-2b0575c65db6 in datapath 81c61437-7ddd-45da-a105-90d1e0bc7134 unbound from our chassis
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81c61437-7ddd-45da-a105-90d1e0bc7134, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58856626-db38-4752-aa28-0ef7fa0d63e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.673 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 namespace which is not needed anymore
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 28 10:11:53 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 3.655s CPU time.
Feb 28 10:11:53 compute-0 systemd-machined[209480]: Machine qemu-70-instance-0000003f terminated.
Feb 28 10:11:53 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : haproxy version is 2.8.14-c23fe91
Feb 28 10:11:53 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [NOTICE]   (297563) : path to executable is /usr/sbin/haproxy
Feb 28 10:11:53 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [WARNING]  (297563) : Exiting Master process...
Feb 28 10:11:53 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [ALERT]    (297563) : Current worker (297565) exited with code 143 (Terminated)
Feb 28 10:11:53 compute-0 neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134[297558]: [WARNING]  (297563) : All workers exited. Exiting... (0)
Feb 28 10:11:53 compute-0 systemd[1]: libpod-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope: Deactivated successfully.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.827 243456 INFO nova.virt.libvirt.driver [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Instance destroyed successfully.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.828 243456 DEBUG nova.objects.instance [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lazy-loading 'resources' on Instance uuid 59d0cb01-5644-425e-82b1-b79cf4265dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:53 compute-0 podman[297768]: 2026-02-28 10:11:53.830622916 +0000 UTC m=+0.049907792 container died ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.857 243456 DEBUG nova.virt.libvirt.vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-536333370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-536333370',id=63,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af2c91609b444c458a32203261ac88d3',ramdisk_id='',reservation_id='r-mp5idvof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-InstanceActionsV221TestJSON-644416342',owner_user_name='tempest-InstanceActionsV221TestJSON-644416342-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:50Z,user_data=None,user_id='05f7daf505a349dcb8574e9ef6f061fb',uuid=59d0cb01-5644-425e-82b1-b79cf4265dfb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.858 243456 DEBUG nova.network.os_vif_util [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converting VIF {"id": "52a3be20-6165-41ea-9677-2b0575c65db6", "address": "fa:16:3e:7d:57:92", "network": {"id": "81c61437-7ddd-45da-a105-90d1e0bc7134", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1204378504-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af2c91609b444c458a32203261ac88d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a3be20-61", "ovs_interfaceid": "52a3be20-6165-41ea-9677-2b0575c65db6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.859 243456 DEBUG nova.network.os_vif_util [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.859 243456 DEBUG os_vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1-userdata-shm.mount: Deactivated successfully.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.861 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a3be20-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d333aed14cb07f78bcef36484a3c058b6aad07f16329dc547e47f3c304a89cb-merged.mount: Deactivated successfully.
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.868 243456 INFO os_vif [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:57:92,bridge_name='br-int',has_traffic_filtering=True,id=52a3be20-6165-41ea-9677-2b0575c65db6,network=Network(81c61437-7ddd-45da-a105-90d1e0bc7134),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a3be20-61')
Feb 28 10:11:53 compute-0 podman[297768]: 2026-02-28 10:11:53.88097216 +0000 UTC m=+0.100257036 container cleanup ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:11:53 compute-0 systemd[1]: libpod-conmon-ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1.scope: Deactivated successfully.
Feb 28 10:11:53 compute-0 podman[297820]: 2026-02-28 10:11:53.974814156 +0000 UTC m=+0.070345677 container remove ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.979 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e27b60f-06c7-485d-b55c-f476b6bde3a1]: (4, ('Sat Feb 28 10:11:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 (ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1)\ned7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1\nSat Feb 28 10:11:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 (ed7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1)\ned7311be269cb99d4f9f9515b1ca0155f8644e5bebfb3a56d6451f02b84e59e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37ddaace-d690-4a6f-b7f2-e785bc4dcf58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.982 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81c61437-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:11:53 compute-0 kernel: tap81c61437-70: left promiscuous mode
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 nova_compute[243452]: 2026-02-28 10:11:53.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:53.995 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be2bbf08-347f-4d66-9fe6-f673bd1cf1b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e12f7c89-49f2-40cc-a282-794103089b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.012 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db608de0-0f3d-451d-8903-0bd51b08c845]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.029 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59d009fe-5d45-4a21-b424-fa44ac844058]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501710, 'reachable_time': 33165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297838, 'error': None, 'target': 'ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.032 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81c61437-7ddd-45da-a105-90d1e0bc7134 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:11:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:54.032 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d574115d-0ebf-40bc-bfd8-363e68eb063f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:11:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d81c61437\x2d7ddd\x2d45da\x2da105\x2d90d1e0bc7134.mount: Deactivated successfully.
Feb 28 10:11:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 484 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.094 243456 DEBUG nova.objects.instance [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'flavor' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.116 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.256 243456 INFO nova.virt.libvirt.driver [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deleting instance files /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb_del
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.257 243456 INFO nova.virt.libvirt.driver [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deletion of /var/lib/nova/instances/59d0cb01-5644-425e-82b1-b79cf4265dfb_del complete
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.312 243456 INFO nova.compute.manager [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 0.73 seconds to destroy the instance on the hypervisor.
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.313 243456 DEBUG oslo.service.loopingcall [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.314 243456 DEBUG nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.314 243456 DEBUG nova.network.neutron [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.332 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.333 243456 DEBUG nova.network.neutron [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.347 243456 DEBUG oslo_concurrency.lockutils [req-7ae39e46-19ff-47a6-a06c-fe295c313563 req-e4c92ea7-7ffc-4542-9142-6502dd1a7597 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:54 compute-0 nova_compute[243452]: 2026-02-28 10:11:54.348 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.077 243456 DEBUG nova.network.neutron [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.095 243456 INFO nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Took 0.78 seconds to deallocate network for instance.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.155 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.156 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:55 compute-0 ceph-mon[76304]: pgmap v1369: 305 pgs: 305 active+clean; 484 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.1 MiB/s wr, 198 op/s
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.191 243456 DEBUG nova.network.neutron [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.253 243456 DEBUG nova.compute.manager [req-c816ddcf-488c-4c68-900c-4dd2ee8a28b0 req-b7eeaf59-7508-4d11-8337-1c8435bc05d6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-deleted-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.292 243456 DEBUG oslo_concurrency.processutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.532 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.535 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.535 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Processing event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.536 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state building and task_state spawning.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.537 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.538 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-unplugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state deleted and task_state None.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG oslo_concurrency.lockutils [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.539 243456 DEBUG nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] No waiting events found dispatching network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.540 243456 WARNING nova.compute.manager [req-1336f97a-2193-4ff3-8a4f-5f00a7a6ed57 req-cd12fd3b-8c6f-47b3-8eaa-2cd1def05e65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Received unexpected event network-vif-plugged-52a3be20-6165-41ea-9677-2b0575c65db6 for instance with vm_state deleted and task_state None.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.540 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.561 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273515.5443044, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.561 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Resumed (Lifecycle Event)
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.563 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.588 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance spawned successfully.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.588 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.607 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.817 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.818 243456 DEBUG nova.virt.libvirt.driver [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.859 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:11:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:11:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572110918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.894 243456 INFO nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Took 10.13 seconds to spawn the instance on the hypervisor.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.894 243456 DEBUG nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.907 243456 DEBUG oslo_concurrency.processutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.912 243456 DEBUG nova.compute.provider_tree [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.935 243456 DEBUG nova.scheduler.client.report [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.965 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.968 243456 INFO nova.compute.manager [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Took 11.49 seconds to build instance.
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.986 243456 DEBUG oslo_concurrency.lockutils [None req-f6d9f081-30ca-418d-8ff5-4af433ccac9b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:55 compute-0 nova_compute[243452]: 2026-02-28 10:11:55.999 243456 INFO nova.scheduler.client.report [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Deleted allocations for instance 59d0cb01-5644-425e-82b1-b79cf4265dfb
Feb 28 10:11:56 compute-0 nova_compute[243452]: 2026-02-28 10:11:56.062 243456 DEBUG oslo_concurrency.lockutils [None req-7af860ff-c9d0-4b57-aa94-01c2780b97a0 05f7daf505a349dcb8574e9ef6f061fb af2c91609b444c458a32203261ac88d3 - - default default] Lock "59d0cb01-5644-425e-82b1-b79cf4265dfb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 463 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 231 op/s
Feb 28 10:11:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/572110918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:11:56 compute-0 nova_compute[243452]: 2026-02-28 10:11:56.567 243456 DEBUG nova.network.neutron [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:11:56 compute-0 nova_compute[243452]: 2026-02-28 10:11:56.584 243456 DEBUG oslo_concurrency.lockutils [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:11:56 compute-0 nova_compute[243452]: 2026-02-28 10:11:56.585 243456 DEBUG nova.compute.manager [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 28 10:11:56 compute-0 nova_compute[243452]: 2026-02-28 10:11:56.585 243456 DEBUG nova.compute.manager [None req-fba13b83-7b47-4242-9841-d655ffc52dd5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] network_info to inject: |[{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 28 10:11:57 compute-0 ceph-mon[76304]: pgmap v1370: 305 pgs: 305 active+clean; 463 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 231 op/s
Feb 28 10:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:11:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:11:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:11:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 227 op/s
Feb 28 10:11:58 compute-0 nova_compute[243452]: 2026-02-28 10:11:58.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:58 compute-0 nova_compute[243452]: 2026-02-28 10:11:58.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:11:59 compute-0 ceph-mon[76304]: pgmap v1371: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 227 op/s
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.034 243456 DEBUG nova.compute.manager [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.034 243456 DEBUG nova.compute.manager [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing instance network info cache due to event network-changed-52c9c534-2d55-4b9a-bd70-a8114ac975c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.035 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Refreshing network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 249 op/s
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.536 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.537 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.537 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.538 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.538 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.539 243456 INFO nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Terminating instance
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.540 243456 DEBUG nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:12:00 compute-0 kernel: tap52c9c534-2d (unregistering): left promiscuous mode
Feb 28 10:12:00 compute-0 NetworkManager[49805]: <info>  [1772273520.5983] device (tap52c9c534-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:12:00 compute-0 ovn_controller[146846]: 2026-02-28T10:12:00Z|00520|binding|INFO|Releasing lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 from this chassis (sb_readonly=0)
Feb 28 10:12:00 compute-0 ovn_controller[146846]: 2026-02-28T10:12:00Z|00521|binding|INFO|Setting lport 52c9c534-2d55-4b9a-bd70-a8114ac975c6 down in Southbound
Feb 28 10:12:00 compute-0 ovn_controller[146846]: 2026-02-28T10:12:00Z|00522|binding|INFO|Removing iface tap52c9c534-2d ovn-installed in OVS
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Feb 28 10:12:00 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Consumed 13.043s CPU time.
Feb 28 10:12:00 compute-0 systemd-machined[209480]: Machine qemu-68-instance-0000003d terminated.
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.671 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:55:cf 10.100.0.8'], port_security=['fa:16:3e:ec:55:cf 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc527bc2-3cc2-4ce2-b99e-24d252793d06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '676657ed4ac447c580a9480d26bd7f87', 'neutron:revision_number': '6', 'neutron:security_group_ids': '27cca83f-7b50-4cdc-b0c5-9a7ed57a0cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=527b0cac-8159-448a-a976-d464a8e38db9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52c9c534-2d55-4b9a-bd70-a8114ac975c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.672 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52c9c534-2d55-4b9a-bd70-a8114ac975c6 in datapath 0192f192-ffd8-4bb7-b267-d74d97cc6cf5 unbound from our chassis
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.674 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0192f192-ffd8-4bb7-b267-d74d97cc6cf5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.675 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6124d6b7-3700-4aba-b83c-a184ec5a349a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.676 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 namespace which is not needed anymore
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.779 243456 INFO nova.virt.libvirt.driver [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Instance destroyed successfully.
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.780 243456 DEBUG nova.objects.instance [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lazy-loading 'resources' on Instance uuid fc527bc2-3cc2-4ce2-b99e-24d252793d06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : haproxy version is 2.8.14-c23fe91
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [NOTICE]   (295869) : path to executable is /usr/sbin/haproxy
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : Exiting Master process...
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : Exiting Master process...
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [ALERT]    (295869) : Current worker (295871) exited with code 143 (Terminated)
Feb 28 10:12:00 compute-0 neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5[295865]: [WARNING]  (295869) : All workers exited. Exiting... (0)
Feb 28 10:12:00 compute-0 systemd[1]: libpod-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope: Deactivated successfully.
Feb 28 10:12:00 compute-0 podman[297885]: 2026-02-28 10:12:00.810811774 +0000 UTC m=+0.052349121 container died 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.832 243456 DEBUG nova.virt.libvirt.vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1011485011',display_name='tempest-AttachInterfacesUnderV243Test-server-1011485011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1011485011',id=61,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNjnzagFJoSFtRanAJV8Vh8se7ZkhkWvbV0r0AVyZNItt2AKKQ99gHb6oQBNip6kETClAVoOoJsFdbRjy0fqH4CfM5YyB80Yui5ULnbk8Emyxy1XpzzeCEJgH6ejC7SlXQ==',key_name='tempest-keypair-1736116569',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='676657ed4ac447c580a9480d26bd7f87',ramdisk_id='',reservation_id='r-041m6mn9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-809208084',owner_user_name='tempest-AttachInterfacesUnderV243Test-809208084-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d03368d5ddc403db8a8315dabe88681',uuid=fc527bc2-3cc2-4ce2-b99e-24d252793d06,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.834 243456 DEBUG nova.network.os_vif_util [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converting VIF {"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.835 243456 DEBUG nova.network.os_vif_util [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.835 243456 DEBUG os_vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.839 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52c9c534-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.845 243456 INFO os_vif [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:55:cf,bridge_name='br-int',has_traffic_filtering=True,id=52c9c534-2d55-4b9a-bd70-a8114ac975c6,network=Network(0192f192-ffd8-4bb7-b267-d74d97cc6cf5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52c9c534-2d')
Feb 28 10:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625-userdata-shm.mount: Deactivated successfully.
Feb 28 10:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-abd06fea17760e87046dfe6bdf5d2603ded7e2ad74c164232a00dda0bdcd2728-merged.mount: Deactivated successfully.
Feb 28 10:12:00 compute-0 podman[297885]: 2026-02-28 10:12:00.887172839 +0000 UTC m=+0.128710186 container cleanup 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:00 compute-0 systemd[1]: libpod-conmon-00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625.scope: Deactivated successfully.
Feb 28 10:12:00 compute-0 podman[297942]: 2026-02-28 10:12:00.970239622 +0000 UTC m=+0.057503926 container remove 00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.979 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[612769d4-e521-42e7-8cae-0f00c5da9f2d]: (4, ('Sat Feb 28 10:12:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 (00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625)\n00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625\nSat Feb 28 10:12:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 (00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625)\n00ce0e967b03ee68f72746215e468aebbb15f55fd33ca6577336beb9162c0625\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.982 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51090b13-bdd3-462c-ba99-d14e9c28159c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.983 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0192f192-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 kernel: tap0192f192-f0: left promiscuous mode
Feb 28 10:12:00 compute-0 nova_compute[243452]: 2026-02-28 10:12:00.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:00.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[845108cc-298a-4132-a2b5-ea6429eced39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.022 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d80d25f3-c24e-4938-aeb4-43013290bee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1131701-6ffa-4a6b-b0e6-44b795946e4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.040 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a73dc29-9839-485f-a8ad-90a0a284d1bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499450, 'reachable_time': 35700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297957, 'error': None, 'target': 'ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.043 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0192f192-ffd8-4bb7-b267-d74d97cc6cf5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:12:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:01.043 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[727d1d55-78fe-413c-94fb-e8cbf4d48aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d0192f192\x2dffd8\x2d4bb7\x2db267\x2dd74d97cc6cf5.mount: Deactivated successfully.
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.144 243456 INFO nova.compute.manager [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Pausing
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.145 243456 DEBUG nova.objects.instance [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'flavor' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.181 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273521.1814907, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.182 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Paused (Lifecycle Event)
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.184 243456 DEBUG nova.compute.manager [None req-201d74ce-cf60-42a0-a1ac-43804dffa5cc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.199 243456 INFO nova.virt.libvirt.driver [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deleting instance files /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06_del
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.200 243456 INFO nova.virt.libvirt.driver [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deletion of /var/lib/nova/instances/fc527bc2-3cc2-4ce2-b99e-24d252793d06_del complete
Feb 28 10:12:01 compute-0 ceph-mon[76304]: pgmap v1372: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 249 op/s
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.226 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.288 243456 INFO nova.compute.manager [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG oslo.service.loopingcall [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.289 243456 DEBUG nova.network.neutron [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.731 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG oslo_concurrency.lockutils [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.733 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:01 compute-0 nova_compute[243452]: 2026-02-28 10:12:01.734 243456 DEBUG nova.compute.manager [req-1a3fa58a-26ed-4e43-9c04-c505d40d4af7 req-38a73385-fe97-433d-985d-b7cd66ff00ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-unplugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:12:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1016 KiB/s wr, 219 op/s
Feb 28 10:12:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:02.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:02.476 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.496 243456 DEBUG nova.network.neutron [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.523 243456 INFO nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Took 1.23 seconds to deallocate network for instance.
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.589 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.590 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.594 243456 DEBUG nova.compute.manager [req-8e20f2da-1c7e-41b4-ab55-fb15fe272c46 req-5f564e98-f687-4a29-abd6-f0c9c6274c27 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-deleted-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.730 243456 DEBUG oslo_concurrency.processutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.982 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updated VIF entry in instance network info cache for port 52c9c534-2d55-4b9a-bd70-a8114ac975c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:02 compute-0 nova_compute[243452]: 2026-02-28 10:12:02.983 243456 DEBUG nova.network.neutron [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Updating instance_info_cache with network_info: [{"id": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "address": "fa:16:3e:ec:55:cf", "network": {"id": "0192f192-ffd8-4bb7-b267-d74d97cc6cf5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-299903042-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "676657ed4ac447c580a9480d26bd7f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52c9c534-2d", "ovs_interfaceid": "52c9c534-2d55-4b9a-bd70-a8114ac975c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.002 243456 DEBUG oslo_concurrency.lockutils [req-cc59da39-5f5f-4e54-8543-26fe1689b1eb req-eec2234f-833f-4e3d-9693-0c6af805ba9b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-fc527bc2-3cc2-4ce2-b99e-24d252793d06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:03 compute-0 podman[297980]: 2026-02-28 10:12:03.146210044 +0000 UTC m=+0.076918181 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 28 10:12:03 compute-0 podman[297979]: 2026-02-28 10:12:03.180672532 +0000 UTC m=+0.110546586 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 10:12:03 compute-0 ceph-mon[76304]: pgmap v1373: 305 pgs: 305 active+clean; 438 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1016 KiB/s wr, 219 op/s
Feb 28 10:12:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209316137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.303 243456 DEBUG oslo_concurrency.processutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.309 243456 DEBUG nova.compute.provider_tree [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.333 243456 DEBUG nova.scheduler.client.report [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.354 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.404 243456 INFO nova.scheduler.client.report [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Deleted allocations for instance fc527bc2-3cc2-4ce2-b99e-24d252793d06
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.485 243456 DEBUG oslo_concurrency.lockutils [None req-46fbd21a-6155-43bc-a29c-9c65ae9291e5 4d03368d5ddc403db8a8315dabe88681 676657ed4ac447c580a9480d26bd7f87 - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.974 243456 DEBUG nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.974 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG oslo_concurrency.lockutils [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "fc527bc2-3cc2-4ce2-b99e-24d252793d06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 DEBUG nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] No waiting events found dispatching network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:03 compute-0 nova_compute[243452]: 2026-02-28 10:12:03.975 243456 WARNING nova.compute.manager [req-0ff22998-e4c8-4526-addb-6ea115ebb08f req-6fd9909a-b946-4662-b2cf-91f0d7cadd12 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Received unexpected event network-vif-plugged-52c9c534-2d55-4b9a-bd70-a8114ac975c6 for instance with vm_state deleted and task_state None.
Feb 28 10:12:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 400 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 71 KiB/s wr, 190 op/s
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.113 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.113 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.114 243456 INFO nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Shelving
Feb 28 10:12:04 compute-0 kernel: tap2a10bc2f-52 (unregistering): left promiscuous mode
Feb 28 10:12:04 compute-0 NetworkManager[49805]: <info>  [1772273524.1778] device (tap2a10bc2f-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:12:04 compute-0 ovn_controller[146846]: 2026-02-28T10:12:04Z|00523|binding|INFO|Releasing lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 from this chassis (sb_readonly=0)
Feb 28 10:12:04 compute-0 ovn_controller[146846]: 2026-02-28T10:12:04Z|00524|binding|INFO|Setting lport 2a10bc2f-52a7-47e7-a308-eaa218f335b6 down in Southbound
Feb 28 10:12:04 compute-0 ovn_controller[146846]: 2026-02-28T10:12:04Z|00525|binding|INFO|Removing iface tap2a10bc2f-52 ovn-installed in OVS
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.198 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:29:e4 10.100.0.6'], port_security=['fa:16:3e:25:29:e4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56badf5b-d05a-4123-b43c-087a91e0e3b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2a10bc2f-52a7-47e7-a308-eaa218f335b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.201 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.204 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:12:04 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Deactivated successfully.
Feb 28 10:12:04 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Consumed 6.188s CPU time.
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2dcd51-a6cc-4956-ae62-090bc40183ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 systemd-machined[209480]: Machine qemu-71-instance-00000040 terminated.
Feb 28 10:12:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3209316137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a198e53a-3c15-4da5-a209-b2415e855808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.269 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee958317-282a-4561-97f1-80cb4c563833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.306 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34f239fe-cd15-4d38-8cc7-a5f7703cf1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.330 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[438b5ef2-ef18-40ca-8150-777d9c1332ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298036, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.355 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51a18559-faf0-4a90-b034-a86d226fa978]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298038, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298038, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.358 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.366 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.366 243456 DEBUG nova.objects.instance [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.371 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:04.372 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.693 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Beginning cold snapshot process
Feb 28 10:12:04 compute-0 nova_compute[243452]: 2026-02-28 10:12:04.889 243456 DEBUG nova.virt.libvirt.imagebackend [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:12:05 compute-0 ovn_controller[146846]: 2026-02-28T10:12:05Z|00526|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:12:05 compute-0 nova_compute[243452]: 2026-02-28 10:12:05.230 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(15e4b23e728846d999ec44c208d8074e) on rbd image(56badf5b-d05a-4123-b43c-087a91e0e3b6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:12:05 compute-0 ceph-mon[76304]: pgmap v1374: 305 pgs: 305 active+clean; 400 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 71 KiB/s wr, 190 op/s
Feb 28 10:12:05 compute-0 nova_compute[243452]: 2026-02-28 10:12:05.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:05 compute-0 nova_compute[243452]: 2026-02-28 10:12:05.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.082 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 WARNING nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-unplugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state paused and task_state shelving_image_uploading.
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.083 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 DEBUG oslo_concurrency.lockutils [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 DEBUG nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] No waiting events found dispatching network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.084 243456 WARNING nova.compute.manager [req-e0065e92-1dcd-4590-906b-a8cf2efa241a req-fa221932-2544-40e0-af12-c4380435f668 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received unexpected event network-vif-plugged-2a10bc2f-52a7-47e7-a308-eaa218f335b6 for instance with vm_state paused and task_state shelving_image_uploading.
Feb 28 10:12:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 358 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 42 KiB/s wr, 173 op/s
Feb 28 10:12:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb 28 10:12:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb 28 10:12:06 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.299 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/56badf5b-d05a-4123-b43c-087a91e0e3b6_disk@15e4b23e728846d999ec44c208d8074e to images/a76de62a-d69e-4c03-92ec-aaff7623c365 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.397 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/a76de62a-d69e-4c03-92ec-aaff7623c365 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:12:06 compute-0 nova_compute[243452]: 2026-02-28 10:12:06.599 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(15e4b23e728846d999ec44c208d8074e) on rbd image(56badf5b-d05a-4123-b43c-087a91e0e3b6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:12:07 compute-0 ceph-mon[76304]: pgmap v1375: 305 pgs: 305 active+clean; 358 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 42 KiB/s wr, 173 op/s
Feb 28 10:12:07 compute-0 ceph-mon[76304]: osdmap e227: 3 total, 3 up, 3 in
Feb 28 10:12:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb 28 10:12:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb 28 10:12:07 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb 28 10:12:07 compute-0 nova_compute[243452]: 2026-02-28 10:12:07.300 243456 DEBUG nova.storage.rbd_utils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(a76de62a-d69e-4c03-92ec-aaff7623c365) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:12:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 370 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 510 KiB/s wr, 73 op/s
Feb 28 10:12:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb 28 10:12:08 compute-0 ceph-mon[76304]: osdmap e228: 3 total, 3 up, 3 in
Feb 28 10:12:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb 28 10:12:08 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb 28 10:12:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:08.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:08 compute-0 nova_compute[243452]: 2026-02-28 10:12:08.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:08 compute-0 nova_compute[243452]: 2026-02-28 10:12:08.824 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273513.821662, 59d0cb01-5644-425e-82b1-b79cf4265dfb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:08 compute-0 nova_compute[243452]: 2026-02-28 10:12:08.826 243456 INFO nova.compute.manager [-] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] VM Stopped (Lifecycle Event)
Feb 28 10:12:08 compute-0 nova_compute[243452]: 2026-02-28 10:12:08.860 243456 DEBUG nova.compute.manager [None req-2957d2d9-acf5-4219-89af-2e3ebc2afebc - - - - - -] [instance: 59d0cb01-5644-425e-82b1-b79cf4265dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:09 compute-0 ceph-mon[76304]: pgmap v1378: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 370 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 510 KiB/s wr, 73 op/s
Feb 28 10:12:09 compute-0 ceph-mon[76304]: osdmap e229: 3 total, 3 up, 3 in
Feb 28 10:12:10 compute-0 ovn_controller[146846]: 2026-02-28T10:12:10Z|00527|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 397 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.262 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Snapshot image upload complete
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.264 243456 DEBUG nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:10 compute-0 ceph-mon[76304]: pgmap v1380: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 397 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 54 op/s
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.331 243456 INFO nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Shelve offloading
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.339 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.340 243456 DEBUG nova.compute.manager [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.343 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.344 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.345 243456 DEBUG nova.network.neutron [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:12:10 compute-0 nova_compute[243452]: 2026-02-28 10:12:10.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Feb 28 10:12:12 compute-0 nova_compute[243452]: 2026-02-28 10:12:12.107 243456 DEBUG nova.network.neutron [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:12 compute-0 nova_compute[243452]: 2026-02-28 10:12:12.130 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:12 compute-0 nova_compute[243452]: 2026-02-28 10:12:12.355 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:13 compute-0 ceph-mon[76304]: pgmap v1381: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Feb 28 10:12:13 compute-0 nova_compute[243452]: 2026-02-28 10:12:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:13 compute-0 nova_compute[243452]: 2026-02-28 10:12:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:13 compute-0 nova_compute[243452]: 2026-02-28 10:12:13.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Feb 28 10:12:14 compute-0 nova_compute[243452]: 2026-02-28 10:12:14.973 243456 INFO nova.virt.libvirt.driver [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Instance destroyed successfully.
Feb 28 10:12:14 compute-0 nova_compute[243452]: 2026-02-28 10:12:14.974 243456 DEBUG nova.objects.instance [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 56badf5b-d05a-4123-b43c-087a91e0e3b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:14 compute-0 nova_compute[243452]: 2026-02-28 10:12:14.998 243456 DEBUG nova.virt.libvirt.vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-440852658',display_name='tempest-ServerActionsTestOtherB-server-440852658',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-440852658',id=64,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-sfdugq8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:10.264021',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a76de62a-d69e-4c03-92ec-aaff7623c365'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:04Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=56badf5b-d05a-4123-b43c-087a91e0e3b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:12:14 compute-0 nova_compute[243452]: 2026-02-28 10:12:14.999 243456 DEBUG nova.network.os_vif_util [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.000 243456 DEBUG nova.network.os_vif_util [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.000 243456 DEBUG os_vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.002 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a10bc2f-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.009 243456 INFO os_vif [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:29:e4,bridge_name='br-int',has_traffic_filtering=True,id=2a10bc2f-52a7-47e7-a308-eaa218f335b6,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a10bc2f-52')
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.165 243456 DEBUG nova.compute.manager [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Received event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.166 243456 DEBUG nova.compute.manager [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing instance network info cache due to event network-changed-2a10bc2f-52a7-47e7-a308-eaa218f335b6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.166 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:15 compute-0 ceph-mon[76304]: pgmap v1382: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.167 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.167 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Refreshing network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.317 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deleting instance files /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6_del
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.318 243456 INFO nova.virt.libvirt.driver [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Deletion of /var/lib/nova/instances/56badf5b-d05a-4123-b43c-087a91e0e3b6_del complete
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.425 243456 INFO nova.scheduler.client.report [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 56badf5b-d05a-4123-b43c-087a91e0e3b6
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.477 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.478 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.549 243456 DEBUG oslo_concurrency.processutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.778 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273520.7764163, fc527bc2-3cc2-4ce2-b99e-24d252793d06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.779 243456 INFO nova.compute.manager [-] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] VM Stopped (Lifecycle Event)
Feb 28 10:12:15 compute-0 nova_compute[243452]: 2026-02-28 10:12:15.809 243456 DEBUG nova.compute.manager [None req-04ed2f7a-82f6-4346-8620-80120f7e14d8 - - - - - -] [instance: fc527bc2-3cc2-4ce2-b99e-24d252793d06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940536717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.090 243456 DEBUG oslo_concurrency.processutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.097 243456 DEBUG nova.compute.provider_tree [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 390 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 97 op/s
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.120 243456 DEBUG nova.scheduler.client.report [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.153 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/940536717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.228 243456 DEBUG oslo_concurrency.lockutils [None req-e9330fa3-4f5e-4817-8bcb-d8e69d364cb0 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "56badf5b-d05a-4123-b43c-087a91e0e3b6" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:16 compute-0 nova_compute[243452]: 2026-02-28 10:12:16.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:17 compute-0 ceph-mon[76304]: pgmap v1383: 305 pgs: 305 active+clean; 390 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 97 op/s
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.337 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.338 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737643756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:17 compute-0 nova_compute[243452]: 2026-02-28 10:12:17.884 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.004 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.005 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.009 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.009 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:12:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb 28 10:12:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb 28 10:12:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb 28 10:12:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 378 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.155 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.880274675786495GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.162 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updated VIF entry in instance network info cache for port 2a10bc2f-52a7-47e7-a308-eaa218f335b6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.162 243456 DEBUG nova.network.neutron [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Updating instance_info_cache with network_info: [{"id": "2a10bc2f-52a7-47e7-a308-eaa218f335b6", "address": "fa:16:3e:25:29:e4", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2a10bc2f-52", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.189 243456 DEBUG oslo_concurrency.lockutils [req-e92bbf7a-2041-47c2-a740-119cebbfb2f5 req-119ffa0e-5c13-4952-8169-6f04c9ec48d8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-56badf5b-d05a-4123-b43c-087a91e0e3b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3737643756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:18 compute-0 ceph-mon[76304]: osdmap e230: 3 total, 3 up, 3 in
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30a5d845-ce28-490a-afe8-3b7552f02c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.255 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.327 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947280260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.861 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.867 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.888 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.912 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:12:18 compute-0 nova_compute[243452]: 2026-02-28 10:12:18.913 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:19 compute-0 ceph-mon[76304]: pgmap v1385: 305 pgs: 305 active+clean; 378 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:12:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3947280260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:19 compute-0 nova_compute[243452]: 2026-02-28 10:12:19.364 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273524.3632941, 56badf5b-d05a-4123-b43c-087a91e0e3b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:19 compute-0 nova_compute[243452]: 2026-02-28 10:12:19.365 243456 INFO nova.compute.manager [-] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] VM Stopped (Lifecycle Event)
Feb 28 10:12:19 compute-0 nova_compute[243452]: 2026-02-28 10:12:19.397 243456 DEBUG nova.compute.manager [None req-bf8c5cf7-6c0c-4ec7-ba9e-b27ea4dc06eb - - - - - -] [instance: 56badf5b-d05a-4123-b43c-087a91e0e3b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 430 KiB/s wr, 95 op/s
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.915 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.917 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.935 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:12:20 compute-0 nova_compute[243452]: 2026-02-28 10:12:20.936 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:21 compute-0 nova_compute[243452]: 2026-02-28 10:12:21.006 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:21 compute-0 nova_compute[243452]: 2026-02-28 10:12:21.007 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:21 compute-0 nova_compute[243452]: 2026-02-28 10:12:21.007 243456 INFO nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shelving
Feb 28 10:12:21 compute-0 nova_compute[243452]: 2026-02-28 10:12:21.030 243456 DEBUG nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:12:21 compute-0 ceph-mon[76304]: pgmap v1386: 305 pgs: 305 active+clean; 358 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 430 KiB/s wr, 95 op/s
Feb 28 10:12:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 32 op/s
Feb 28 10:12:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:23 compute-0 ceph-mon[76304]: pgmap v1387: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 32 op/s
Feb 28 10:12:23 compute-0 kernel: tap037eb744-30 (unregistering): left promiscuous mode
Feb 28 10:12:23 compute-0 NetworkManager[49805]: <info>  [1772273543.3450] device (tap037eb744-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:12:23 compute-0 ovn_controller[146846]: 2026-02-28T10:12:23Z|00528|binding|INFO|Releasing lport 037eb744-3024-4a3d-b52c-894abe1cbac8 from this chassis (sb_readonly=0)
Feb 28 10:12:23 compute-0 ovn_controller[146846]: 2026-02-28T10:12:23Z|00529|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 down in Southbound
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.352 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 ovn_controller[146846]: 2026-02-28T10:12:23Z|00530|binding|INFO|Removing iface tap037eb744-30 ovn-installed in OVS
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.362 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.364 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.366 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce8e2cd-d2ea-447a-8e70-01a6169182db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 28 10:12:23 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003a.scope: Consumed 16.952s CPU time.
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.408 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5eec3869-6b3f-46c4-98f6-65f27d136c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 systemd-machined[209480]: Machine qemu-65-instance-0000003a terminated.
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.411 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ab92aa33-4902-4e55-9ee7-eba9276554bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.438 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f22cae30-6488-4063-a26e-6ded47d87b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26f3bc19-b163-4635-a2f5-c8a3ccae4528]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298289, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[149706f5-d50f-4e6a-895b-afa47b5da0bc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298290, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298290, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.467 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.475 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:23.475 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.570 243456 DEBUG nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG oslo_concurrency.lockutils [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.571 243456 DEBUG nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.572 243456 WARNING nova.compute.manager [req-30b5340d-54c6-4a51-8a15-49633e87b396 req-5f81ff99-7623-4216-9580-0862a0e58b47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state shelving.
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:23 compute-0 nova_compute[243452]: 2026-02-28 10:12:23.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.051 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance shutdown successfully after 3 seconds.
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.055 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.056 243456 DEBUG nova.objects.instance [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 KiB/s wr, 32 op/s
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.414 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Beginning cold snapshot process
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.458 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.482 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.483 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.484 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.581 243456 DEBUG nova.virt.libvirt.imagebackend [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:12:24 compute-0 nova_compute[243452]: 2026-02-28 10:12:24.917 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(35a42d0e4ba9432381c70d253b64fb1b) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:12:25 compute-0 nova_compute[243452]: 2026-02-28 10:12:25.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb 28 10:12:25 compute-0 ceph-mon[76304]: pgmap v1388: 305 pgs: 305 active+clean; 358 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 KiB/s wr, 32 op/s
Feb 28 10:12:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb 28 10:12:25 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb 28 10:12:25 compute-0 nova_compute[243452]: 2026-02-28 10:12:25.290 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk@35a42d0e4ba9432381c70d253b64fb1b to images/337de210-963f-41af-92a4-16b5716eae17 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:12:25 compute-0 nova_compute[243452]: 2026-02-28 10:12:25.403 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening images/337de210-963f-41af-92a4-16b5716eae17 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:12:25 compute-0 nova_compute[243452]: 2026-02-28 10:12:25.687 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] removing snapshot(35a42d0e4ba9432381c70d253b64fb1b) on rbd image(30a5d845-ce28-490a-afe8-3b7552f02c63_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:12:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 364 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 628 KiB/s wr, 39 op/s
Feb 28 10:12:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb 28 10:12:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb 28 10:12:26 compute-0 ceph-mon[76304]: osdmap e231: 3 total, 3 up, 3 in
Feb 28 10:12:26 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.275 243456 DEBUG nova.storage.rbd_utils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] creating snapshot(snap) on rbd image(337de210-963f-41af-92a4-16b5716eae17) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.362 243456 DEBUG nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.362 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG oslo_concurrency.lockutils [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 DEBUG nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:26 compute-0 nova_compute[243452]: 2026-02-28 10:12:26.363 243456 WARNING nova.compute.manager [req-54dd7941-eb7e-4804-9c80-284c0f47e494 req-7ebef1ac-97a2-40d2-9434-287513373a66 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state shelving_image_uploading.
Feb 28 10:12:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Feb 28 10:12:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Feb 28 10:12:27 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Feb 28 10:12:27 compute-0 ceph-mon[76304]: pgmap v1390: 305 pgs: 305 active+clean; 364 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 628 KiB/s wr, 39 op/s
Feb 28 10:12:27 compute-0 ceph-mon[76304]: osdmap e232: 3 total, 3 up, 3 in
Feb 28 10:12:27 compute-0 nova_compute[243452]: 2026-02-28 10:12:27.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 377 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 54 op/s
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.240 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.242 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:28 compute-0 ceph-mon[76304]: osdmap e233: 3 total, 3 up, 3 in
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.267 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.368 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.370 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.379 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.379 243456 INFO nova.compute.claims [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.594 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.941 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Snapshot image upload complete
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.942 243456 DEBUG nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:28 compute-0 nova_compute[243452]: 2026-02-28 10:12:28.994 243456 INFO nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Shelve offloading
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.002 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.002 243456 DEBUG nova.compute.manager [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.005 243456 DEBUG nova.network.neutron [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:12:29
Feb 28 10:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Feb 28 10:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:12:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027665590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.203 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.209 243456 DEBUG nova.compute.provider_tree [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.230 243456 DEBUG nova.scheduler.client.report [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.261 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.261 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:12:29 compute-0 ceph-mon[76304]: pgmap v1393: 305 pgs: 305 active+clean; 377 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 54 op/s
Feb 28 10:12:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2027665590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.336 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.336 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.357 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.374 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.470 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.473 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.473 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating image(s)
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.503 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.537 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.572 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.578 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.654 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.655 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.656 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.656 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.684 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.689 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 920aae47-311f-4921-818d-92025cc1abee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.742 243456 DEBUG nova.policy [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4285303dac0b4ee497a908cdca0aecf4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8174cdce90534957854824466483d42b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:12:29 compute-0 nova_compute[243452]: 2026-02-28 10:12:29.945 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 920aae47-311f-4921-818d-92025cc1abee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.019 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] resizing rbd image 920aae47-311f-4921-818d-92025cc1abee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.095 243456 DEBUG nova.objects.instance [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'migration_context' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 396 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.5 MiB/s wr, 113 op/s
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.227 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Ensure instance console log exists: /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.228 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:30 compute-0 nova_compute[243452]: 2026-02-28 10:12:30.229 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:30 compute-0 ceph-mon[76304]: pgmap v1394: 305 pgs: 305 active+clean; 396 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.5 MiB/s wr, 113 op/s
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:12:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:12:31 compute-0 nova_compute[243452]: 2026-02-28 10:12:31.289 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Successfully created port: 0df27d88-0475-4a4d-8a7c-883b977bc7ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:12:31 compute-0 nova_compute[243452]: 2026-02-28 10:12:31.436 243456 DEBUG nova.network.neutron [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:31 compute-0 nova_compute[243452]: 2026-02-28 10:12:31.482 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 453 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.6 MiB/s wr, 154 op/s
Feb 28 10:12:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:33 compute-0 ceph-mon[76304]: pgmap v1395: 305 pgs: 305 active+clean; 453 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.6 MiB/s wr, 154 op/s
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.603 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Successfully updated port: 0df27d88-0475-4a4d-8a7c-883b977bc7ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.626 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.627 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquired lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.627 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.760 243456 DEBUG nova.compute.manager [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-changed-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.760 243456 DEBUG nova.compute.manager [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Refreshing instance network info cache due to event network-changed-0df27d88-0475-4a4d-8a7c-883b977bc7ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:33 compute-0 nova_compute[243452]: 2026-02-28 10:12:33.761 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 477 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.5 MiB/s wr, 133 op/s
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.155 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:12:34 compute-0 podman[298632]: 2026-02-28 10:12:34.1677674 +0000 UTC m=+0.094955801 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 28 10:12:34 compute-0 podman[298631]: 2026-02-28 10:12:34.190986503 +0000 UTC m=+0.123061481 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.203 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.204 243456 DEBUG nova.objects.instance [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.219 243456 DEBUG nova.virt.libvirt.vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.220 243456 DEBUG nova.network.os_vif_util [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.221 243456 DEBUG nova.network.os_vif_util [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.221 243456 DEBUG os_vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.223 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap037eb744-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.230 243456 INFO os_vif [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.454 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting instance files /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.455 243456 INFO nova.virt.libvirt.driver [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deletion of /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del complete
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.551 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.552 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.573 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.604 243456 INFO nova.scheduler.client.report [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 30a5d845-ce28-490a-afe8-3b7552f02c63
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.667 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.668 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.684 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:34 compute-0 nova_compute[243452]: 2026-02-28 10:12:34.782 243456 DEBUG oslo_concurrency.processutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:35 compute-0 ceph-mon[76304]: pgmap v1396: 305 pgs: 305 active+clean; 477 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.5 MiB/s wr, 133 op/s
Feb 28 10:12:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1510402414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.350 243456 DEBUG oslo_concurrency.processutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.358 243456 DEBUG nova.compute.provider_tree [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.375 243456 DEBUG nova.scheduler.client.report [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.392 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.395 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.401 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.402 243456 INFO nova.compute.claims [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.462 243456 DEBUG oslo_concurrency.lockutils [None req-1d08cded-ac57-4c95-87b2-1edfabcfaf4b 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.465 243456 DEBUG nova.network.neutron [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.494 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Releasing lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.495 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance network_info: |[{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.495 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.496 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Refreshing network info cache for port 0df27d88-0475-4a4d-8a7c-883b977bc7ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.499 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start _get_guest_xml network_info=[{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.506 243456 WARNING nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.512 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.512 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.522 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.523 243456 DEBUG nova.virt.libvirt.host [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.523 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.524 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.525 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.526 243456 DEBUG nova.virt.hardware [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.530 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:35 compute-0 nova_compute[243452]: 2026-02-28 10:12:35.632 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:35 compute-0 sudo[298759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:12:35 compute-0 sudo[298759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:35 compute-0 sudo[298759]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:36 compute-0 sudo[298784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:12:36 compute-0 sudo[298784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585548673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.087 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 424 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.4 MiB/s wr, 143 op/s
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.115 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.120 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943179781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1510402414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1585548673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.211 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.217 243456 DEBUG nova.compute.provider_tree [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.270 243456 DEBUG nova.scheduler.client.report [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.305 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.306 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.364 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.365 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.387 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.407 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.525 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.526 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.526 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating image(s)
Feb 28 10:12:36 compute-0 sudo[298784]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.564 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.584 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.618 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.624 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:36 compute-0 sudo[298933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:12:36 compute-0 sudo[298933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:36 compute-0 sudo[298933]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.691 243456 DEBUG nova.policy [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2465d2d41534ef098e24bdd413eefab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b83705c4693849a58c70b1271f24f320', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:12:36 compute-0 sudo[298962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:12:36 compute-0 sudo[298962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.702 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.703 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.726 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010978883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.730 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6cac1749-1126-44c9-b31c-1041025c52cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.781 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.783 243456 DEBUG nova.virt.libvirt.vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:29Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.783 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.784 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.785 243456 DEBUG nova.objects.instance [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'pci_devices' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <uuid>920aae47-311f-4921-818d-92025cc1abee</uuid>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <name>instance-00000041</name>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1001227597</nova:name>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:12:35</nova:creationTime>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:user uuid="4285303dac0b4ee497a908cdca0aecf4">tempest-InstanceActionsNegativeTestJSON-1693183666-project-member</nova:user>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:project uuid="8174cdce90534957854824466483d42b">tempest-InstanceActionsNegativeTestJSON-1693183666</nova:project>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <nova:port uuid="0df27d88-0475-4a4d-8a7c-883b977bc7ad">
Feb 28 10:12:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="serial">920aae47-311f-4921-818d-92025cc1abee</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="uuid">920aae47-311f-4921-818d-92025cc1abee</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/920aae47-311f-4921-818d-92025cc1abee_disk">
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/920aae47-311f-4921-818d-92025cc1abee_disk.config">
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e1:db:88"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <target dev="tap0df27d88-04"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/console.log" append="off"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:12:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:12:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:12:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:12:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:12:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Preparing to wait for external event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.800 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.801 243456 DEBUG nova.virt.libvirt.vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:29Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG nova.network.os_vif_util [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.802 243456 DEBUG os_vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.804 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.806 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0df27d88-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.807 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0df27d88-04, col_values=(('external_ids', {'iface-id': '0df27d88-0475-4a4d-8a7c-883b977bc7ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:db:88', 'vm-uuid': '920aae47-311f-4921-818d-92025cc1abee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.809 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:36 compute-0 NetworkManager[49805]: <info>  [1772273556.8095] manager: (tap0df27d88-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.818 243456 INFO os_vif [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04')
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.881 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.882 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.882 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] No VIF found with MAC fa:16:3e:e1:db:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.883 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Using config drive
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.925 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:36 compute-0 nova_compute[243452]: 2026-02-28 10:12:36.991 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6cac1749-1126-44c9-b31c-1041025c52cf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.022656414 +0000 UTC m=+0.044930904 container create 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:12:37 compute-0 systemd[1]: Started libpod-conmon-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope.
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.076 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] resizing rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:12:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.001021086 +0000 UTC m=+0.023295576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.109377342 +0000 UTC m=+0.131651872 container init 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.116535914 +0000 UTC m=+0.138810424 container start 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:37 compute-0 optimistic_antonelli[299112]: 167 167
Feb 28 10:12:37 compute-0 systemd[1]: libpod-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope: Deactivated successfully.
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.122742338 +0000 UTC m=+0.145016848 container attach 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.12314749 +0000 UTC m=+0.145421980 container died 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a554d08977115c17003803cff50f639922991977f9daaf1d618246a1fe3c72ed-merged.mount: Deactivated successfully.
Feb 28 10:12:37 compute-0 podman[299060]: 2026-02-28 10:12:37.161112507 +0000 UTC m=+0.183386997 container remove 2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.174 243456 DEBUG nova.objects.instance [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'migration_context' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:37 compute-0 systemd[1]: libpod-conmon-2d6d84bdfe6dd1f347f41f3696a23ee70e85c9b79f7b6ee8d1549804b4def003.scope: Deactivated successfully.
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.188 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Ensure instance console log exists: /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.189 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.190 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:37 compute-0 ceph-mon[76304]: pgmap v1397: 305 pgs: 305 active+clean; 424 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.4 MiB/s wr, 143 op/s
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/943179781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:12:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2010978883' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.318124722 +0000 UTC m=+0.049207255 container create 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.333 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Creating config drive at /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.338 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkc73nzct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:37 compute-0 systemd[1]: Started libpod-conmon-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope.
Feb 28 10:12:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.302482102 +0000 UTC m=+0.033564685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.398558374 +0000 UTC m=+0.129640927 container init 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.406080595 +0000 UTC m=+0.137163128 container start 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.409125111 +0000 UTC m=+0.140207644 container attach 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.489 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkc73nzct" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.519 243456 DEBUG nova.storage.rbd_utils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] rbd image 920aae47-311f-4921-818d-92025cc1abee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.523 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config 920aae47-311f-4921-818d-92025cc1abee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.663 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updated VIF entry in instance network info cache for port 0df27d88-0475-4a4d-8a7c-883b977bc7ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.664 243456 DEBUG nova.network.neutron [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [{"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.669 243456 DEBUG oslo_concurrency.processutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config 920aae47-311f-4921-818d-92025cc1abee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.669 243456 INFO nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deleting local config drive /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee/disk.config because it was imported into RBD.
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.680 243456 DEBUG oslo_concurrency.lockutils [req-705e2661-2110-4450-aef1-9a8b8a26012d req-77a7f18d-50d9-41ae-81a4-65510f3a7882 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-920aae47-311f-4921-818d-92025cc1abee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:37 compute-0 kernel: tap0df27d88-04: entered promiscuous mode
Feb 28 10:12:37 compute-0 NetworkManager[49805]: <info>  [1772273557.7186] manager: (tap0df27d88-04): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Feb 28 10:12:37 compute-0 ovn_controller[146846]: 2026-02-28T10:12:37Z|00531|binding|INFO|Claiming lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad for this chassis.
Feb 28 10:12:37 compute-0 ovn_controller[146846]: 2026-02-28T10:12:37Z|00532|binding|INFO|0df27d88-0475-4a4d-8a7c-883b977bc7ad: Claiming fa:16:3e:e1:db:88 10.100.0.5
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:db:88 10.100.0.5'], port_security=['fa:16:3e:e1:db:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '920aae47-311f-4921-818d-92025cc1abee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed8f95d8-16ff-486a-9986-045e9e754d77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8174cdce90534957854824466483d42b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd62d1bc-27fc-4a40-8f5a-4d9d80596045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad705234-e36d-496f-9ca6-546e227c6770, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0df27d88-0475-4a4d-8a7c-883b977bc7ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.729 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0df27d88-0475-4a4d-8a7c-883b977bc7ad in datapath ed8f95d8-16ff-486a-9986-045e9e754d77 bound to our chassis
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.731 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed8f95d8-16ff-486a-9986-045e9e754d77
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:37 compute-0 ovn_controller[146846]: 2026-02-28T10:12:37Z|00533|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad ovn-installed in OVS
Feb 28 10:12:37 compute-0 ovn_controller[146846]: 2026-02-28T10:12:37Z|00534|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad up in Southbound
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:37 compute-0 nova_compute[243452]: 2026-02-28 10:12:37.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b86bdbb4-7a64-4841-8db6-886597443e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.743 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped8f95d8-11 in ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.746 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped8f95d8-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.746 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45fc2b0d-6cd3-446a-aedb-841ac4480018]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11bc89ba-0f2e-416b-8082-1109a290a8ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 systemd-udevd[299254]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:12:37 compute-0 systemd-machined[209480]: New machine qemu-72-instance-00000041.
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.764 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7e0f9d-e147-4c9d-b7cd-a146ddad833e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 NetworkManager[49805]: <info>  [1772273557.7701] device (tap0df27d88-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:12:37 compute-0 NetworkManager[49805]: <info>  [1772273557.7707] device (tap0df27d88-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:12:37 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000041.
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b678ce01-b9f3-4b73-9ba5-b4b28e0fc698]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.812 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8da299-5793-49c4-9e15-2307b8cd03b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 NetworkManager[49805]: <info>  [1772273557.8200] manager: (taped8f95d8-10): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.819 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c804e7-e884-4a37-a0db-a3389b9da8f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46d2d585-bb28-4622-ae5d-30d08e1d49b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.852 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[be138a0c-fdad-4ea9-8398-ffc3e6541a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 modest_margulis[299187]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:12:37 compute-0 modest_margulis[299187]: --> All data devices are unavailable
Feb 28 10:12:37 compute-0 NetworkManager[49805]: <info>  [1772273557.8728] device (taped8f95d8-10): carrier: link connected
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.880 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[035d239c-42bc-47ca-982d-43ddffd2e452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.896 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23f710bd-767e-4921-9e0d-28b5984b36ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped8f95d8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:e7:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506515, 'reachable_time': 39739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299292, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 systemd[1]: libpod-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope: Deactivated successfully.
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.911 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5afe2c56-6d3b-4a5e-b142-366971299df8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:e7fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506515, 'tstamp': 506515}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299293, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 conmon[299187]: conmon 21088ba95b54af188e85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope/container/memory.events
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.914688016 +0000 UTC m=+0.645770549 container died 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.933 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0c0c8a-7f7a-4d02-a17b-77da424d5e39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped8f95d8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:e7:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506515, 'reachable_time': 39739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299294, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd51f4861a7cdf682905343d9964867db2fc2f4bcd781f97e55c0d6e975336c-merged.mount: Deactivated successfully.
Feb 28 10:12:37 compute-0 podman[299171]: 2026-02-28 10:12:37.960430273 +0000 UTC m=+0.691512806 container remove 21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 28 10:12:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:37.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0224a192-5d2b-483d-a36d-c3fa53f2798e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:37 compute-0 systemd[1]: libpod-conmon-21088ba95b54af188e8572bc3902f00f0cf0c959bbe274bc41312d543fc019f8.scope: Deactivated successfully.
Feb 28 10:12:38 compute-0 sudo[298962]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.015 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51dd4a7d-4c35-43c9-8fd1-529909d9f534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped8f95d8-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.026 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.028 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped8f95d8-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:38 compute-0 kernel: taped8f95d8-10: entered promiscuous mode
Feb 28 10:12:38 compute-0 NetworkManager[49805]: <info>  [1772273558.0328] manager: (taped8f95d8-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.038 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped8f95d8-10, col_values=(('external_ids', {'iface-id': '886def4f-e7cf-45ce-8b39-a6416f0f0a59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:38 compute-0 ovn_controller[146846]: 2026-02-28T10:12:38Z|00535|binding|INFO|Releasing lport 886def4f-e7cf-45ce-8b39-a6416f0f0a59 from this chassis (sb_readonly=0)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.043 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[476ecb6a-0872-4e3a-9fbf-749d6ff3db7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.047 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ed8f95d8-16ff-486a-9986-045e9e754d77
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ed8f95d8-16ff-486a-9986-045e9e754d77.pid.haproxy
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ed8f95d8-16ff-486a-9986-045e9e754d77
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:12:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:38.048 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'env', 'PROCESS_TAG=haproxy-ed8f95d8-16ff-486a-9986-045e9e754d77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed8f95d8-16ff-486a-9986-045e9e754d77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 sudo[299312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:12:38 compute-0 sudo[299312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:38 compute-0 sudo[299312]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Feb 28 10:12:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Feb 28 10:12:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Feb 28 10:12:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 425 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.4 MiB/s wr, 167 op/s
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.120 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Successfully created port: ebb57b0b-4fa0-4ee0-8791-87271512797e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:12:38 compute-0 sudo[299340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:12:38 compute-0 sudo[299340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.252 243456 DEBUG nova.compute.manager [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.252 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.253 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.253 243456 DEBUG oslo_concurrency.lockutils [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.254 243456 DEBUG nova.compute.manager [req-99a4c5d7-88a3-4dec-a93a-2f3f7d1d5b89 req-929630eb-3181-4219-b9ed-d2501497d1b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Processing event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.349 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.3481872, 920aae47-311f-4921-818d-92025cc1abee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.352 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Started (Lifecycle Event)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.355 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.359 243456 INFO nova.virt.libvirt.driver [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance spawned successfully.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.360 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.377 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.388 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.389 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.389 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.390 243456 DEBUG nova.virt.libvirt.driver [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.409231512 +0000 UTC m=+0.037992639 container create a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:12:38 compute-0 podman[299441]: 2026-02-28 10:12:38.422648429 +0000 UTC m=+0.051492949 container create a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.427 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.428 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.349836, 920aae47-311f-4921-818d-92025cc1abee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.428 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Paused (Lifecycle Event)
Feb 28 10:12:38 compute-0 systemd[1]: Started libpod-conmon-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope.
Feb 28 10:12:38 compute-0 systemd[1]: Started libpod-conmon-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.453 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.457 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273558.3543012, 920aae47-311f-4921-818d-92025cc1abee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Resumed (Lifecycle Event)
Feb 28 10:12:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.465 243456 INFO nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 8.99 seconds to spawn the instance on the hypervisor.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.466 243456 DEBUG nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e66799afe237481138dae22134609236b539bbb4a9cb5a77f81d9bf57753d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.477 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.477851922 +0000 UTC m=+0.106613069 container init a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.481 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:38 compute-0 podman[299441]: 2026-02-28 10:12:38.486770112 +0000 UTC m=+0.115614642 container init a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.487672928 +0000 UTC m=+0.116434045 container start a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.491190917 +0000 UTC m=+0.119952074 container attach a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:12:38 compute-0 podman[299441]: 2026-02-28 10:12:38.491787003 +0000 UTC m=+0.120631523 container start a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.395967839 +0000 UTC m=+0.024728976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:38 compute-0 podman[299441]: 2026-02-28 10:12:38.396738821 +0000 UTC m=+0.025583361 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:12:38 compute-0 dreamy_clarke[299471]: 167 167
Feb 28 10:12:38 compute-0 systemd[1]: libpod-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope: Deactivated successfully.
Feb 28 10:12:38 compute-0 conmon[299471]: conmon a1002d5ab77c125c8839 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope/container/memory.events
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.496379743 +0000 UTC m=+0.125140880 container died a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.510 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:38 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : New worker (299489) forked
Feb 28 10:12:38 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : Loading success.
Feb 28 10:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f34c024675d54f3eaafd79e915bd3f486ded296d0417af11a6da35712ae7b12-merged.mount: Deactivated successfully.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.533 243456 INFO nova.compute.manager [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 10.20 seconds to build instance.
Feb 28 10:12:38 compute-0 podman[299442]: 2026-02-28 10:12:38.53577403 +0000 UTC m=+0.164535147 container remove a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:12:38 compute-0 systemd[1]: libpod-conmon-a1002d5ab77c125c8839ace6f447e0fc930a27be96b999b0dca3843b755fa320.scope: Deactivated successfully.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.551 243456 DEBUG oslo_concurrency.lockutils [None req-4781ab7f-d6a6-4434-9888-cdcace676ba2 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.617 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273543.6167212, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.618 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Stopped (Lifecycle Event)
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.643 243456 DEBUG nova.compute.manager [None req-81a2826a-a947-454c-8104-f67517d333b9 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:38 compute-0 podman[299509]: 2026-02-28 10:12:38.676824015 +0000 UTC m=+0.036666152 container create 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:12:38 compute-0 systemd[1]: Started libpod-conmon-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope.
Feb 28 10:12:38 compute-0 nova_compute[243452]: 2026-02-28 10:12:38.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:38 compute-0 podman[299509]: 2026-02-28 10:12:38.661194856 +0000 UTC m=+0.021037003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:38 compute-0 podman[299509]: 2026-02-28 10:12:38.767400432 +0000 UTC m=+0.127242589 container init 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:12:38 compute-0 podman[299509]: 2026-02-28 10:12:38.77515168 +0000 UTC m=+0.134993817 container start 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:12:38 compute-0 podman[299509]: 2026-02-28 10:12:38.778801153 +0000 UTC m=+0.138643300 container attach 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]: {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     "0": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "devices": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "/dev/loop3"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             ],
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_name": "ceph_lv0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_size": "21470642176",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "name": "ceph_lv0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "tags": {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_name": "ceph",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.crush_device_class": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.encrypted": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.objectstore": "bluestore",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_id": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.vdo": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.with_tpm": "0"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             },
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "vg_name": "ceph_vg0"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         }
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     ],
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     "1": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "devices": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "/dev/loop4"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             ],
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_name": "ceph_lv1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_size": "21470642176",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "name": "ceph_lv1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "tags": {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_name": "ceph",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.crush_device_class": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.encrypted": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.objectstore": "bluestore",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_id": "1",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.vdo": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.with_tpm": "0"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             },
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "vg_name": "ceph_vg1"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         }
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     ],
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     "2": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "devices": [
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "/dev/loop5"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             ],
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_name": "ceph_lv2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_size": "21470642176",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "name": "ceph_lv2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "tags": {
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.cluster_name": "ceph",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.crush_device_class": "",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.encrypted": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.objectstore": "bluestore",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osd_id": "2",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.vdo": "0",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:                 "ceph.with_tpm": "0"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             },
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "type": "block",
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:             "vg_name": "ceph_vg2"
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:         }
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]:     ]
Feb 28 10:12:39 compute-0 wizardly_banzai[299526]: }
Feb 28 10:12:39 compute-0 ceph-mon[76304]: osdmap e234: 3 total, 3 up, 3 in
Feb 28 10:12:39 compute-0 ceph-mon[76304]: pgmap v1399: 305 pgs: 305 active+clean; 425 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.4 MiB/s wr, 167 op/s
Feb 28 10:12:39 compute-0 systemd[1]: libpod-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope: Deactivated successfully.
Feb 28 10:12:39 compute-0 podman[299509]: 2026-02-28 10:12:39.124207235 +0000 UTC m=+0.484049372 container died 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.144 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Successfully updated port: ebb57b0b-4fa0-4ee0-8791-87271512797e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:12:39 compute-0 podman[299509]: 2026-02-28 10:12:39.165037703 +0000 UTC m=+0.524879840 container remove 5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.166 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:12:39 compute-0 systemd[1]: libpod-conmon-5ac5cdb7a3846b0b214698ce6dcf910fd38a9c995f341755550bd748297ceb44.scope: Deactivated successfully.
Feb 28 10:12:39 compute-0 sudo[299340]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG nova.compute.manager [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG nova.compute.manager [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing instance network info cache due to event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.272 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:39 compute-0 sudo[299547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:12:39 compute-0 sudo[299547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:39 compute-0 sudo[299547]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:39 compute-0 sudo[299572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:12:39 compute-0 sudo[299572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-44e89c183fd9d1701504b8f251e140b88fbdbce10295fc3ce298d37153f90f0d-merged.mount: Deactivated successfully.
Feb 28 10:12:39 compute-0 nova_compute[243452]: 2026-02-28 10:12:39.533 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.648548739 +0000 UTC m=+0.051177110 container create 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:12:39 compute-0 systemd[1]: Started libpod-conmon-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope.
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.624443281 +0000 UTC m=+0.027071672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.740809223 +0000 UTC m=+0.143437574 container init 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.747089529 +0000 UTC m=+0.149717890 container start 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.750048423 +0000 UTC m=+0.152676804 container attach 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:12:39 compute-0 happy_clarke[299625]: 167 167
Feb 28 10:12:39 compute-0 systemd[1]: libpod-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope: Deactivated successfully.
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.751628947 +0000 UTC m=+0.154257288 container died 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5de62a73b0fbfe1ae9c34f2b427a509e0dbcc063d6ab332c07b54b5462ba700f-merged.mount: Deactivated successfully.
Feb 28 10:12:39 compute-0 podman[299609]: 2026-02-28 10:12:39.791391185 +0000 UTC m=+0.194019546 container remove 6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:12:39 compute-0 systemd[1]: libpod-conmon-6a9a265d0bf38bf5576e05703b6b064fad61eb4b4f73dfd8fb2a013318dca6f9.scope: Deactivated successfully.
Feb 28 10:12:39 compute-0 podman[299649]: 2026-02-28 10:12:39.995035661 +0000 UTC m=+0.064864805 container create dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:12:40 compute-0 systemd[1]: Started libpod-conmon-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope.
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:39.970268055 +0000 UTC m=+0.040097239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:12:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:40.108639506 +0000 UTC m=+0.178468670 container init dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 444 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 6.6 MiB/s wr, 150 op/s
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:40.115829078 +0000 UTC m=+0.185658202 container start dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:40.120105978 +0000 UTC m=+0.189935112 container attach dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.425 243456 DEBUG nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG oslo_concurrency.lockutils [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.426 243456 DEBUG nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.427 243456 WARNING nova.compute.manager [req-f8bc8bd6-f1af-4924-8f7f-39d722931679 req-b6ffb4c2-dfe5-4a88-ad6e-0de87099142c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received unexpected event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with vm_state active and task_state None.
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.525 243456 DEBUG nova.network.neutron [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance network_info: |[{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.563 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.566 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start _get_guest_xml network_info=[{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.571 243456 WARNING nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.580 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.581 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.585 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.libvirt.host [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.586 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.587 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.588 243456 DEBUG nova.virt.hardware [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.590 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.695 243456 INFO nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Unshelving
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:40 compute-0 lvm[299765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:12:40 compute-0 lvm[299765]: VG ceph_vg1 finished
Feb 28 10:12:40 compute-0 lvm[299764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:12:40 compute-0 lvm[299764]: VG ceph_vg0 finished
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:12:40 compute-0 lvm[299767]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:12:40 compute-0 lvm[299767]: VG ceph_vg2 finished
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014177182537818348 of space, bias 1.0, pg target 0.42531547613455045 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003597432983149043 of space, bias 1.0, pg target 1.0792298949447128 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.021961581530791e-07 of space, bias 4.0, pg target 0.0009594266051510827 quantized to 16 (current 16)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:12:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.817 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.817 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:40 compute-0 lvm[299768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:12:40 compute-0 lvm[299768]: VG ceph_vg0 finished
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.827 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_requests' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.846 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.859 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.859 243456 INFO nova.compute.claims [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:12:40 compute-0 stoic_kare[299666]: {}
Feb 28 10:12:40 compute-0 systemd[1]: libpod-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Deactivated successfully.
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:40.911906372 +0000 UTC m=+0.981735496 container died dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:12:40 compute-0 systemd[1]: libpod-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Consumed 1.115s CPU time.
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.921 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.921 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.922 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.923 243456 INFO nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Terminating instance
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.924 243456 DEBUG nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-009454b242c02fa00933761d3e546a8d96c78fd82c751fe1c3f28546cf1b8139-merged.mount: Deactivated successfully.
Feb 28 10:12:40 compute-0 podman[299649]: 2026-02-28 10:12:40.949762567 +0000 UTC m=+1.019591681 container remove dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:12:40 compute-0 systemd[1]: libpod-conmon-dcc9d489819ec65401b1534d01243726ae559a498aafe162ff9b0342b233fa7e.scope: Deactivated successfully.
Feb 28 10:12:40 compute-0 kernel: tap0df27d88-04 (unregistering): left promiscuous mode
Feb 28 10:12:40 compute-0 ovn_controller[146846]: 2026-02-28T10:12:40Z|00536|binding|INFO|Releasing lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad from this chassis (sb_readonly=0)
Feb 28 10:12:40 compute-0 ovn_controller[146846]: 2026-02-28T10:12:40Z|00537|binding|INFO|Setting lport 0df27d88-0475-4a4d-8a7c-883b977bc7ad down in Southbound
Feb 28 10:12:40 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:40 compute-0 ovn_controller[146846]: 2026-02-28T10:12:40Z|00538|binding|INFO|Removing iface tap0df27d88-04 ovn-installed in OVS
Feb 28 10:12:40 compute-0 NetworkManager[49805]: <info>  [1772273560.9960] device (tap0df27d88-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.002 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:db:88 10.100.0.5'], port_security=['fa:16:3e:e1:db:88 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '920aae47-311f-4921-818d-92025cc1abee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed8f95d8-16ff-486a-9986-045e9e754d77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8174cdce90534957854824466483d42b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd62d1bc-27fc-4a40-8f5a-4d9d80596045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad705234-e36d-496f-9ca6-546e227c6770, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0df27d88-0475-4a4d-8a7c-883b977bc7ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.003 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0df27d88-0475-4a4d-8a7c-883b977bc7ad in datapath ed8f95d8-16ff-486a-9986-045e9e754d77 unbound from our chassis
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.005 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed8f95d8-16ff-486a-9986-045e9e754d77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:40.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 sudo[299572]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.008 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b830f980-1553-4633-83dc-0d2a88aaa142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.009 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 namespace which is not needed anymore
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.010 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:12:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:12:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:41 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Deactivated successfully.
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.048 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:41 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Consumed 3.196s CPU time.
Feb 28 10:12:41 compute-0 systemd-machined[209480]: Machine qemu-72-instance-00000041 terminated.
Feb 28 10:12:41 compute-0 sudo[299791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:12:41 compute-0 sudo[299791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:12:41 compute-0 sudo[299791]: pam_unix(sudo:session): session closed for user root
Feb 28 10:12:41 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : haproxy version is 2.8.14-c23fe91
Feb 28 10:12:41 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [NOTICE]   (299480) : path to executable is /usr/sbin/haproxy
Feb 28 10:12:41 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [WARNING]  (299480) : Exiting Master process...
Feb 28 10:12:41 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [ALERT]    (299480) : Current worker (299489) exited with code 143 (Terminated)
Feb 28 10:12:41 compute-0 neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77[299473]: [WARNING]  (299480) : All workers exited. Exiting... (0)
Feb 28 10:12:41 compute-0 systemd[1]: libpod-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope: Deactivated successfully.
Feb 28 10:12:41 compute-0 podman[299826]: 2026-02-28 10:12:41.147552118 +0000 UTC m=+0.046986322 container died a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.166 243456 INFO nova.virt.libvirt.driver [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Instance destroyed successfully.
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.167 243456 DEBUG nova.objects.instance [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lazy-loading 'resources' on Instance uuid 920aae47-311f-4921-818d-92025cc1abee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867140259' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:41 compute-0 ceph-mon[76304]: pgmap v1400: 305 pgs: 305 active+clean; 444 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 452 KiB/s rd, 6.6 MiB/s wr, 150 op/s
Feb 28 10:12:41 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:41 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3e66799afe237481138dae22134609236b539bbb4a9cb5a77f81d9bf57753d4-merged.mount: Deactivated successfully.
Feb 28 10:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92-userdata-shm.mount: Deactivated successfully.
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.189 243456 DEBUG nova.virt.libvirt.vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1001227597',display_name='tempest-InstanceActionsNegativeTestJSON-server-1001227597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1001227597',id=65,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8174cdce90534957854824466483d42b',ramdisk_id='',reservation_id='r-xoam7m7h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1693183666',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1693183666-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:38Z,user_data=None,user_id='4285303dac0b4ee497a908cdca0aecf4',uuid=920aae47-311f-4921-818d-92025cc1abee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.190 243456 DEBUG nova.network.os_vif_util [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converting VIF {"id": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "address": "fa:16:3e:e1:db:88", "network": {"id": "ed8f95d8-16ff-486a-9986-045e9e754d77", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-960697188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8174cdce90534957854824466483d42b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0df27d88-04", "ovs_interfaceid": "0df27d88-0475-4a4d-8a7c-883b977bc7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:41 compute-0 podman[299826]: 2026-02-28 10:12:41.19136678 +0000 UTC m=+0.090800974 container cleanup a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.191 243456 DEBUG nova.network.os_vif_util [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.191 243456 DEBUG os_vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.194 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0df27d88-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 systemd[1]: libpod-conmon-a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92.scope: Deactivated successfully.
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.204 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.205 243456 INFO os_vif [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:db:88,bridge_name='br-int',has_traffic_filtering=True,id=0df27d88-0475-4a4d-8a7c-883b977bc7ad,network=Network(ed8f95d8-16ff-486a-9986-045e9e754d77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0df27d88-04')
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.248 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.253 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:41 compute-0 podman[299875]: 2026-02-28 10:12:41.256964935 +0000 UTC m=+0.049058401 container remove a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.261 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88deb2d9-5bc6-4ff6-af0c-c045daeeeca7]: (4, ('Sat Feb 28 10:12:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 (a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92)\na29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92\nSat Feb 28 10:12:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 (a29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92)\na29e01f80f4b4c180b881befb18ece4fef12c766f918078944128a6b3fb35c92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[448105af-bc85-46ae-95f0-bf8cccf300ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped8f95d8-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:41 compute-0 kernel: taped8f95d8-10: left promiscuous mode
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.288 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbd2e8d-ab21-457e-b05f-a1fcc837c5ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f88941de-baf1-4c7f-8d90-0dc3334e3792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.313 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1163310d-13b6-4114-94be-a3eda0bd74d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81eacc21-512f-4e97-83dc-2f398bcc2d21]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506509, 'reachable_time': 16679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299939, 'error': None, 'target': 'ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.328 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed8f95d8-16ff-486a-9986-045e9e754d77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:12:41 compute-0 systemd[1]: run-netns-ovnmeta\x2ded8f95d8\x2d16ff\x2d486a\x2d9986\x2d045e9e754d77.mount: Deactivated successfully.
Feb 28 10:12:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:41.328 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9679f7ed-3a34-47f0-8cb7-134bf61aed2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.534 243456 INFO nova.virt.libvirt.driver [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deleting instance files /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee_del
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.536 243456 INFO nova.virt.libvirt.driver [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deletion of /var/lib/nova/instances/920aae47-311f-4921-818d-92025cc1abee_del complete
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.590 243456 INFO nova.compute.manager [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.591 243456 DEBUG oslo.service.loopingcall [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.592 243456 DEBUG nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.592 243456 DEBUG nova.network.neutron [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:12:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040599172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.653 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.659 243456 DEBUG nova.compute.provider_tree [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.677 243456 DEBUG nova.scheduler.client.report [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.718 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028071564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.815 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.817 243456 DEBUG nova.virt.libvirt.vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.818 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.819 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.820 243456 DEBUG nova.objects.instance [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.839 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <uuid>6cac1749-1126-44c9-b31c-1041025c52cf</uuid>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <name>instance-00000042</name>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestManualDisk-server-1212768098</nova:name>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:12:40</nova:creationTime>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:user uuid="c2465d2d41534ef098e24bdd413eefab">tempest-ServersTestManualDisk-1613202530-project-member</nova:user>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:project uuid="b83705c4693849a58c70b1271f24f320">tempest-ServersTestManualDisk-1613202530</nova:project>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <nova:port uuid="ebb57b0b-4fa0-4ee0-8791-87271512797e">
Feb 28 10:12:41 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <system>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="serial">6cac1749-1126-44c9-b31c-1041025c52cf</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="uuid">6cac1749-1126-44c9-b31c-1041025c52cf</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </system>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <os>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </os>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <features>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </features>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6cac1749-1126-44c9-b31c-1041025c52cf_disk">
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6cac1749-1126-44c9-b31c-1041025c52cf_disk.config">
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e6:c6:f2"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <target dev="tapebb57b0b-4f"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/console.log" append="off"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <video>
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </video>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:12:41 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:12:41 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:12:41 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:12:41 compute-0 nova_compute[243452]: </domain>
Feb 28 10:12:41 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.840 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Preparing to wait for external event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.841 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.841 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.842 243456 DEBUG nova.virt.libvirt.vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.842 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.843 243456 DEBUG nova.network.os_vif_util [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.844 243456 DEBUG os_vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.846 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.849 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebb57b0b-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.850 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebb57b0b-4f, col_values=(('external_ids', {'iface-id': 'ebb57b0b-4fa0-4ee0-8791-87271512797e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:c6:f2', 'vm-uuid': '6cac1749-1126-44c9-b31c-1041025c52cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 NetworkManager[49805]: <info>  [1772273561.8532] manager: (tapebb57b0b-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.859 243456 INFO os_vif [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f')
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.928 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.929 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.929 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] No VIF found with MAC fa:16:3e:e6:c6:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.930 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Using config drive
Feb 28 10:12:41 compute-0 nova_compute[243452]: 2026-02-28 10:12:41.959 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.088 243456 INFO nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating port 037eb744-3024-4a3d-b52c-894abe1cbac8 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 28 10:12:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 451 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.140 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updated VIF entry in instance network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.141 243456 DEBUG nova.network.neutron [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.167 243456 DEBUG oslo_concurrency.lockutils [req-d7b97c98-6569-4353-84cb-74c038dae680 req-c294475b-515c-406d-9dbd-c59e5a68d9c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3867140259' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2040599172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1028071564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.514 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.515 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.515 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.516 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.516 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-unplugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.517 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "920aae47-311f-4921-818d-92025cc1abee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.518 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.518 243456 DEBUG oslo_concurrency.lockutils [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.519 243456 DEBUG nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] No waiting events found dispatching network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.519 243456 WARNING nova.compute.manager [req-e0f05c44-ae01-4923-b68b-d52c1697a3db req-66c5f677-e641-49b5-809e-a511570c0539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received unexpected event network-vif-plugged-0df27d88-0475-4a4d-8a7c-883b977bc7ad for instance with vm_state active and task_state deleting.
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.695 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Creating config drive at /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.703 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb7l5ajgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.757 243456 DEBUG nova.network.neutron [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.779 243456 INFO nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] Took 1.19 seconds to deallocate network for instance.
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.845 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.845 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.846 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb7l5ajgb" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.875 243456 DEBUG nova.storage.rbd_utils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] rbd image 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.881 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:42 compute-0 nova_compute[243452]: 2026-02-28 10:12:42.926 243456 DEBUG nova.compute.manager [req-d9e5444e-f861-4fdd-b7f8-d7bffc791926 req-6b575949-9093-45af-a628-22eb194ecc25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 920aae47-311f-4921-818d-92025cc1abee] Received event network-vif-deleted-0df27d88-0475-4a4d-8a7c-883b977bc7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.053 243456 DEBUG oslo_concurrency.processutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.099 243456 DEBUG oslo_concurrency.processutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config 6cac1749-1126-44c9-b31c-1041025c52cf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.101 243456 INFO nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deleting local config drive /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf/disk.config because it was imported into RBD.
Feb 28 10:12:43 compute-0 kernel: tapebb57b0b-4f: entered promiscuous mode
Feb 28 10:12:43 compute-0 systemd-udevd[299762]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.1557] manager: (tapebb57b0b-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Feb 28 10:12:43 compute-0 ovn_controller[146846]: 2026-02-28T10:12:43Z|00539|binding|INFO|Claiming lport ebb57b0b-4fa0-4ee0-8791-87271512797e for this chassis.
Feb 28 10:12:43 compute-0 ovn_controller[146846]: 2026-02-28T10:12:43Z|00540|binding|INFO|ebb57b0b-4fa0-4ee0-8791-87271512797e: Claiming fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.169 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c6:f2 10.100.0.13'], port_security=['fa:16:3e:e6:c6:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6cac1749-1126-44c9-b31c-1041025c52cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b83705c4693849a58c70b1271f24f320', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49856350-974b-427f-aa68-4afea31e430e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b184abca-cc34-4f4e-8734-d06f0ef64e8b, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ebb57b0b-4fa0-4ee0-8791-87271512797e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.1726] device (tapebb57b0b-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.1733] device (tapebb57b0b-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.172 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ebb57b0b-4fa0-4ee0-8791-87271512797e in datapath 8f735fcd-0d4b-4c56-85c4-3ead65135429 bound to our chassis
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.173 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f735fcd-0d4b-4c56-85c4-3ead65135429
Feb 28 10:12:43 compute-0 ovn_controller[146846]: 2026-02-28T10:12:43Z|00541|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e ovn-installed in OVS
Feb 28 10:12:43 compute-0 ovn_controller[146846]: 2026-02-28T10:12:43Z|00542|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e up in Southbound
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efdb7e3e-7969-4ff0-ab45-26151a11be2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.192 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f735fcd-01 in ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:12:43 compute-0 ceph-mon[76304]: pgmap v1401: 305 pgs: 305 active+clean; 451 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.194 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f735fcd-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba3bfa6-f74a-4ed7-840d-3b0a2953afb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.199 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cad675dd-66c1-4812-92ac-6653aa044db0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.212 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a3fd6e-11a5-422b-b02f-ec5264630c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 systemd-machined[209480]: New machine qemu-73-instance-00000042.
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.225 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa42337-e8c1-4355-9511-99e740417ea5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000042.
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.252 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[345ac387-fae0-48fb-803c-de819dfc8fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.2593] manager: (tap8f735fcd-00): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5856971-a30f-4bc5-9926-ea29c8739032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.286 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f7853a5a-135c-4121-80db-814f9011b0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.290 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[12b0a5fa-ad77-4472-9290-0debba5e0301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.3126] device (tap8f735fcd-00): carrier: link connected
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.319 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcb4817-4ed2-45ae-b35f-eb949a42ea85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f462e09-b7c7-4da9-a93d-0d4464cfcb09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f735fcd-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:88:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507059, 'reachable_time': 43810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300087, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a89398-7a18-44b3-ad32-b696103b8fd9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:88bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507059, 'tstamp': 507059}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300088, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de13b997-fa27-4c3a-bf51-070fc5dc0a03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f735fcd-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:88:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507059, 'reachable_time': 43810, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300089, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.407 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c96be48-ca8f-46a7-8074-c4bf301c9d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.427 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.427 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.428 243456 DEBUG nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[927e4250-f563-499e-8364-6a5ff9da65b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.473 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f735fcd-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.473 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.474 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f735fcd-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:43 compute-0 NetworkManager[49805]: <info>  [1772273563.4764] manager: (tap8f735fcd-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Feb 28 10:12:43 compute-0 kernel: tap8f735fcd-00: entered promiscuous mode
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.477 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f735fcd-00, col_values=(('external_ids', {'iface-id': '4d3929e7-28fe-4c5f-a36e-3a516397a383'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:43 compute-0 ovn_controller[146846]: 2026-02-28T10:12:43Z|00543|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.484 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.484 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[600cad15-8b47-4592-940f-a7c371d8a33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.485 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8f735fcd-0d4b-4c56-85c4-3ead65135429
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8f735fcd-0d4b-4c56-85c4-3ead65135429.pid.haproxy
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8f735fcd-0d4b-4c56-85c4-3ead65135429
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:12:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:43.486 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'env', 'PROCESS_TAG=haproxy-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f735fcd-0d4b-4c56-85c4-3ead65135429.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:12:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:12:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928744060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.636 243456 DEBUG oslo_concurrency.processutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.641 243456 DEBUG nova.compute.provider_tree [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.660 243456 DEBUG nova.scheduler.client.report [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.686 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.718 243456 INFO nova.scheduler.client.report [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Deleted allocations for instance 920aae47-311f-4921-818d-92025cc1abee
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.767 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273563.7669137, 6cac1749-1126-44c9-b31c-1041025c52cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.768 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Started (Lifecycle Event)
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.801 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.810 243456 DEBUG oslo_concurrency.lockutils [None req-7f0bd448-e64f-4fb6-b47d-d7582376aef8 4285303dac0b4ee497a908cdca0aecf4 8174cdce90534957854824466483d42b - - default default] Lock "920aae47-311f-4921-818d-92025cc1abee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.817 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273563.767477, 6cac1749-1126-44c9-b31c-1041025c52cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.818 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Paused (Lifecycle Event)
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.851 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.856 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:43 compute-0 nova_compute[243452]: 2026-02-28 10:12:43.875 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:43 compute-0 podman[300162]: 2026-02-28 10:12:43.88927297 +0000 UTC m=+0.090214488 container create ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:43 compute-0 podman[300162]: 2026-02-28 10:12:43.821297598 +0000 UTC m=+0.022239136 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:12:43 compute-0 systemd[1]: Started libpod-conmon-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope.
Feb 28 10:12:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9745197d659a7b64f0784193f16e075d7a26cccfd957b8957e2a616a406e62/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:12:43 compute-0 podman[300162]: 2026-02-28 10:12:43.991147264 +0000 UTC m=+0.192088802 container init ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:12:43 compute-0 podman[300162]: 2026-02-28 10:12:43.996495605 +0000 UTC m=+0.197437123 container start ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:12:44 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : New worker (300184) forked
Feb 28 10:12:44 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : Loading success.
Feb 28 10:12:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 427 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 173 op/s
Feb 28 10:12:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2928744060' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:12:45 compute-0 ceph-mon[76304]: pgmap v1402: 305 pgs: 305 active+clean; 427 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 173 op/s
Feb 28 10:12:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:12:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:12:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:12:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:12:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 404 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Feb 28 10:12:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:12:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/572620644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:12:46 compute-0 nova_compute[243452]: 2026-02-28 10:12:46.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG nova.compute.manager [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG nova.compute.manager [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing instance network info cache due to event network-changed-037eb744-3024-4a3d-b52c-894abe1cbac8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.240 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:47 compute-0 ceph-mon[76304]: pgmap v1403: 305 pgs: 305 active+clean; 404 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.312 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Processing event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.313 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG oslo_concurrency.lockutils [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 DEBUG nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.314 243456 WARNING nova.compute.manager [req-9b92de22-b2cb-496e-afca-9176cde4f2bb req-5b622c02-e211-4247-a597-875c2e7ed854 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received unexpected event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with vm_state building and task_state spawning.
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.315 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.320 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273567.3204653, 6cac1749-1126-44c9-b31c-1041025c52cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.320 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Resumed (Lifecycle Event)
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.323 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.326 243456 INFO nova.virt.libvirt.driver [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance spawned successfully.
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.327 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.357 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.371 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.377 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.378 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.379 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.380 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.380 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.381 243456 DEBUG nova.virt.libvirt.driver [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.397 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.455 243456 INFO nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 10.93 seconds to spawn the instance on the hypervisor.
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.456 243456 DEBUG nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.531 243456 INFO nova.compute.manager [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 12.87 seconds to build instance.
Feb 28 10:12:47 compute-0 nova_compute[243452]: 2026-02-28 10:12:47.606 243456 DEBUG oslo_concurrency.lockutils [None req-17d9a3a6-3640-468e-ac2d-d12a1ba69396 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 136 op/s
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.183 243456 DEBUG nova.network.neutron [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.226 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.227 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.228 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating image(s)
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.253 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.257 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.259 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.259 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Refreshing network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.303 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.329 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.333 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "de414522523c5d89af98d076848191bbc8097e6f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.334 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "de414522523c5d89af98d076848191bbc8097e6f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:48 compute-0 ceph-mon[76304]: pgmap v1404: 305 pgs: 305 active+clean; 405 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 136 op/s
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.749 243456 DEBUG nova.virt.libvirt.imagebackend [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.828 243456 DEBUG nova.virt.libvirt.imagebackend [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/337de210-963f-41af-92a4-16b5716eae17/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.829 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] cloning images/337de210-963f-41af-92a4-16b5716eae17@snap to None/30a5d845-ce28-490a-afe8-3b7552f02c63_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:12:48 compute-0 nova_compute[243452]: 2026-02-28 10:12:48.934 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "de414522523c5d89af98d076848191bbc8097e6f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.099 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.179 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] flattening vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.574 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Image rbd:vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.575 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Ensure instance console log exists: /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.576 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.576 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.577 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.579 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start _get_guest_xml network_info=[{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:12:20Z,direct_url=<?>,disk_format='raw',id=337de210-963f-41af-92a4-16b5716eae17,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-655402139-shelved',owner='cffbbb9857954b188c363e9565817bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:12:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.583 243456 WARNING nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.589 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.589 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.592 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.592 243456 DEBUG nova.virt.libvirt.host [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:12:20Z,direct_url=<?>,disk_format='raw',id=337de210-963f-41af-92a4-16b5716eae17,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-655402139-shelved',owner='cffbbb9857954b188c363e9565817bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:12:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.593 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.594 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.595 243456 DEBUG nova.virt.hardware [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.596 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.617 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.927 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updated VIF entry in instance network info cache for port 037eb744-3024-4a3d-b52c-894abe1cbac8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.928 243456 DEBUG nova.network.neutron [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [{"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:49 compute-0 nova_compute[243452]: 2026-02-28 10:12:49.945 243456 DEBUG oslo_concurrency.lockutils [req-29d507b4-1d25-45f0-8640-ae1d1b3c1b89 req-b6f6a8e3-25ba-41dc-aa53-0f967cf0b9c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30a5d845-ce28-490a-afe8-3b7552f02c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.085 243456 DEBUG nova.compute.manager [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.085 243456 DEBUG nova.compute.manager [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing instance network info cache due to event network-changed-ebb57b0b-4fa0-4ee0-8791-87271512797e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.086 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Refreshing network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:12:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 436 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Feb 28 10:12:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063679996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.176 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:50 compute-0 ceph-mon[76304]: pgmap v1405: 305 pgs: 305 active+clean; 436 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Feb 28 10:12:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3063679996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.667 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:50 compute-0 nova_compute[243452]: 2026-02-28 10:12:50.673 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:12:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294504967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.266 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.268 243456 DEBUG nova.virt.libvirt.vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='337de210-963f-41af-92a4-16b5716eae17',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.268 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.269 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.272 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.296 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <uuid>30a5d845-ce28-490a-afe8-3b7552f02c63</uuid>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <name>instance-0000003a</name>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherB-server-655402139</nova:name>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:12:49</nova:creationTime>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:user uuid="48e14a77ec8842f98a0d2efc6d5e167f">tempest-ServerActionsTestOtherB-909812490-project-member</nova:user>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:project uuid="cffbbb9857954b188c363e9565817bf2">tempest-ServerActionsTestOtherB-909812490</nova:project>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="337de210-963f-41af-92a4-16b5716eae17"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <nova:port uuid="037eb744-3024-4a3d-b52c-894abe1cbac8">
Feb 28 10:12:51 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <system>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="serial">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="uuid">30a5d845-ce28-490a-afe8-3b7552f02c63</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </system>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <os>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </os>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <features>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </features>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk">
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config">
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:12:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:32:d3:6f"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <target dev="tap037eb744-30"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/console.log" append="off"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <video>
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </video>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:12:51 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:12:51 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:12:51 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:12:51 compute-0 nova_compute[243452]: </domain>
Feb 28 10:12:51 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.299 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Preparing to wait for external event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.299 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.300 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.301 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.304 243456 DEBUG nova.virt.libvirt.vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='337de210-963f-41af-92a4-16b5716eae17',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:10:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='vir
tio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member',shelved_at='2026-02-28T10:12:28.942409',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='337de210-963f-41af-92a4-16b5716eae17'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.305 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.307 243456 DEBUG nova.network.os_vif_util [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.309 243456 DEBUG os_vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.311 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.313 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.318 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.319 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap037eb744-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.321 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap037eb744-30, col_values=(('external_ids', {'iface-id': '037eb744-3024-4a3d-b52c-894abe1cbac8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:d3:6f', 'vm-uuid': '30a5d845-ce28-490a-afe8-3b7552f02c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:51 compute-0 NetworkManager[49805]: <info>  [1772273571.3239] manager: (tap037eb744-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.338 243456 INFO os_vif [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.391 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.392 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.392 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] No VIF found with MAC fa:16:3e:32:d3:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.393 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Using config drive
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.413 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.431 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.470 243456 DEBUG nova.objects.instance [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'keypairs' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2294504967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.852 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Creating config drive at /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config
Feb 28 10:12:51 compute-0 nova_compute[243452]: 2026-02-28 10:12:51.860 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq3ozy6nv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.015 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq3ozy6nv" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.053 243456 DEBUG nova.storage.rbd_utils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] rbd image 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.059 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:12:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 452 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.228 243456 DEBUG oslo_concurrency.processutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config 30a5d845-ce28-490a-afe8-3b7552f02c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.230 243456 INFO nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting local config drive /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63/disk.config because it was imported into RBD.
Feb 28 10:12:52 compute-0 kernel: tap037eb744-30: entered promiscuous mode
Feb 28 10:12:52 compute-0 NetworkManager[49805]: <info>  [1772273572.2897] manager: (tap037eb744-30): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00544|binding|INFO|Claiming lport 037eb744-3024-4a3d-b52c-894abe1cbac8 for this chassis.
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00545|binding|INFO|037eb744-3024-4a3d-b52c-894abe1cbac8: Claiming fa:16:3e:32:d3:6f 10.100.0.9
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.299 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.301 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa bound to our chassis
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.303 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00546|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 ovn-installed in OVS
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00547|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 up in Southbound
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00548|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:12:52 compute-0 ovn_controller[146846]: 2026-02-28T10:12:52Z|00549|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.322 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49d66a51-f41d-469d-8538-6591a2f56474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 systemd-udevd[300542]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:12:52 compute-0 systemd-machined[209480]: New machine qemu-74-instance-0000003a.
Feb 28 10:12:52 compute-0 NetworkManager[49805]: <info>  [1772273572.3438] device (tap037eb744-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:12:52 compute-0 NetworkManager[49805]: <info>  [1772273572.3444] device (tap037eb744-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:12:52 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-0000003a.
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6396b56d-663f-4124-b38e-25f892a09998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.363 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db32b8-b89b-48a9-a8d3-975476ac0524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.388 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1079034-b029-4410-b2ff-16b9349bbff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.406 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[485f875a-1948-40b0-8df1-3df431038c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300552, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.432 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a48db22-7c22-463b-9977-b5385061d659]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300555, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300555, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.435 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.442 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.442 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.443 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:52.443 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.501 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updated VIF entry in instance network info cache for port ebb57b0b-4fa0-4ee0-8791-87271512797e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.502 243456 DEBUG nova.network.neutron [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [{"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.521 243456 DEBUG oslo_concurrency.lockutils [req-5aaa6f68-1cb2-4263-ad71-120e68dd10a0 req-292a61ac-e06f-49c5-8017-8b57c4ce8080 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6cac1749-1126-44c9-b31c-1041025c52cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.586 243456 DEBUG nova.compute.manager [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.587 243456 DEBUG oslo_concurrency.lockutils [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.588 243456 DEBUG nova.compute.manager [req-82d5f45f-d03f-42e9-8ad9-caecdf7d092f req-50aa4d16-ea4a-4234-9cfe-f0bcdd2ee522 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Processing event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:12:52 compute-0 ceph-mon[76304]: pgmap v1406: 305 pgs: 305 active+clean; 452 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.865 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.866 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8650043, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.866 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Started (Lifecycle Event)
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.871 243456 DEBUG nova.virt.libvirt.driver [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.875 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance spawned successfully.
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.889 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.912 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.912 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8662136, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.913 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Paused (Lifecycle Event)
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.929 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.933 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273572.8698864, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.933 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Resumed (Lifecycle Event)
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.956 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.959 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:12:52 compute-0 nova_compute[243452]: 2026-02-28 10:12:52.979 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:12:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:53 compute-0 ovn_controller[146846]: 2026-02-28T10:12:53Z|00550|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:12:53 compute-0 ovn_controller[146846]: 2026-02-28T10:12:53Z|00551|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 10:12:53 compute-0 nova_compute[243452]: 2026-02-28 10:12:53.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Feb 28 10:12:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Feb 28 10:12:53 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Feb 28 10:12:53 compute-0 nova_compute[243452]: 2026-02-28 10:12:53.725 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:53 compute-0 nova_compute[243452]: 2026-02-28 10:12:53.995 243456 DEBUG nova.compute.manager [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.081 243456 DEBUG oslo_concurrency.lockutils [None req-08926740-aed1-4c32-b1e8-d9e2c19a61d6 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 484 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 10:12:54 compute-0 sshd-session[300599]: Invalid user solana from 45.148.10.240 port 53102
Feb 28 10:12:54 compute-0 sshd-session[300599]: Connection closed by invalid user solana 45.148.10.240 port 53102 [preauth]
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.677 243456 DEBUG nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.679 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.680 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.680 243456 DEBUG oslo_concurrency.lockutils [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.681 243456 DEBUG nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:12:54 compute-0 nova_compute[243452]: 2026-02-28 10:12:54.681 243456 WARNING nova.compute.manager [req-4d946485-0ee1-4afc-8850-4d1564f4ff4b req-bf29f170-e081-4c5b-9603-d4c2f6320e28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state active and task_state None.
Feb 28 10:12:54 compute-0 ceph-mon[76304]: osdmap e235: 3 total, 3 up, 3 in
Feb 28 10:12:54 compute-0 ceph-mon[76304]: pgmap v1408: 305 pgs: 305 active+clean; 484 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 200 op/s
Feb 28 10:12:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 418 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 252 op/s
Feb 28 10:12:56 compute-0 nova_compute[243452]: 2026-02-28 10:12:56.162 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273561.1609914, 920aae47-311f-4921-818d-92025cc1abee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:12:56 compute-0 nova_compute[243452]: 2026-02-28 10:12:56.163 243456 INFO nova.compute.manager [-] [instance: 920aae47-311f-4921-818d-92025cc1abee] VM Stopped (Lifecycle Event)
Feb 28 10:12:56 compute-0 nova_compute[243452]: 2026-02-28 10:12:56.191 243456 DEBUG nova.compute.manager [None req-f0e48fa4-a7ab-451e-9e74-745f4468f067 - - - - - -] [instance: 920aae47-311f-4921-818d-92025cc1abee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:12:56 compute-0 nova_compute[243452]: 2026-02-28 10:12:56.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Feb 28 10:12:57 compute-0 ceph-mon[76304]: pgmap v1409: 305 pgs: 305 active+clean; 418 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 252 op/s
Feb 28 10:12:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Feb 28 10:12:57 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Feb 28 10:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.851 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:58 compute-0 ovn_controller[146846]: 2026-02-28T10:12:58Z|00552|binding|INFO|Releasing lport cad89901-4493-47e9-b0fc-45158375eff8 from this chassis (sb_readonly=0)
Feb 28 10:12:58 compute-0 ovn_controller[146846]: 2026-02-28T10:12:58Z|00553|binding|INFO|Releasing lport 4d3929e7-28fe-4c5f-a36e-3a516397a383 from this chassis (sb_readonly=0)
Feb 28 10:12:58 compute-0 nova_compute[243452]: 2026-02-28 10:12:58.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:12:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 394 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 2.9 MiB/s wr, 299 op/s
Feb 28 10:12:58 compute-0 ceph-mon[76304]: osdmap e236: 3 total, 3 up, 3 in
Feb 28 10:12:58 compute-0 nova_compute[243452]: 2026-02-28 10:12:58.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.118 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.118 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.119 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.120 243456 INFO nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Terminating instance
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.121 243456 DEBUG nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:12:59 compute-0 kernel: tap2c3f8e94-02 (unregistering): left promiscuous mode
Feb 28 10:12:59 compute-0 NetworkManager[49805]: <info>  [1772273579.1787] device (tap2c3f8e94-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 ovn_controller[146846]: 2026-02-28T10:12:59Z|00554|binding|INFO|Releasing lport 2c3f8e94-025d-4aea-97be-c325f6366e0d from this chassis (sb_readonly=0)
Feb 28 10:12:59 compute-0 ovn_controller[146846]: 2026-02-28T10:12:59Z|00555|binding|INFO|Setting lport 2c3f8e94-025d-4aea-97be-c325f6366e0d down in Southbound
Feb 28 10:12:59 compute-0 ovn_controller[146846]: 2026-02-28T10:12:59Z|00556|binding|INFO|Removing iface tap2c3f8e94-02 ovn-installed in OVS
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.188 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.193 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:e3:c3 10.100.0.7'], port_security=['fa:16:3e:81:e3:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9dcd58cd-d32a-4394-9f61-d8f4c1381371', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2c3f8e94-025d-4aea-97be-c325f6366e0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.194 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2c3f8e94-025d-4aea-97be-c325f6366e0d in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.195 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 ceph-mon[76304]: pgmap v1411: 305 pgs: 305 active+clean; 394 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 2.9 MiB/s wr, 299 op/s
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.213 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4df737d2-5659-4ef7-a4b0-8ac3ece77b7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Feb 28 10:12:59 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 14.942s CPU time.
Feb 28 10:12:59 compute-0 systemd-machined[209480]: Machine qemu-69-instance-0000003e terminated.
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.251 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcf5c05-273e-404b-8f4c-0f32c1b983da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.255 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a64ec726-478a-406b-88c5-b829623bfbda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.285 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d52dd52f-9d29-49fb-8347-e065c01649a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.302 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00251a57-62ce-41f7-8f03-a2d72e00c13b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b22e92-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:1f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493294, 'reachable_time': 16597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300614, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15c53f9b-d288-4050-ba6b-7c7db0a68858]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493305, 'tstamp': 493305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300615, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41b22e92-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493308, 'tstamp': 493308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300615, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.320 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.326 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b22e92-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b22e92-d0, col_values=(('external_ids', {'iface-id': 'cad89901-4493-47e9-b0fc-45158375eff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:12:59.327 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.359 243456 INFO nova.virt.libvirt.driver [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Instance destroyed successfully.
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.360 243456 DEBUG nova.objects.instance [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.374 243456 DEBUG nova.virt.libvirt.vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:11:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1290428002',display_name='tempest-ServerActionsTestOtherB-server-1290428002',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1290428002',id=62,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-2snxvf9r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:11:35Z,user_data=None,user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.375 243456 DEBUG nova.network.os_vif_util [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "address": "fa:16:3e:81:e3:c3", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c3f8e94-02", "ovs_interfaceid": "2c3f8e94-025d-4aea-97be-c325f6366e0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.376 243456 DEBUG nova.network.os_vif_util [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.376 243456 DEBUG os_vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.379 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c3f8e94-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.381 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.386 243456 INFO os_vif [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:e3:c3,bridge_name='br-int',has_traffic_filtering=True,id=2c3f8e94-025d-4aea-97be-c325f6366e0d,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c3f8e94-02')
Feb 28 10:12:59 compute-0 ovn_controller[146846]: 2026-02-28T10:12:59Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 10:12:59 compute-0 ovn_controller[146846]: 2026-02-28T10:12:59Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:c6:f2 10.100.0.13
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.679 243456 INFO nova.virt.libvirt.driver [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deleting instance files /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_del
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.681 243456 INFO nova.virt.libvirt.driver [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deletion of /var/lib/nova/instances/51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3_del complete
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.760 243456 INFO nova.compute.manager [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG oslo.service.loopingcall [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:12:59 compute-0 nova_compute[243452]: 2026-02-28 10:12:59.761 243456 DEBUG nova.network.neutron [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 341 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 265 op/s
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.434 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.434 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.435 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-unplugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.436 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG oslo_concurrency.lockutils [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.437 243456 DEBUG nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] No waiting events found dispatching network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.438 243456 WARNING nova.compute.manager [req-9b8ac49f-92c0-402d-8be9-7eb5163426db req-89b6d183-78ef-4d55-90de-c495d2c4e5a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received unexpected event network-vif-plugged-2c3f8e94-025d-4aea-97be-c325f6366e0d for instance with vm_state active and task_state deleting.
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.461 243456 DEBUG nova.network.neutron [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.479 243456 INFO nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Took 0.72 seconds to deallocate network for instance.
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.530 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.530 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.561 243456 DEBUG nova.compute.manager [req-a7d8093b-ed40-4858-9d20-940f3f454639 req-3763c14b-ada4-42f3-b9de-0c5bb4768ee4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Received event network-vif-deleted-2c3f8e94-025d-4aea-97be-c325f6366e0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:00 compute-0 nova_compute[243452]: 2026-02-28 10:13:00.627 243456 DEBUG oslo_concurrency.processutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:01 compute-0 ceph-mon[76304]: pgmap v1412: 305 pgs: 305 active+clean; 341 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 265 op/s
Feb 28 10:13:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993608191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.260 243456 DEBUG oslo_concurrency.processutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.268 243456 DEBUG nova.compute.provider_tree [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.287 243456 DEBUG nova.scheduler.client.report [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.312 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.359 243456 INFO nova.scheduler.client.report [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3
Feb 28 10:13:01 compute-0 nova_compute[243452]: 2026-02-28 10:13:01.450 243456 DEBUG oslo_concurrency.lockutils [None req-0d2a4899-0c81-40c4-9a4f-8aa4deb13dbc 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 326 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 230 op/s
Feb 28 10:13:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2993608191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.813 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.814 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.816 243456 INFO nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Terminating instance
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.817 243456 DEBUG nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:02 compute-0 kernel: tap037eb744-30 (unregistering): left promiscuous mode
Feb 28 10:13:02 compute-0 NetworkManager[49805]: <info>  [1772273582.8631] device (tap037eb744-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:02 compute-0 ovn_controller[146846]: 2026-02-28T10:13:02Z|00557|binding|INFO|Releasing lport 037eb744-3024-4a3d-b52c-894abe1cbac8 from this chassis (sb_readonly=0)
Feb 28 10:13:02 compute-0 ovn_controller[146846]: 2026-02-28T10:13:02Z|00558|binding|INFO|Setting lport 037eb744-3024-4a3d-b52c-894abe1cbac8 down in Southbound
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:02 compute-0 ovn_controller[146846]: 2026-02-28T10:13:02Z|00559|binding|INFO|Removing iface tap037eb744-30 ovn-installed in OVS
Feb 28 10:13:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.875 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:d3:6f 10.100.0.9'], port_security=['fa:16:3e:32:d3:6f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30a5d845-ce28-490a-afe8-3b7552f02c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cffbbb9857954b188c363e9565817bf2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '998e17ae-0a65-46f5-b817-1f8d2d0cba63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55c9b7f-6b12-4934-aabc-ab954d2b71e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=037eb744-3024-4a3d-b52c-894abe1cbac8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.876 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 037eb744-3024-4a3d-b52c-894abe1cbac8 in datapath 41b22e92-d251-48dd-9bf8-8f38cbd749fa unbound from our chassis
Feb 28 10:13:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.877 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41b22e92-d251-48dd-9bf8-8f38cbd749fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:02 compute-0 nova_compute[243452]: 2026-02-28 10:13:02.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.879 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ff6a15-d4a7-46c0-9086-af45ad4cde69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:02.880 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa namespace which is not needed anymore
Feb 28 10:13:02 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 28 10:13:02 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003a.scope: Consumed 10.703s CPU time.
Feb 28 10:13:02 compute-0 systemd-machined[209480]: Machine qemu-74-instance-0000003a terminated.
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : haproxy version is 2.8.14-c23fe91
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [NOTICE]   (293187) : path to executable is /usr/sbin/haproxy
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : Exiting Master process...
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : Exiting Master process...
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [ALERT]    (293187) : Current worker (293190) exited with code 143 (Terminated)
Feb 28 10:13:03 compute-0 neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa[293148]: [WARNING]  (293187) : All workers exited. Exiting... (0)
Feb 28 10:13:03 compute-0 systemd[1]: libpod-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope: Deactivated successfully.
Feb 28 10:13:03 compute-0 podman[300693]: 2026-02-28 10:13:03.023248499 +0000 UTC m=+0.048681620 container died 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6-userdata-shm.mount: Deactivated successfully.
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.058 243456 INFO nova.virt.libvirt.driver [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Instance destroyed successfully.
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.058 243456 DEBUG nova.objects.instance [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lazy-loading 'resources' on Instance uuid 30a5d845-ce28-490a-afe8-3b7552f02c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0f80b5e09dc8495c28b54de22f23404b9f324ad0e098a2a59a53d201acefe9-merged.mount: Deactivated successfully.
Feb 28 10:13:03 compute-0 podman[300693]: 2026-02-28 10:13:03.080150589 +0000 UTC m=+0.105583740 container cleanup 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:13:03 compute-0 systemd[1]: libpod-conmon-4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6.scope: Deactivated successfully.
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.093 243456 DEBUG nova.virt.libvirt.vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-655402139',display_name='tempest-ServerActionsTestOtherB-server-655402139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-655402139',id=58,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBqRVcgiI80flX1TIQUc3kE8k0bKr7rk5iIO2Yv6L90cfSe29lIWbu2zW6sL5TXWXaoniRhBj4ljGby24BY2TxBinS7LuRQtwYhYPFr+EO7gzkzEKLoiCVvk+xY5at1zhw==',key_name='tempest-keypair-1106870795',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cffbbb9857954b188c363e9565817bf2',ramdisk_id='',reservation_id='r-xj14uw0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-909812490',owner_user_name='tempest-ServerActionsTestOtherB-909812490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48e14a77ec8842f98a0d2efc6d5e167f',uuid=30a5d845-ce28-490a-afe8-3b7552f02c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.094 243456 DEBUG nova.network.os_vif_util [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converting VIF {"id": "037eb744-3024-4a3d-b52c-894abe1cbac8", "address": "fa:16:3e:32:d3:6f", "network": {"id": "41b22e92-d251-48dd-9bf8-8f38cbd749fa", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1840091376-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cffbbb9857954b188c363e9565817bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037eb744-30", "ovs_interfaceid": "037eb744-3024-4a3d-b52c-894abe1cbac8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.095 243456 DEBUG nova.network.os_vif_util [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.096 243456 DEBUG os_vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.098 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap037eb744-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.104 243456 INFO os_vif [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:d3:6f,bridge_name='br-int',has_traffic_filtering=True,id=037eb744-3024-4a3d-b52c-894abe1cbac8,network=Network(41b22e92-d251-48dd-9bf8-8f38cbd749fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037eb744-30')
Feb 28 10:13:03 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.152 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG oslo_concurrency.lockutils [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.153 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.154 243456 DEBUG nova.compute.manager [req-c26265a2-ad21-43f6-b2d2-7703329ff82e req-69f90944-3c6c-4bc2-af8d-8f9ef26f642d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-unplugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:13:03 compute-0 podman[300731]: 2026-02-28 10:13:03.155307752 +0000 UTC m=+0.057437716 container remove 4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.160 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41b3b9ae-f9aa-4c6c-b3bc-771d16512ba5]: (4, ('Sat Feb 28 10:13:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa (4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6)\n4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6\nSat Feb 28 10:13:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa (4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6)\n4eb838bc9ffba4d633a1660f20685244302c9cff7a640a88bf3ef5b3559b4dd6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e95839f-9684-4d56-88b2-03cbaf49f5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b22e92-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:03 compute-0 kernel: tap41b22e92-d0: left promiscuous mode
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.177 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3896a4c2-1f0d-493e-8d57-7afdba42f939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10c1502f-64da-44d4-9807-570847a98afd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea4f2e3-fddb-4907-aafd-8bc45f826082]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a07704cb-f85a-4568-868d-7e0aee8d8197]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493287, 'reachable_time': 34066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300764, 'error': None, 'target': 'ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.210 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41b22e92-d251-48dd-9bf8-8f38cbd749fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:13:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d41b22e92\x2dd251\x2d48dd\x2d9bf8\x2d8f38cbd749fa.mount: Deactivated successfully.
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.211 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d236c172-9d7a-471d-95f5-4e699b54c786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:03 compute-0 ceph-mon[76304]: pgmap v1413: 305 pgs: 305 active+clean; 326 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 230 op/s
Feb 28 10:13:03 compute-0 ceph-mon[76304]: osdmap e237: 3 total, 3 up, 3 in
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.411 243456 INFO nova.virt.libvirt.driver [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deleting instance files /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.412 243456 INFO nova.virt.libvirt.driver [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deletion of /var/lib/nova/instances/30a5d845-ce28-490a-afe8-3b7552f02c63_del complete
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.487 243456 INFO nova.compute.manager [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.489 243456 DEBUG oslo.service.loopingcall [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.489 243456 DEBUG nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.490 243456 DEBUG nova.network.neutron [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.501 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:03.502 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:03 compute-0 nova_compute[243452]: 2026-02-28 10:13:03.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 312 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 206 op/s
Feb 28 10:13:04 compute-0 nova_compute[243452]: 2026-02-28 10:13:04.822 243456 DEBUG nova.network.neutron [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:04 compute-0 nova_compute[243452]: 2026-02-28 10:13:04.841 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Took 1.35 seconds to deallocate network for instance.
Feb 28 10:13:04 compute-0 nova_compute[243452]: 2026-02-28 10:13:04.881 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:04 compute-0 nova_compute[243452]: 2026-02-28 10:13:04.881 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:04 compute-0 nova_compute[243452]: 2026-02-28 10:13:04.936 243456 DEBUG oslo_concurrency.processutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.048 243456 DEBUG nova.compute.manager [req-6f25a1de-e1e5-482e-a75f-ffcf7321c849 req-35232f4c-baba-49a1-813d-28508d0e3a2b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-deleted-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:05 compute-0 podman[300768]: 2026-02-28 10:13:05.136856909 +0000 UTC m=+0.062043266 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 10:13:05 compute-0 podman[300767]: 2026-02-28 10:13:05.185093685 +0000 UTC m=+0.114562722 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:13:05 compute-0 ceph-mon[76304]: pgmap v1415: 305 pgs: 305 active+clean; 312 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 206 op/s
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.319 243456 DEBUG nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.319 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG oslo_concurrency.lockutils [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.320 243456 DEBUG nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] No waiting events found dispatching network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.321 243456 WARNING nova.compute.manager [req-c51e297f-f64b-4ab3-b4a2-5698ea2dd7b3 req-b1bbb829-1e6e-4d8c-9b50-7d780f379c78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Received unexpected event network-vif-plugged-037eb744-3024-4a3d-b52c-894abe1cbac8 for instance with vm_state deleted and task_state None.
Feb 28 10:13:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919389033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.519 243456 DEBUG oslo_concurrency.processutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.525 243456 DEBUG nova.compute.provider_tree [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.552 243456 DEBUG nova.scheduler.client.report [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.582 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.606 243456 INFO nova.scheduler.client.report [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Deleted allocations for instance 30a5d845-ce28-490a-afe8-3b7552f02c63
Feb 28 10:13:05 compute-0 nova_compute[243452]: 2026-02-28 10:13:05.687 243456 DEBUG oslo_concurrency.lockutils [None req-c50cc179-4001-429f-b853-cc2821e25b25 48e14a77ec8842f98a0d2efc6d5e167f cffbbb9857954b188c363e9565817bf2 - - default default] Lock "30a5d845-ce28-490a-afe8-3b7552f02c63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 265 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.9 MiB/s wr, 203 op/s
Feb 28 10:13:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/919389033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.849 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.850 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.871 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.908 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.938 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.977 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.978 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.987 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:06 compute-0 nova_compute[243452]: 2026-02-28 10:13:06.988 243456 INFO nova.compute.claims [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.016 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.145 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:07 compute-0 ceph-mon[76304]: pgmap v1416: 305 pgs: 305 active+clean; 265 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.9 MiB/s wr, 203 op/s
Feb 28 10:13:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:07.504 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3077429748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.697 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.703 243456 DEBUG nova.compute.provider_tree [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.730 243456 DEBUG nova.scheduler.client.report [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.773 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.775 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.778 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.786 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.786 243456 INFO nova.compute.claims [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.871 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.872 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.892 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.911 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:07 compute-0 nova_compute[243452]: 2026-02-28 10:13:07.962 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 233 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.145 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.147 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.147 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating image(s)
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.172 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.198 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.225 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.229 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.262 243456 DEBUG nova.policy [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.295 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.295 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.296 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.297 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3077429748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.331 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.336 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.371 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.371 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.372 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.374 243456 INFO nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Terminating instance
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.376 243456 DEBUG nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643035264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.514 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.521 243456 DEBUG nova.compute.provider_tree [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.538 243456 DEBUG nova.scheduler.client.report [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.570 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.571 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.616 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.617 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.648 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.684 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:08 compute-0 kernel: tapebb57b0b-4f (unregistering): left promiscuous mode
Feb 28 10:13:08 compute-0 NetworkManager[49805]: <info>  [1772273588.7078] device (tapebb57b0b-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.716 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 ovn_controller[146846]: 2026-02-28T10:13:08Z|00560|binding|INFO|Releasing lport ebb57b0b-4fa0-4ee0-8791-87271512797e from this chassis (sb_readonly=0)
Feb 28 10:13:08 compute-0 ovn_controller[146846]: 2026-02-28T10:13:08Z|00561|binding|INFO|Setting lport ebb57b0b-4fa0-4ee0-8791-87271512797e down in Southbound
Feb 28 10:13:08 compute-0 ovn_controller[146846]: 2026-02-28T10:13:08Z|00562|binding|INFO|Removing iface tapebb57b0b-4f ovn-installed in OVS
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.729 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c6:f2 10.100.0.13'], port_security=['fa:16:3e:e6:c6:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6cac1749-1126-44c9-b31c-1041025c52cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b83705c4693849a58c70b1271f24f320', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49856350-974b-427f-aa68-4afea31e430e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b184abca-cc34-4f4e-8734-d06f0ef64e8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ebb57b0b-4fa0-4ee0-8791-87271512797e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.731 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ebb57b0b-4fa0-4ee0-8791-87271512797e in datapath 8f735fcd-0d4b-4c56-85c4-3ead65135429 unbound from our chassis
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.733 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f735fcd-0d4b-4c56-85c4-3ead65135429, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[05f2c8bf-094c-48aa-8f92-a4e5dc3c9771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:08.736 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 namespace which is not needed anymore
Feb 28 10:13:08 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 28 10:13:08 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Consumed 12.600s CPU time.
Feb 28 10:13:08 compute-0 systemd-machined[209480]: Machine qemu-73-instance-00000042 terminated.
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.805 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.806 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.807 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating image(s)
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.835 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.858 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.882 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.888 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:08 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : haproxy version is 2.8.14-c23fe91
Feb 28 10:13:08 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [NOTICE]   (300182) : path to executable is /usr/sbin/haproxy
Feb 28 10:13:08 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [WARNING]  (300182) : Exiting Master process...
Feb 28 10:13:08 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [ALERT]    (300182) : Current worker (300184) exited with code 143 (Terminated)
Feb 28 10:13:08 compute-0 neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429[300178]: [WARNING]  (300182) : All workers exited. Exiting... (0)
Feb 28 10:13:08 compute-0 systemd[1]: libpod-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope: Deactivated successfully.
Feb 28 10:13:08 compute-0 conmon[300178]: conmon ed7bf78f9237022bffc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope/container/memory.events
Feb 28 10:13:08 compute-0 podman[301006]: 2026-02-28 10:13:08.905002761 +0000 UTC m=+0.074784054 container died ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.933 243456 DEBUG nova.policy [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.941 243456 INFO nova.virt.libvirt.driver [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Instance destroyed successfully.
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.942 243456 DEBUG nova.objects.instance [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lazy-loading 'resources' on Instance uuid 6cac1749-1126-44c9-b31c-1041025c52cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.957 243456 DEBUG nova.virt.libvirt.vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1212768098',display_name='tempest-ServersTestManualDisk-server-1212768098',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1212768098',id=66,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0ik5kXdNalWXEzR/gysmRW7reVUwqtO6ef6zS+WfZMKLSAXSFSlMXiq8+ds7SdKqMeBVaNe0jzsweQ7HRyVzZGYG856pZmMCyHy7oKBxG1WfL6nFJqI05F/mBkTXIwrw==',key_name='tempest-keypair-1576574983',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:12:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b83705c4693849a58c70b1271f24f320',ramdisk_id='',reservation_id='r-e45nckj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1613202530',owner_user_name='tempest-ServersTestManualDisk-1613202530-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:12:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2465d2d41534ef098e24bdd413eefab',uuid=6cac1749-1126-44c9-b31c-1041025c52cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.958 243456 DEBUG nova.network.os_vif_util [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converting VIF {"id": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "address": "fa:16:3e:e6:c6:f2", "network": {"id": "8f735fcd-0d4b-4c56-85c4-3ead65135429", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1080673219-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b83705c4693849a58c70b1271f24f320", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebb57b0b-4f", "ovs_interfaceid": "ebb57b0b-4fa0-4ee0-8791-87271512797e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.959 243456 DEBUG nova.network.os_vif_util [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.959 243456 DEBUG os_vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.962 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebb57b0b-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.970 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:08 compute-0 nova_compute[243452]: 2026-02-28 10:13:08.972 243456 INFO os_vif [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:c6:f2,bridge_name='br-int',has_traffic_filtering=True,id=ebb57b0b-4fa0-4ee0-8791-87271512797e,network=Network(8f735fcd-0d4b-4c56-85c4-3ead65135429),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebb57b0b-4f')
Feb 28 10:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9745197d659a7b64f0784193f16e075d7a26cccfd957b8957e2a616a406e62-merged.mount: Deactivated successfully.
Feb 28 10:13:09 compute-0 podman[301006]: 2026-02-28 10:13:09.000715922 +0000 UTC m=+0.170497225 container cleanup ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:13:09 compute-0 systemd[1]: libpod-conmon-ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e.scope: Deactivated successfully.
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.023 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.025 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.026 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.026 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.047 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.052 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bd2e7775-9332-417e-a139-0847263b3343_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:09 compute-0 podman[301116]: 2026-02-28 10:13:09.099524731 +0000 UTC m=+0.074483826 container remove ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.105 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55decd61-747a-4002-bc4b-80d56a5e48c8]: (4, ('Sat Feb 28 10:13:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 (ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e)\ned7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e\nSat Feb 28 10:13:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 (ed7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e)\ned7bf78f9237022bffc04b41eca186c60caa019da9dc5ce2a8eb74441a12568e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.107 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[632b61d4-8821-45eb-8c69-3b04a4ebf3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.108 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f735fcd-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:09 compute-0 kernel: tap8f735fcd-00: left promiscuous mode
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8f5d0f-02f0-47f7-8c87-732f61f6ff08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef31b91e-9224-4816-8a7c-5672ec79ffc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.142 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d402da-d308-471b-a09a-8c9bbd7996b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee738fb2-1755-4d7d-99bf-bdda6cf7d9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507053, 'reachable_time': 30067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301181, 'error': None, 'target': 'ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d8f735fcd\x2d0d4b\x2d4c56\x2d85c4\x2d3ead65135429.mount: Deactivated successfully.
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.168 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f735fcd-0d4b-4c56-85c4-3ead65135429 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:13:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:09.168 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[26fe1f86-58a8-42f1-a3a0-25ee6245c412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.172 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.209 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Successfully created port: 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:09 compute-0 ceph-mon[76304]: pgmap v1417: 305 pgs: 305 active+clean; 233 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Feb 28 10:13:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1643035264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.429 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bd2e7775-9332-417e-a139-0847263b3343_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.464 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.504 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.505 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.506 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.506 243456 DEBUG oslo_concurrency.lockutils [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.507 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.507 243456 DEBUG nova.compute.manager [req-cd114ef6-0c01-4db3-85ec-4283ba448670 req-935df6da-7721-43b2-ad6c-51a26855d5af 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-unplugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.508 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.508 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Ensure instance console log exists: /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.509 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.509 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.510 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.517 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image bd2e7775-9332-417e-a139-0847263b3343_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.678 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.710 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.711 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Ensure instance console log exists: /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.713 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.780 243456 INFO nova.virt.libvirt.driver [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deleting instance files /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf_del
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.782 243456 INFO nova.virt.libvirt.driver [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deletion of /var/lib/nova/instances/6cac1749-1126-44c9-b31c-1041025c52cf_del complete
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.852 243456 INFO nova.compute.manager [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 1.48 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.853 243456 DEBUG oslo.service.loopingcall [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.854 243456 DEBUG nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.854 243456 DEBUG nova.network.neutron [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:09 compute-0 nova_compute[243452]: 2026-02-28 10:13:09.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 251 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.249 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Successfully created port: cac4ab79-f021-4f19-8f15-95ea09460f70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:10 compute-0 ceph-mon[76304]: pgmap v1418: 305 pgs: 305 active+clean; 251 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.689 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Successfully updated port: 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.712 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.713 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.713 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:13:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6636 writes, 29K keys, 6636 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6636 writes, 6636 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1802 writes, 8224 keys, 1802 commit groups, 1.0 writes per commit group, ingest: 10.65 MB, 0.02 MB/s
                                           Interval WAL: 1802 writes, 1802 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     56.4      0.59              0.08        16    0.037       0      0       0.0       0.0
                                             L6      1/0    8.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    135.5    111.0      1.01              0.28        15    0.068     71K   8331       0.0       0.0
                                            Sum      1/0    8.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     85.8     91.0      1.60              0.36        31    0.052     71K   8331       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     87.5     90.1      0.46              0.11         8    0.057     22K   2559       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.5    111.0      1.01              0.28        15    0.068     71K   8331       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     56.8      0.58              0.08        15    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.6 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 15.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000203 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(985,14.95 MB,4.9186%) FilterBlock(32,205.42 KB,0.0659892%) IndexBlock(32,381.70 KB,0.122617%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.839 243456 DEBUG nova.compute.manager [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-changed-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.840 243456 DEBUG nova.compute.manager [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Refreshing instance network info cache due to event network-changed-0f76084a-5cb2-4246-adc3-ec58ff470ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:10 compute-0 nova_compute[243452]: 2026-02-28 10:13:10.840 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.029 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.188 243456 DEBUG nova.network.neutron [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.207 243456 INFO nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Took 1.35 seconds to deallocate network for instance.
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.267 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.268 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.551 243456 DEBUG oslo_concurrency.processutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.622 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Successfully updated port: cac4ab79-f021-4f19-8f15-95ea09460f70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.633 243456 DEBUG nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.633 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG oslo_concurrency.lockutils [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.634 243456 DEBUG nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] No waiting events found dispatching network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.635 243456 WARNING nova.compute.manager [req-a47970ce-ea85-4185-b97f-8da042888cd5 req-bd55c91d-dd7d-4680-b0df-d9a9c1e5058d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received unexpected event network-vif-plugged-ebb57b0b-4fa0-4ee0-8791-87271512797e for instance with vm_state deleted and task_state None.
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.636 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:11 compute-0 nova_compute[243452]: 2026-02-28 10:13:11.883 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166054627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.122 243456 DEBUG oslo_concurrency.processutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.128 243456 DEBUG nova.compute.provider_tree [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 262 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.146 243456 DEBUG nova.scheduler.client.report [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/166054627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.167 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.208 243456 INFO nova.scheduler.client.report [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Deleted allocations for instance 6cac1749-1126-44c9-b31c-1041025c52cf
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.276 243456 DEBUG oslo_concurrency.lockutils [None req-cf157fcf-d06a-4e7c-b8af-96a973c98920 c2465d2d41534ef098e24bdd413eefab b83705c4693849a58c70b1271f24f320 - - default default] Lock "6cac1749-1126-44c9-b31c-1041025c52cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.430 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.451 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.451 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance network_info: |[{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.452 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.452 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Refreshing network info cache for port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.455 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start _get_guest_xml network_info=[{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.459 243456 WARNING nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.465 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.468 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.483 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.484 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.485 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.485 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.486 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.486 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.487 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.488 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.488 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.489 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.493 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:12 compute-0 nova_compute[243452]: 2026-02-28 10:13:12.979 243456 DEBUG nova.compute.manager [req-acb828ae-5d02-4a47-bd2d-bd1ea90a28d4 req-415510a3-48e8-4906-aadd-defe81c4d6b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Received event network-vif-deleted-ebb57b0b-4fa0-4ee0-8791-87271512797e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1449638553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.058 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.082 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.086 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.167 243456 DEBUG nova.network.neutron [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:13 compute-0 ceph-mon[76304]: pgmap v1419: 305 pgs: 305 active+clean; 262 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Feb 28 10:13:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1449638553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.195 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.195 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance network_info: |[{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.199 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start _get_guest_xml network_info=[{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.203 243456 WARNING nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.207 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.208 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.212 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.212 243456 DEBUG nova.virt.libvirt.host [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.213 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.213 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.214 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.215 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.216 243456 DEBUG nova.virt.hardware [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.220 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312306103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.649 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.652 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleC
reateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:07Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.652 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.654 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.655 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.672 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <uuid>6af19b7d-b0a9-40fe-8d1a-f38c95452a10</uuid>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <name>instance-00000043</name>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-1120322056-1</nova:name>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:12</nova:creationTime>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <nova:port uuid="0f76084a-5cb2-4246-adc3-ec58ff470ed4">
Feb 28 10:13:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="serial">6af19b7d-b0a9-40fe-8d1a-f38c95452a10</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="uuid">6af19b7d-b0a9-40fe-8d1a-f38c95452a10</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk">
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config">
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:20:c9:da"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <target dev="tap0f76084a-5c"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/console.log" append="off"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.679 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Preparing to wait for external event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.679 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.680 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.680 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.681 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:07Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.682 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.683 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.683 243456 DEBUG os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.685 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.689 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f76084a-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.690 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f76084a-5c, col_values=(('external_ids', {'iface-id': '0f76084a-5cb2-4246-adc3-ec58ff470ed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:c9:da', 'vm-uuid': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:13 compute-0 NetworkManager[49805]: <info>  [1772273593.6926] manager: (tap0f76084a-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.698 243456 INFO os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c')
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.758 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.759 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.759 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:20:c9:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.760 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Using config drive
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.786 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.796 243456 DEBUG nova.compute.manager [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-changed-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.797 243456 DEBUG nova.compute.manager [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Refreshing instance network info cache due to event network-changed-cac4ab79-f021-4f19-8f15-95ea09460f70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.797 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.798 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.798 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Refreshing network info cache for port cac4ab79-f021-4f19-8f15-95ea09460f70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409038409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.834 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.858 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:13 compute-0 nova_compute[243452]: 2026-02-28 10:13:13.863 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 246 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Feb 28 10:13:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/312306103' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2409038409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.358 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273579.356753, 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.359 243456 INFO nova.compute.manager [-] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] VM Stopped (Lifecycle Event)
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.382 243456 DEBUG nova.compute.manager [None req-f19a1aa9-fb14-46d0-aa5b-5569860717c6 - - - - - -] [instance: 51ed0a9d-dcdb-455f-a0e9-1c7259ff8cc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815072042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.425 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.427 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:08Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.427 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.428 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.429 243456 DEBUG nova.objects.instance [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.450 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <uuid>bd2e7775-9332-417e-a139-0847263b3343</uuid>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <name>instance-00000044</name>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-1120322056-2</nova:name>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:13</nova:creationTime>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <nova:port uuid="cac4ab79-f021-4f19-8f15-95ea09460f70">
Feb 28 10:13:14 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="serial">bd2e7775-9332-417e-a139-0847263b3343</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="uuid">bd2e7775-9332-417e-a139-0847263b3343</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bd2e7775-9332-417e-a139-0847263b3343_disk">
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bd2e7775-9332-417e-a139-0847263b3343_disk.config">
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:14 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:1b:f3:42"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <target dev="tapcac4ab79-f0"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/console.log" append="off"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:14 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:14 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:14 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:14 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:14 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.451 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Preparing to wait for external event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.451 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.452 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.452 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.virt.libvirt.vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest
-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:08Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.453 243456 DEBUG nova.network.os_vif_util [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.454 243456 DEBUG os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.455 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.455 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcac4ab79-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.458 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcac4ab79-f0, col_values=(('external_ids', {'iface-id': 'cac4ab79-f021-4f19-8f15-95ea09460f70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:f3:42', 'vm-uuid': 'bd2e7775-9332-417e-a139-0847263b3343'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:14 compute-0 NetworkManager[49805]: <info>  [1772273594.4599] manager: (tapcac4ab79-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.459 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.464 243456 INFO os_vif [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0')
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.506 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.506 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.507 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:1b:f3:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.507 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Using config drive
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.527 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.766 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Creating config drive at /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.770 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsjyzgadp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.854 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updated VIF entry in instance network info cache for port 0f76084a-5cb2-4246-adc3-ec58ff470ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.854 243456 DEBUG nova.network.neutron [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [{"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.884 243456 DEBUG oslo_concurrency.lockutils [req-b6c4a804-dd18-49d0-a215-44246bf8b43b req-1476afa8-4ec2-40d4-bc73-a6cde1736da3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6af19b7d-b0a9-40fe-8d1a-f38c95452a10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.902 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsjyzgadp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.927 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:14 compute-0 nova_compute[243452]: 2026-02-28 10:13:14.930 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.097 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config 6af19b7d-b0a9-40fe-8d1a-f38c95452a10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.098 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deleting local config drive /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10/disk.config because it was imported into RBD.
Feb 28 10:13:15 compute-0 kernel: tap0f76084a-5c: entered promiscuous mode
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.1551] manager: (tap0f76084a-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00563|binding|INFO|Claiming lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 for this chassis.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00564|binding|INFO|0f76084a-5cb2-4246-adc3-ec58ff470ed4: Claiming fa:16:3e:20:c9:da 10.100.0.4
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.178 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:c9:da 10.100.0.4'], port_security=['fa:16:3e:20:c9:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0f76084a-5cb2-4246-adc3-ec58ff470ed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.180 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 bound to our chassis
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.181 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:15 compute-0 systemd-udevd[301546]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:15 compute-0 systemd-machined[209480]: New machine qemu-75-instance-00000043.
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c33b7c62-df71-4178-b041-85a259f639f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.196 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ee6adef-21 in ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.199 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Creating config drive at /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.200 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ee6adef-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.200 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5e465bef-3ef5-4c6b-8cec-5c0360aacc28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.202 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6f4d6f-3a48-4ff6-ab5c-f2aac862f1a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.202 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7nffun_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00565|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 ovn-installed in OVS
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00566|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 up in Southbound
Feb 28 10:13:15 compute-0 ceph-mon[76304]: pgmap v1420: 305 pgs: 305 active+clean; 246 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Feb 28 10:13:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2815072042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.2137] device (tap0f76084a-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:15 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000043.
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.2148] device (tap0f76084a-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.220 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[039c7ff1-f46c-4c0d-adc6-5ad93bece74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed781ac-01fd-4b28-8eac-0f3fc7d06396]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.265 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bec1a43f-d45e-4838-b01c-2b280d9d2daa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84b472b3-421a-40ef-bab2-86d28e60efce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 systemd-udevd[301551]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.2732] manager: (tap2ee6adef-20): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.305 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46424fb8-c90b-4469-a0d9-e7c96f17bb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.309 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9c7352-6098-4492-8874-fff08627d7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.3385] device (tap2ee6adef-20): carrier: link connected
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.344 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9e5a80-a2b9-4100-8d6c-58fe6ebbd437]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.348 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7nffun_1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b67e63f8-5132-444e-8353-064e383941a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301593, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84d19f9d-624a-4014-aa3d-2a163c02f953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:79d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510262, 'tstamp': 510262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301602, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.392 243456 DEBUG nova.storage.rbd_utils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image bd2e7775-9332-417e-a139-0847263b3343_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.401 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config bd2e7775-9332-417e-a139-0847263b3343_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e65d5932-c707-40ff-ac45-98cbbd3ced1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301606, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfa8d41-3d18-4728-8df2-778828033cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b84d3ac4-82ec-4f35-994b-fba50ed25d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.492 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.493 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.493 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.4957] manager: (tap2ee6adef-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Feb 28 10:13:15 compute-0 kernel: tap2ee6adef-20: entered promiscuous mode
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.500 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00567|binding|INFO|Releasing lport 51b33bea-9c2c-447c-817e-7f72887f045f from this chassis (sb_readonly=0)
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.511 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.512 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23b63601-4457-4ee7-803a-499c37c3b96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.514 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.514 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'env', 'PROCESS_TAG=haproxy-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ee6adef-26da-41a9-91a7-f9a806d37d26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.578 243456 DEBUG oslo_concurrency.processutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config bd2e7775-9332-417e-a139-0847263b3343_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.579 243456 INFO nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deleting local config drive /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343/disk.config because it was imported into RBD.
Feb 28 10:13:15 compute-0 kernel: tapcac4ab79-f0: entered promiscuous mode
Feb 28 10:13:15 compute-0 systemd-udevd[301573]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.6344] manager: (tapcac4ab79-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00568|binding|INFO|Claiming lport cac4ab79-f021-4f19-8f15-95ea09460f70 for this chassis.
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00569|binding|INFO|cac4ab79-f021-4f19-8f15-95ea09460f70: Claiming fa:16:3e:1b:f3:42 10.100.0.8
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.6468] device (tapcac4ab79-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:15 compute-0 NetworkManager[49805]: <info>  [1772273595.6485] device (tapcac4ab79-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00570|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 ovn-installed in OVS
Feb 28 10:13:15 compute-0 ovn_controller[146846]: 2026-02-28T10:13:15Z|00571|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 up in Southbound
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:15.653 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:15 compute-0 systemd-machined[209480]: New machine qemu-76-instance-00000044.
Feb 28 10:13:15 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000044.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.853 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.852532, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.853 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Started (Lifecycle Event)
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.876 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.880 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.8557558, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.881 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Paused (Lifecycle Event)
Feb 28 10:13:15 compute-0 podman[301721]: 2026-02-28 10:13:15.906341765 +0000 UTC m=+0.071604964 container create e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.911 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.919 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.920 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Processing event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.921 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 DEBUG oslo_concurrency.lockutils [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 DEBUG nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.922 243456 WARNING nova.compute.manager [req-65488ff3-2726-41d0-a31b-7205de28d267 req-85e0abad-e4e0-4bb4-ac9a-79a954e8cd44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received unexpected event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with vm_state building and task_state spawning.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.923 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.924 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.942 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.947 243456 INFO nova.virt.libvirt.driver [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance spawned successfully.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.948 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.959 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.959 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:15 compute-0 podman[301721]: 2026-02-28 10:13:15.863239073 +0000 UTC m=+0.028502262 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.965 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.965 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273595.934672, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.966 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Resumed (Lifecycle Event)
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.977 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 systemd[1]: Started libpod-conmon-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope.
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.977 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.978 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.978 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.979 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.980 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.985 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:15 compute-0 nova_compute[243452]: 2026-02-28 10:13:15.995 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.001 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642c3d42b12e83ccf48d08b0d735927c24084e9d8d6d8b72eb20cdfcacadcea2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:16 compute-0 podman[301721]: 2026-02-28 10:13:16.029702324 +0000 UTC m=+0.194965513 container init e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:13:16 compute-0 podman[301721]: 2026-02-28 10:13:16.035218929 +0000 UTC m=+0.200482108 container start e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.051 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:16 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : New worker (301743) forked
Feb 28 10:13:16 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : Loading success.
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.073 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 7.93 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.074 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.093 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.094 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.102 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.103 243456 INFO nova.compute.claims [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.115 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.118 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 246 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.133 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f80a9a24-f1c7-492c-a7dc-d8ceb7b69ae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.148 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 9.20 seconds to build instance.
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.153 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updated VIF entry in instance network info cache for port cac4ab79-f021-4f19-8f15-95ea09460f70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.154 243456 DEBUG nova.network.neutron [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [{"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.161 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c50a34c-3c85-49e3-a5fe-cda7398850b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.166 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bac7bc6e-a1db-4154-ab4f-c068b1a463f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.172 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.173 243456 DEBUG oslo_concurrency.lockutils [req-9f145982-69b0-4fed-9abb-8465aef0f4c0 req-298e812b-114d-42b4-96c9-d3a7dcd44400 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bd2e7775-9332-417e-a139-0847263b3343" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.202 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d3666be0-22e9-43dc-8172-0f9cf4e4f4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.220 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[604ca787-6d53-4cbf-8b9f-b9fa1f3bd6d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301798, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.240 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[235790e2-2884-432f-b090-7d826f0dc602]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510275, 'tstamp': 510275}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301799, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510277, 'tstamp': 510277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301799, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.242 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.244 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.245 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:16.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.251 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.292 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273596.2845995, bd2e7775-9332-417e-a139-0847263b3343 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.293 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Started (Lifecycle Event)
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.312 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.318 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273596.2851293, bd2e7775-9332-417e-a139-0847263b3343 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.318 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Paused (Lifecycle Event)
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.335 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.338 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.356 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798364864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.906 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.911 243456 DEBUG nova.compute.provider_tree [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.937 243456 DEBUG nova.scheduler.client.report [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.966 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:16 compute-0 nova_compute[243452]: 2026-02-28 10:13:16.967 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.021 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.042 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.058 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.157 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.158 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.159 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating image(s)
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.188 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:17 compute-0 ceph-mon[76304]: pgmap v1421: 305 pgs: 305 active+clean; 246 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Feb 28 10:13:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3798364864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.214 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.242 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.248 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.334 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.335 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.336 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.336 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.362 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.367 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.641 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.711 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] resizing rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:17 compute-0 nova_compute[243452]: 2026-02-28 10:13:17.800 243456 DEBUG nova.objects.instance [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'migration_context' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.016 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.017 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.026 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Processing event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.027 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 DEBUG oslo_concurrency.lockutils [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 DEBUG nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.028 243456 WARNING nova.compute.manager [req-fcb5c50e-fd39-4879-a9b8-19d46f57275e req-e0a79079-7c1f-428d-89df-ffdc0ee21f44 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state building and task_state spawning.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.029 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.029 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Ensure instance console log exists: /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.030 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.031 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.032 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.033 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.037 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.037 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273598.0369453, bd2e7775-9332-417e-a139-0847263b3343 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.038 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Resumed (Lifecycle Event)
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.043 243456 INFO nova.virt.libvirt.driver [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance spawned successfully.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.044 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.046 243456 WARNING nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.056 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273583.055168, 30a5d845-ce28-490a-afe8-3b7552f02c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.056 243456 INFO nova.compute.manager [-] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] VM Stopped (Lifecycle Event)
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.060 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.060 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.065 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.066 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.libvirt.host [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.067 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.068 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.069 243456 DEBUG nova.virt.hardware [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.071 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.118 243456 DEBUG nova.compute.manager [None req-4b289d54-5d0f-48e8-917f-8c9c0040b96e - - - - - -] [instance: 30a5d845-ce28-490a-afe8-3b7552f02c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.127 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.128 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.128 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.129 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.129 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.130 243456 DEBUG nova.virt.libvirt.driver [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 263 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.136 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.161 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.162 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.166 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.169 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.169 243456 INFO nova.compute.claims [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.206 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 9.40 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.206 243456 DEBUG nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.269 243456 INFO nova.compute.manager [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 11.27 seconds to build instance.
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.285 243456 DEBUG oslo_concurrency.lockutils [None req-bc35f10e-7937-41d0-b79d-2dbc23fb41ee c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.332 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.377 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969748596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.681 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.706 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.709 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567930006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.902 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.909 243456 DEBUG nova.compute.provider_tree [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.928 243456 DEBUG nova.scheduler.client.report [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.953 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.954 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.960 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.961 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.961 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:13:18 compute-0 nova_compute[243452]: 2026-02-28 10:13:18.962 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.063 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.064 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.084 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.099 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:19 compute-0 ceph-mon[76304]: pgmap v1422: 305 pgs: 305 active+clean; 263 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Feb 28 10:13:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/969748596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1567930006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.241 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.243 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.244 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating image(s)
Feb 28 10:13:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422064088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.367 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.397 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.426 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.431 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.469 243456 DEBUG nova.policy [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.472 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.474 243456 DEBUG nova.objects.instance [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'pci_devices' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.490 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <uuid>43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</uuid>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <name>instance-00000045</name>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersAaction247Test-server-1375251304</nova:name>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:18</nova:creationTime>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:user uuid="a11a56ff5fb844d1b03a25da76136c9d">tempest-ServersAaction247Test-1778300402-project-member</nova:user>
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <nova:project uuid="efd550ad4a4044b5b976691be30a846c">tempest-ServersAaction247Test-1778300402</nova:project>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="serial">43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="uuid">43abdac9-81fa-437c-8a7c-fb7b1a9ff97f</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk">
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config">
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/console.log" append="off"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.508 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.509 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.510 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.510 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.533 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.538 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751327151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.613 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.614 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.615 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Using config drive
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.645 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.654 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.740 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.741 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.751 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.752 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.757 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.758 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.799 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.851 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Creating config drive at /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.855 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjmwgswpk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.897 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:19 compute-0 nova_compute[243452]: 2026-02-28 10:13:19.997 243456 DEBUG nova.objects.instance [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.000 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjmwgswpk" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.023 243456 DEBUG nova.storage.rbd_utils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] rbd image 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.028 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.067 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.068 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Ensure instance console log exists: /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.069 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.070 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.070 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 274 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 170 op/s
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.172 243456 DEBUG oslo_concurrency.processutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.173 243456 INFO nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deleting local config drive /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f/disk.config because it was imported into RBD.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.216 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.217 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.218 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.219 243456 INFO nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Terminating instance
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.220 243456 DEBUG nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.222 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.223 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3685MB free_disk=59.93733172863722GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/422064088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1751327151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:20 compute-0 systemd-machined[209480]: New machine qemu-77-instance-00000045.
Feb 28 10:13:20 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000045.
Feb 28 10:13:20 compute-0 kernel: tap0f76084a-5c (unregistering): left promiscuous mode
Feb 28 10:13:20 compute-0 NetworkManager[49805]: <info>  [1772273600.2844] device (tap0f76084a-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00572|binding|INFO|Releasing lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 from this chassis (sb_readonly=0)
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00573|binding|INFO|Setting lport 0f76084a-5cb2-4246-adc3-ec58ff470ed4 down in Southbound
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00574|binding|INFO|Removing iface tap0f76084a-5c ovn-installed in OVS
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.324 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:c9:da 10.100.0.4'], port_security=['fa:16:3e:20:c9:da 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6af19b7d-b0a9-40fe-8d1a-f38c95452a10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0f76084a-5cb2-4246-adc3-ec58ff470ed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.326 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0f76084a-5cb2-4246-adc3-ec58ff470ed4 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:20 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Deactivated successfully.
Feb 28 10:13:20 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000043.scope: Consumed 4.923s CPU time.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.326 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance bd2e7775-9332-417e-a139-0847263b3343 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c4a13c84-8fca-43c8-87c3-fde9f5d1c031 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.327 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.328 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.327 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:20 compute-0 systemd-machined[209480]: Machine qemu-75-instance-00000043 terminated.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.339 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Successfully created port: 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.347 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcedda6-31d6-4948-a7fa-47192e2561e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.381 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a047390d-51d4-4772-96d3-2928c3ebf8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[182299d1-a16e-498f-b086-ea40425ae804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.409 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.409 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.410 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.411 243456 INFO nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Terminating instance
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.413 243456 DEBUG nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.433 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46de1dd4-2733-4623-ba60-3fee6e3df6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.442 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49378d-f2cd-4ba4-aa64-4a09fbc73a1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510262, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302351, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 kernel: tapcac4ab79-f0 (unregistering): left promiscuous mode
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9db6824-0371-44c0-bbda-14d099cd6deb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510275, 'tstamp': 510275}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302360, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510277, 'tstamp': 510277}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302360, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 NetworkManager[49805]: <info>  [1772273600.4698] device (tapcac4ab79-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.470 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00575|binding|INFO|Releasing lport cac4ab79-f021-4f19-8f15-95ea09460f70 from this chassis (sb_readonly=0)
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00576|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 down in Southbound
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00577|binding|INFO|Removing iface tapcac4ab79-f0 ovn-installed in OVS
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.479 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.480 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.485 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.489 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.490 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29cbf046-e6b8-4c78-92ec-e990da63fd2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace which is not needed anymore
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.494 243456 INFO nova.virt.libvirt.driver [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Instance destroyed successfully.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.494 243456 DEBUG nova.objects.instance [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.516 243456 DEBUG nova.virt.libvirt.vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-1',id=67,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:16Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=6af19b7d-b0a9-40fe-8d1a-f38c95452a10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.517 243456 DEBUG nova.network.os_vif_util [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "address": "fa:16:3e:20:c9:da", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f76084a-5c", "ovs_interfaceid": "0f76084a-5cb2-4246-adc3-ec58ff470ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.518 243456 DEBUG nova.network.os_vif_util [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.518 243456 DEBUG os_vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.521 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f76084a-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 28 10:13:20 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000044.scope: Consumed 2.928s CPU time.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:20 compute-0 systemd-machined[209480]: Machine qemu-76-instance-00000044 terminated.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.531 243456 INFO os_vif [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:c9:da,bridge_name='br-int',has_traffic_filtering=True,id=0f76084a-5cb2-4246-adc3-ec58ff470ed4,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f76084a-5c')
Feb 28 10:13:20 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : haproxy version is 2.8.14-c23fe91
Feb 28 10:13:20 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [NOTICE]   (301741) : path to executable is /usr/sbin/haproxy
Feb 28 10:13:20 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [WARNING]  (301741) : Exiting Master process...
Feb 28 10:13:20 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [ALERT]    (301741) : Current worker (301743) exited with code 143 (Terminated)
Feb 28 10:13:20 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[301737]: [WARNING]  (301741) : All workers exited. Exiting... (0)
Feb 28 10:13:20 compute-0 systemd[1]: libpod-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope: Deactivated successfully.
Feb 28 10:13:20 compute-0 kernel: tapcac4ab79-f0: entered promiscuous mode
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00578|binding|INFO|Claiming lport cac4ab79-f021-4f19-8f15-95ea09460f70 for this chassis.
Feb 28 10:13:20 compute-0 NetworkManager[49805]: <info>  [1772273600.6363] manager: (tapcac4ab79-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00579|binding|INFO|cac4ab79-f021-4f19-8f15-95ea09460f70: Claiming fa:16:3e:1b:f3:42 10.100.0.8
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 podman[302404]: 2026-02-28 10:13:20.639935365 +0000 UTC m=+0.053819564 container died e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00580|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 ovn-installed in OVS
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00581|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 up in Southbound
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.644 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:20 compute-0 kernel: tapcac4ab79-f0 (unregistering): left promiscuous mode
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00582|binding|INFO|Releasing lport cac4ab79-f021-4f19-8f15-95ea09460f70 from this chassis (sb_readonly=0)
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00583|binding|INFO|Setting lport cac4ab79-f021-4f19-8f15-95ea09460f70 down in Southbound
Feb 28 10:13:20 compute-0 ovn_controller[146846]: 2026-02-28T10:13:20Z|00584|binding|INFO|Removing iface tapcac4ab79-f0 ovn-installed in OVS
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.665 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:f3:42 10.100.0.8'], port_security=['fa:16:3e:1b:f3:42 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bd2e7775-9332-417e-a139-0847263b3343', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cac4ab79-f021-4f19-8f15-95ea09460f70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.670 243456 INFO nova.virt.libvirt.driver [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Instance destroyed successfully.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.670 243456 DEBUG nova.objects.instance [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid bd2e7775-9332-417e-a139-0847263b3343 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04-userdata-shm.mount: Deactivated successfully.
Feb 28 10:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-642c3d42b12e83ccf48d08b0d735927c24084e9d8d6d8b72eb20cdfcacadcea2-merged.mount: Deactivated successfully.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.689 243456 DEBUG nova.virt.libvirt.vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1120322056',display_name='tempest-tempest.common.compute-instance-1120322056-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1120322056-2',id=68,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:13:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-0gks6oxa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:18Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=bd2e7775-9332-417e-a139-0847263b3343,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.689 243456 DEBUG nova.network.os_vif_util [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "cac4ab79-f021-4f19-8f15-95ea09460f70", "address": "fa:16:3e:1b:f3:42", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcac4ab79-f0", "ovs_interfaceid": "cac4ab79-f021-4f19-8f15-95ea09460f70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.690 243456 DEBUG nova.network.os_vif_util [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.690 243456 DEBUG os_vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.692 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcac4ab79-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 podman[302404]: 2026-02-28 10:13:20.695349653 +0000 UTC m=+0.109233842 container cleanup e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.696 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.698 243456 INFO os_vif [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:f3:42,bridge_name='br-int',has_traffic_filtering=True,id=cac4ab79-f021-4f19-8f15-95ea09460f70,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcac4ab79-f0')
Feb 28 10:13:20 compute-0 systemd[1]: libpod-conmon-e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04.scope: Deactivated successfully.
Feb 28 10:13:20 compute-0 podman[302471]: 2026-02-28 10:13:20.760957578 +0000 UTC m=+0.042598809 container remove e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.765 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8762a2c6-48b2-49ed-87ef-d90fe880b517]: (4, ('Sat Feb 28 10:13:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04)\ne15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04\nSat Feb 28 10:13:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (e15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04)\ne15c7ecc9dcc08acc14e1a7e2e45e907682153cb6713e65a0666d49f95fb3b04\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10748680-423b-4ee0-b053-bbeb5c87db06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.769 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 kernel: tap2ee6adef-20: left promiscuous mode
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16513c05-368e-42aa-9c33-4d393e1a62e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e00fc01-c78c-4fe3-a02f-05495868ffa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c7dea2-a5e1-4459-a331-a5ec9fbd3dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.815 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68f3dcf4-8f06-4bc4-b9b3-31bf7bd6dddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510254, 'reachable_time': 36190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302535, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.819 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.819 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a7910dd9-7229-48b5-b3eb-a35828ea2035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ee6adef\x2d26da\x2d41a9\x2d91a7\x2df9a806d37d26.mount: Deactivated successfully.
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.820 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.821 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.822 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e1bb1b-599e-4462-88d1-197307fc90bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.823 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cac4ab79-f021-4f19-8f15-95ea09460f70 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.824 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.824 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:20.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b459c17-6a84-41d3-b9b7-306c4c5e8498]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG oslo_concurrency.lockutils [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.825 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.826 243456 DEBUG nova.compute.manager [req-f67f70aa-8bbc-4e8a-8318-ffc4bcd3efb4 req-14e79ce5-d85a-4f1b-b5e5-c0fe561bfa62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-unplugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.898 243456 INFO nova.virt.libvirt.driver [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deleting instance files /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_del
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.899 243456 INFO nova.virt.libvirt.driver [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deletion of /var/lib/nova/instances/6af19b7d-b0a9-40fe-8d1a-f38c95452a10_del complete
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.913 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273600.912879, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.914 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Resumed (Lifecycle Event)
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.916 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.917 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.921 243456 INFO nova.virt.libvirt.driver [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance spawned successfully.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.922 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.943 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.956 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.962 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.963 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.964 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.965 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.966 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.966 243456 DEBUG nova.virt.libvirt.driver [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.973 243456 INFO nova.compute.manager [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.974 243456 DEBUG oslo.service.loopingcall [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.975 243456 DEBUG nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.976 243456 DEBUG nova.network.neutron [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.981 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.982 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273600.9140139, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:20 compute-0 nova_compute[243452]: 2026-02-28 10:13:20.982 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Started (Lifecycle Event)
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.007 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.012 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.019 243456 INFO nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 3.86 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.020 243456 DEBUG nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367673230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.031 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.052 243456 INFO nova.virt.libvirt.driver [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deleting instance files /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343_del
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.053 243456 INFO nova.virt.libvirt.driver [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deletion of /var/lib/nova/instances/bd2e7775-9332-417e-a139-0847263b3343_del complete
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.056 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.061 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.081 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.086 243456 INFO nova.compute.manager [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 5.02 seconds to build instance.
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.111 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Successfully updated port: 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.115 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.116 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.117 243456 DEBUG oslo_concurrency.lockutils [None req-45b0fe2f-3ba4-4682-b9c2-8478770e67dd a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.121 243456 INFO nova.compute.manager [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.122 243456 DEBUG oslo.service.loopingcall [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.123 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.124 243456 DEBUG nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.124 243456 DEBUG nova.network.neutron [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.204 243456 DEBUG nova.compute.manager [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.205 243456 DEBUG nova.compute.manager [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.205 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:21 compute-0 ceph-mon[76304]: pgmap v1423: 305 pgs: 305 active+clean; 274 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 170 op/s
Feb 28 10:13:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1367673230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.312 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.635 243456 DEBUG nova.network.neutron [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.657 243456 INFO nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Took 0.68 seconds to deallocate network for instance.
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.706 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.707 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:21 compute-0 nova_compute[243452]: 2026-02-28 10:13:21.804 243456 DEBUG oslo_concurrency.processutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.049 243456 DEBUG nova.network.neutron [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.069 243456 INFO nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] Took 0.94 seconds to deallocate network for instance.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.110 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.111 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 295 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.158 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.159 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.159 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.176 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.177 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.254 243456 DEBUG nova.network.neutron [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.271 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.272 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance network_info: |[{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.273 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.274 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.276 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start _get_guest_xml network_info=[{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.284 243456 WARNING nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.297 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.298 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.302 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.303 243456 DEBUG nova.virt.libvirt.host [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.304 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.305 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.306 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.307 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.307 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.308 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.308 243456 DEBUG nova.virt.hardware [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.312 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038723385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.380 243456 DEBUG oslo_concurrency.processutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.392 243456 DEBUG nova.compute.provider_tree [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.411 243456 DEBUG nova.scheduler.client.report [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.438 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.441 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.470 243456 INFO nova.scheduler.client.report [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance 6af19b7d-b0a9-40fe-8d1a-f38c95452a10
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.474 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.533 243456 INFO nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] instance snapshotting
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.534 243456 DEBUG nova.objects.instance [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'flavor' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.563 243456 DEBUG oslo_concurrency.processutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.623 243456 DEBUG oslo_concurrency.lockutils [None req-1df582da-1748-40e9-a15b-06d7853c9463 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.690 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.692 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.692 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.693 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.694 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.696 243456 INFO nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Terminating instance
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.698 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.699 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquired lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.699 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542243089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.877 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.897 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.901 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.954 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.963 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.963 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6af19b7d-b0a9-40fe-8d1a-f38c95452a10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.964 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] No waiting events found dispatching network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received unexpected event network-vif-plugged-0f76084a-5cb2-4246-adc3-ec58ff470ed4 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.965 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.966 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.967 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.968 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.969 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-unplugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bd2e7775-9332-417e-a139-0847263b3343-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG oslo_concurrency.lockutils [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] No waiting events found dispatching network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 WARNING nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received unexpected event network-vif-plugged-cac4ab79-f021-4f19-8f15-95ea09460f70 for instance with vm_state deleted and task_state None.
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.970 243456 DEBUG nova.compute.manager [req-c362070d-18fe-457a-8e18-db3577403df7 req-cb63fcd4-ee46-4caf-828d-025faff98e76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Received event network-vif-deleted-0f76084a-5cb2-4246-adc3-ec58ff470ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:22 compute-0 nova_compute[243452]: 2026-02-28 10:13:22.979 243456 INFO nova.virt.libvirt.driver [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Beginning live snapshot process
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.063 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Feb 28 10:13:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742701125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.182 243456 DEBUG oslo_concurrency.processutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.187 243456 DEBUG nova.compute.provider_tree [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.209 243456 DEBUG nova.scheduler.client.report [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.243 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:23 compute-0 ceph-mon[76304]: pgmap v1424: 305 pgs: 305 active+clean; 295 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Feb 28 10:13:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1038723385' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1542243089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1742701125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.294 243456 INFO nova.scheduler.client.report [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance bd2e7775-9332-417e-a139-0847263b3343
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.331 243456 DEBUG nova.compute.manager [req-f40751e4-9982-4cef-85f3-601c19078689 req-545b6372-4ee2-4026-af8d-06c171aa3c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bd2e7775-9332-417e-a139-0847263b3343] Received event network-vif-deleted-cac4ab79-f021-4f19-8f15-95ea09460f70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.395 243456 DEBUG oslo_concurrency.lockutils [None req-45213258-2580-4137-a74b-ae88d8cd6e12 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "bd2e7775-9332-417e-a139-0847263b3343" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.398 243456 DEBUG nova.network.neutron [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.438 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Releasing lock "refresh_cache-43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.438 243456 DEBUG nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082823000' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.495 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:23 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 28 10:13:23 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000045.scope: Consumed 3.123s CPU time.
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.498 243456 DEBUG nova.virt.libvirt.vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:19Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.498 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.499 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.501 243456 DEBUG nova.objects.instance [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:23 compute-0 systemd-machined[209480]: Machine qemu-77-instance-00000045 terminated.
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.525 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <uuid>c4a13c84-8fca-43c8-87c3-fde9f5d1c031</uuid>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <name>instance-00000046</name>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1132377693</nova:name>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:22</nova:creationTime>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <nova:port uuid="9be79a2c-76fa-4a58-a532-eac0151d2bb1">
Feb 28 10:13:23 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="serial">c4a13c84-8fca-43c8-87c3-fde9f5d1c031</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="uuid">c4a13c84-8fca-43c8-87c3-fde9f5d1c031</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk">
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config">
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:54:18:0c"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <target dev="tap9be79a2c-76"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/console.log" append="off"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:23 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:23 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:23 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:23 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:23 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Preparing to wait for external event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG nova.virt.libvirt.vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:19Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.527 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.528 243456 DEBUG nova.network.os_vif_util [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.528 243456 DEBUG os_vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.529 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.536 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9be79a2c-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.537 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9be79a2c-76, col_values=(('external_ids', {'iface-id': '9be79a2c-76fa-4a58-a532-eac0151d2bb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:18:0c', 'vm-uuid': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:23 compute-0 NetworkManager[49805]: <info>  [1772273603.5394] manager: (tap9be79a2c-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.540 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.545 243456 INFO os_vif [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76')
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.619 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:54:18:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.620 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Using config drive
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.643 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.656 243456 INFO nova.virt.libvirt.driver [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance destroyed successfully.
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.656 243456 DEBUG nova.objects.instance [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lazy-loading 'resources' on Instance uuid 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.707 243456 DEBUG nova.compute.manager [None req-bbeafa4b-900d-49d7-94b4-fd5ef8d04487 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.738 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.739 243456 DEBUG nova.network.neutron [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.754 243456 DEBUG oslo_concurrency.lockutils [req-13657082-4fe6-4e8c-8808-a254256cc185 req-220911a3-163c-48e4-b0e0-3f0d891bd497 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.936 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273588.809136, 6cac1749-1126-44c9-b31c-1041025c52cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.938 243456 INFO nova.compute.manager [-] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] VM Stopped (Lifecycle Event)
Feb 28 10:13:23 compute-0 nova_compute[243452]: 2026-02-28 10:13:23.976 243456 DEBUG nova.compute.manager [None req-756119fa-3b38-4d26-a376-e677e97f6cef - - - - - -] [instance: 6cac1749-1126-44c9-b31c-1041025c52cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.004 243456 INFO nova.virt.libvirt.driver [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deleting instance files /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_del
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.005 243456 INFO nova.virt.libvirt.driver [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deletion of /var/lib/nova/instances/43abdac9-81fa-437c-8a7c-fb7b1a9ff97f_del complete
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.073 243456 INFO nova.compute.manager [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.074 243456 DEBUG oslo.service.loopingcall [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.074 243456 DEBUG nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.075 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.130 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Creating config drive at /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config
Feb 28 10:13:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 294 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.3 MiB/s wr, 330 op/s
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.136 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzukkz8cj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3082823000' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.274 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzukkz8cj" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.310 243456 DEBUG nova.storage.rbd_utils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.315 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.489 243456 DEBUG oslo_concurrency.processutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config c4a13c84-8fca-43c8-87c3-fde9f5d1c031_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.491 243456 INFO nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deleting local config drive /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031/disk.config because it was imported into RBD.
Feb 28 10:13:24 compute-0 kernel: tap9be79a2c-76: entered promiscuous mode
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.5521] manager: (tap9be79a2c-76): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Feb 28 10:13:24 compute-0 systemd-udevd[302650]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:24 compute-0 ovn_controller[146846]: 2026-02-28T10:13:24Z|00585|binding|INFO|Claiming lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 for this chassis.
Feb 28 10:13:24 compute-0 ovn_controller[146846]: 2026-02-28T10:13:24Z|00586|binding|INFO|9be79a2c-76fa-4a58-a532-eac0151d2bb1: Claiming fa:16:3e:54:18:0c 10.100.0.3
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.5712] device (tap9be79a2c-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.5723] device (tap9be79a2c-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.575 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:18:0c 10.100.0.3'], port_security=['fa:16:3e:54:18:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aeb82654-4ae5-4b85-ab47-1d2fe984318d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eff969a-a040-4a86-b625-9ebfe779e412, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9be79a2c-76fa-4a58-a532-eac0151d2bb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.577 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 in datapath 5d8683e2-4377-476f-ae8b-6d3dd4e61943 bound to our chassis
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.578 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d8683e2-4377-476f-ae8b-6d3dd4e61943
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.593 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1c3b47-590b-4715-b282-c5ca5ce1e0fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 systemd-machined[209480]: New machine qemu-78-instance-00000046.
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.595 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d8683e2-41 in ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.598 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d8683e2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.598 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddb0034-0a4c-4b0c-9682-392f520deeeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.600 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a76ed88f-edc6-405e-a3bb-1adf0eee9200]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000046.
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.618 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c70666-4bd0-44e6-9ab0-a1cba7784b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_controller[146846]: 2026-02-28T10:13:24Z|00587|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 ovn-installed in OVS
Feb 28 10:13:24 compute-0 ovn_controller[146846]: 2026-02-28T10:13:24Z|00588|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 up in Southbound
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66a08e6b-9268-40cc-a9be-92c5dd76c1fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.664 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65ccda1a-71ad-4d74-9e29-8a7dc38ec3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12bbc8a5-7826-4191-be3d-97d908f08ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.6739] manager: (tap5d8683e2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.712 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[14e3a695-076a-4b42-9624-29a25e6c4bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.717 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7ceb2d-f38b-4c44-b3c9-83da1d8562dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.7483] device (tap5d8683e2-40): carrier: link connected
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.757 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7e4469-46a1-472c-9c04-6971ffab4252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[138edc01-d5d0-4819-a975-1e001549cc43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d8683e2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:0b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511203, 'reachable_time': 18784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302778, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fc8fa4-81e7-44f2-b27c-d0ea9dff27d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b58'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511203, 'tstamp': 511203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302779, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a725acc-870b-4879-804b-5f95669abe8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d8683e2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:0b:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511203, 'reachable_time': 18784, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302780, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[772c66c0-0d5f-45a2-a303-d27edca92d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6f7f86-1547-4fdb-8fb8-d2f0b07d3dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.933 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8683e2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.934 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.934 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d8683e2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 NetworkManager[49805]: <info>  [1772273604.9371] manager: (tap5d8683e2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Feb 28 10:13:24 compute-0 kernel: tap5d8683e2-40: entered promiscuous mode
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.941 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d8683e2-40, col_values=(('external_ids', {'iface-id': '6fea9809-bf5e-43e8-beec-b6cf6edc52cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 ovn_controller[146846]: 2026-02-28T10:13:24Z|00589|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 10:13:24 compute-0 nova_compute[243452]: 2026-02-28 10:13:24.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.954 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43dfcc7e-4201-4bc4-9227-79fb56086c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.956 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-5d8683e2-4377-476f-ae8b-6d3dd4e61943
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/5d8683e2-4377-476f-ae8b-6d3dd4e61943.pid.haproxy
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 5d8683e2-4377-476f-ae8b-6d3dd4e61943
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:13:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:24.957 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'env', 'PROCESS_TAG=haproxy-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d8683e2-4377-476f-ae8b-6d3dd4e61943.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.054 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273605.0533674, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.054 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Started (Lifecycle Event)
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.083 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.098 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273605.053559, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Paused (Lifecycle Event)
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.132 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.135 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.140 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.180 243456 DEBUG nova.network.neutron [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.207 243456 INFO nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Took 1.13 seconds to deallocate network for instance.
Feb 28 10:13:25 compute-0 ceph-mon[76304]: pgmap v1425: 305 pgs: 305 active+clean; 294 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.3 MiB/s wr, 330 op/s
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.293 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.294 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:25 compute-0 podman[302852]: 2026-02-28 10:13:25.297161806 +0000 UTC m=+0.052142927 container create a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:13:25 compute-0 systemd[1]: Started libpod-conmon-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope.
Feb 28 10:13:25 compute-0 podman[302852]: 2026-02-28 10:13:25.270392704 +0000 UTC m=+0.025373835 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:13:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da54b6f18717d512266281e970794ab514d96d93e0121c460ef59a108898240/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:25 compute-0 podman[302852]: 2026-02-28 10:13:25.396226342 +0000 UTC m=+0.151207483 container init a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:13:25 compute-0 podman[302852]: 2026-02-28 10:13:25.40361183 +0000 UTC m=+0.158592951 container start a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:13:25 compute-0 nova_compute[243452]: 2026-02-28 10:13:25.401 243456 DEBUG oslo_concurrency.processutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:25 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : New worker (302874) forked
Feb 28 10:13:25 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : Loading success.
Feb 28 10:13:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015353337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.059 243456 DEBUG oslo_concurrency.processutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.068 243456 DEBUG nova.compute.provider_tree [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:26 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.095 243456 DEBUG nova.scheduler.client.report [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.119 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.162 243456 INFO nova.scheduler.client.report [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Deleted allocations for instance 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.237 243456 DEBUG oslo_concurrency.lockutils [None req-f3448866-3382-47f1-8d30-ee283a9e7820 a11a56ff5fb844d1b03a25da76136c9d efd550ad4a4044b5b976691be30a846c - - default default] Lock "43abdac9-81fa-437c-8a7c-fb7b1a9ff97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3015353337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.564 243456 DEBUG nova.compute.manager [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.565 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG oslo_concurrency.lockutils [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.566 243456 DEBUG nova.compute.manager [req-04ed243d-1096-497c-820a-8b0baf3d136e req-b6b86042-204c-4cd4-b90e-d089ce00fec0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Processing event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.567 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.574 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273606.5738165, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Resumed (Lifecycle Event)
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.582 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.586 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance spawned successfully.
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.587 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.626 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.633 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.634 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.635 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.636 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.637 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.638 243456 DEBUG nova.virt.libvirt.driver [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.648 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.690 243456 INFO nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 7.45 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.692 243456 DEBUG nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.793 243456 INFO nova.compute.manager [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 8.67 seconds to build instance.
Feb 28 10:13:26 compute-0 nova_compute[243452]: 2026-02-28 10:13:26.843 243456 DEBUG oslo_concurrency.lockutils [None req-eaca6973-ba51-483d-890d-542a75f6f872 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:27 compute-0 ceph-mon[76304]: pgmap v1426: 305 pgs: 305 active+clean; 216 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.740 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.740 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.762 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.776 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.776 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.802 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.887 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.888 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.894 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.894 243456 INFO nova.compute.claims [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:27 compute-0 nova_compute[243452]: 2026-02-28 10:13:27.901 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.068 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 200 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 384 op/s
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.540 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324003416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.639 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.648 243456 DEBUG nova.compute.provider_tree [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.667 243456 DEBUG nova.scheduler.client.report [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.694 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.695 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.698 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.703 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.703 243456 INFO nova.compute.claims [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.765 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.766 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.792 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.821 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.876 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.934 243456 DEBUG nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG oslo_concurrency.lockutils [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.935 243456 DEBUG nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.936 243456 WARNING nova.compute.manager [req-c0eb0349-3a36-4801-af08-a78e2d9b8d57 req-77089efb-b992-4d3d-a076-339c0845de41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received unexpected event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with vm_state active and task_state None.
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.978 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.980 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:28 compute-0 nova_compute[243452]: 2026-02-28 10:13:28.980 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating image(s)
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.003 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.030 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.054 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.063 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:13:29
Feb 28 10:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root', 'images', 'vms']
Feb 28 10:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.105 243456 DEBUG nova.policy [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.138 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.139 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.169 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.173 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:29 compute-0 ceph-mon[76304]: pgmap v1427: 305 pgs: 305 active+clean; 200 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 384 op/s
Feb 28 10:13:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2324003416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.407 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697708311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.507 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.516 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.554 243456 DEBUG nova.compute.provider_tree [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.573 243456 DEBUG nova.scheduler.client.report [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.595 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.596 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.639 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.651 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.651 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.656 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.656 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Ensure instance console log exists: /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.657 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.672 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.699 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.792 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.795 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating image(s)
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.827 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.853 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.876 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.880 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.941 243456 DEBUG nova.policy [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c60ae50e478245b49930e1e71ea14df4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f398725990434948ba6927f1c1477015', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.950 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.950 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.951 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.951 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.974 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:29 compute-0 nova_compute[243452]: 2026-02-28 10:13:29.978 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f3def0af-1227-498f-a525-0df8d5bb3768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.4 MiB/s wr, 412 op/s
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.298 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Successfully created port: b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:13:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2697708311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.460 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f3def0af-1227-498f-a525-0df8d5bb3768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.551 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] resizing rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:13:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.677 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'migration_context' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.705 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.705 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Ensure instance console log exists: /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.706 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.707 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.707 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:30 compute-0 NetworkManager[49805]: <info>  [1772273610.7164] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Feb 28 10:13:30 compute-0 NetworkManager[49805]: <info>  [1772273610.7174] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.719 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Successfully created port: fa6d7d29-c113-4729-a953-5dc14a05cd16 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:30 compute-0 ovn_controller[146846]: 2026-02-28T10:13:30Z|00590|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 10:13:30 compute-0 nova_compute[243452]: 2026-02-28 10:13:30.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:31 compute-0 ceph-mon[76304]: pgmap v1428: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 3.4 MiB/s wr, 412 op/s
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.783 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Successfully updated port: b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.800 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.801 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.801 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.921 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Successfully updated port: fa6d7d29-c113-4729-a953-5dc14a05cd16 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.938 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.939 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquired lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.939 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:31 compute-0 nova_compute[243452]: 2026-02-28 10:13:31.989 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.078 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 246 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 354 op/s
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.221 243456 DEBUG nova.compute.manager [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.221 243456 DEBUG nova.compute.manager [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.222 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG nova.compute.manager [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-changed-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG nova.compute.manager [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Refreshing instance network info cache due to event network-changed-fa6d7d29-c113-4729-a953-5dc14a05cd16. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:32 compute-0 nova_compute[243452]: 2026-02-28 10:13:32.300 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:32 compute-0 ceph-mon[76304]: pgmap v1429: 305 pgs: 305 active+clean; 246 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 354 op/s
Feb 28 10:13:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.596 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.598 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.621 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.703 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.704 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.714 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.715 243456 INFO nova.compute.claims [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.741 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:33 compute-0 nova_compute[243452]: 2026-02-28 10:13:33.892 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 282 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 298 op/s
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.271 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.280 243456 DEBUG nova.network.neutron [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.306 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.307 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance network_info: |[{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.308 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Releasing lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.308 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance network_info: |[{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.313 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start _get_guest_xml network_info=[{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.314 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.315 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Refreshing network info cache for port fa6d7d29-c113-4729-a953-5dc14a05cd16 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.320 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start _get_guest_xml network_info=[{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.331 243456 DEBUG nova.compute.manager [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-changed-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.332 243456 DEBUG nova.compute.manager [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Refreshing instance network info cache due to event network-changed-b8b97a75-8d54-4bd6-8372-eaff2a5aa910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.333 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Refreshing network info cache for port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.349 243456 WARNING nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.356 243456 WARNING nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.360 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.361 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.363 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.364 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.365 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.366 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.366 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.367 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.367 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.368 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.369 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.373 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.417 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.418 243456 DEBUG nova.virt.libvirt.host [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.419 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.419 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.420 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.421 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.422 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.422 243456 DEBUG nova.virt.hardware [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.425 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963363167' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.485 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.492 243456 DEBUG nova.compute.provider_tree [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.507 243456 DEBUG nova.scheduler.client.report [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.530 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.531 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.581 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.583 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.601 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.617 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.697 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.699 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.700 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating image(s)
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.726 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.760 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.793 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.797 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.893 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.895 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.895 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.896 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242527802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.917 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.922 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.959 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282437512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.985 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:34 compute-0 nova_compute[243452]: 2026-02-28 10:13:34.990 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.038 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.077 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.088 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.180 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:35 compute-0 ceph-mon[76304]: pgmap v1430: 305 pgs: 305 active+clean; 282 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 298 op/s
Feb 28 10:13:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1963363167' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1242527802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/282437512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.263 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.264 243456 DEBUG nova.network.neutron [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.270 243456 DEBUG nova.policy [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.283 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.317 243456 DEBUG oslo_concurrency.lockutils [req-57dc64d9-dc84-45e5-a3f5-7ca6fbe7d483 req-e877e983-6b00-495c-8bfb-50c3589c7d9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.379 243456 DEBUG nova.objects.instance [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.403 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.403 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ensure instance console log exists: /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.404 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.404 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.405 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.489 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273600.4606256, 6af19b7d-b0a9-40fe-8d1a-f38c95452a10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.490 243456 INFO nova.compute.manager [-] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] VM Stopped (Lifecycle Event)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.513 243456 DEBUG nova.compute.manager [None req-cbdfa404-fcb6-4b9b-8942-c46cb283e620 - - - - - -] [instance: 6af19b7d-b0a9-40fe-8d1a-f38c95452a10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598360059' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.661 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273600.6599488, bd2e7775-9332-417e-a139-0847263b3343 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.662 243456 INFO nova.compute.manager [-] [instance: bd2e7775-9332-417e-a139-0847263b3343] VM Stopped (Lifecycle Event)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.671 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.673 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTest
JSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:28Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.673 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.674 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.675 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798180972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.690 243456 DEBUG nova.compute.manager [None req-6c082276-e6a9-4a7f-9837-ded9d67b7c29 - - - - - -] [instance: bd2e7775-9332-417e-a139-0847263b3343] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.692 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <uuid>a16c1faa-2568-47fc-8006-91c68ae7ae5d</uuid>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <name>instance-00000047</name>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:name>tempest-MultipleCreateTestJSON-server-650257309-1</nova:name>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:34</nova:creationTime>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:port uuid="b8b97a75-8d54-4bd6-8372-eaff2a5aa910">
Feb 28 10:13:35 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="serial">a16c1faa-2568-47fc-8006-91c68ae7ae5d</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="uuid">a16c1faa-2568-47fc-8006-91c68ae7ae5d</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:dd:7a:dc"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="tapb8b97a75-8d"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/console.log" append="off"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:35 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:35 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Preparing to wait for external event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.698 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.698 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-Multiple
CreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:28Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.699 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.700 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.700 243456 DEBUG os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.701 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.702 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.704 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTest
JSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:29Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.705 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.705 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.706 243456 DEBUG nova.objects.instance [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.708 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.708 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8b97a75-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.709 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8b97a75-8d, col_values=(('external_ids', {'iface-id': 'b8b97a75-8d54-4bd6-8372-eaff2a5aa910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:7a:dc', 'vm-uuid': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 NetworkManager[49805]: <info>  [1772273615.7113] manager: (tapb8b97a75-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.718 243456 INFO os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d')
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.721 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <uuid>f3def0af-1227-498f-a525-0df8d5bb3768</uuid>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <name>instance-00000048</name>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:name>tempest-MultipleCreateTestJSON-server-650257309-2</nova:name>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:34</nova:creationTime>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:user uuid="c60ae50e478245b49930e1e71ea14df4">tempest-MultipleCreateTestJSON-211863181-project-member</nova:user>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:project uuid="f398725990434948ba6927f1c1477015">tempest-MultipleCreateTestJSON-211863181</nova:project>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <nova:port uuid="fa6d7d29-c113-4729-a953-5dc14a05cd16">
Feb 28 10:13:35 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="serial">f3def0af-1227-498f-a525-0df8d5bb3768</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="uuid">f3def0af-1227-498f-a525-0df8d5bb3768</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f3def0af-1227-498f-a525-0df8d5bb3768_disk">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f3def0af-1227-498f-a525-0df8d5bb3768_disk.config">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:2c:d1:ad"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <target dev="tapfa6d7d29-c1"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/console.log" append="off"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:35 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:35 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:35 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:35 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:35 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.725 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Preparing to wait for external event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.725 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.726 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.726 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.virt.libvirt.vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:29Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.727 243456 DEBUG nova.network.os_vif_util [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.728 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.729 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa6d7d29-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.734 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa6d7d29-c1, col_values=(('external_ids', {'iface-id': 'fa6d7d29-c113-4729-a953-5dc14a05cd16', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:d1:ad', 'vm-uuid': 'f3def0af-1227-498f-a525-0df8d5bb3768'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 NetworkManager[49805]: <info>  [1772273615.7378] manager: (tapfa6d7d29-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.748 243456 INFO os_vif [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1')
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.794 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.795 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:dd:7a:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.795 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Using config drive
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.817 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:35 compute-0 podman[303600]: 2026-02-28 10:13:35.820406891 +0000 UTC m=+0.055539333 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.840 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.840 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.841 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] No VIF found with MAC fa:16:3e:2c:d1:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.841 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Using config drive
Feb 28 10:13:35 compute-0 podman[303597]: 2026-02-28 10:13:35.857180914 +0000 UTC m=+0.099465527 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 28 10:13:35 compute-0 nova_compute[243452]: 2026-02-28 10:13:35.871 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.120 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Successfully created port: 41441957-9492-481d-847c-895c9fd2ef8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 307 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Feb 28 10:13:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1598360059' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2798180972' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.298 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Creating config drive at /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.302 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxz8qkimt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.347 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Creating config drive at /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.351 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb8hqeua_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.448 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxz8qkimt" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.472 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image f3def0af-1227-498f-a525-0df8d5bb3768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.476 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config f3def0af-1227-498f-a525-0df8d5bb3768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.507 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpb8hqeua_" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.535 243456 DEBUG nova.storage.rbd_utils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] rbd image a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.541 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.611 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config f3def0af-1227-498f-a525-0df8d5bb3768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.612 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deleting local config drive /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768/disk.config because it was imported into RBD.
Feb 28 10:13:36 compute-0 kernel: tapfa6d7d29-c1: entered promiscuous mode
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.6709] manager: (tapfa6d7d29-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00591|binding|INFO|Claiming lport fa6d7d29-c113-4729-a953-5dc14a05cd16 for this chassis.
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00592|binding|INFO|fa6d7d29-c113-4729-a953-5dc14a05cd16: Claiming fa:16:3e:2c:d1:ad 10.100.0.3
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.688 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:d1:ad 10.100.0.3'], port_security=['fa:16:3e:2c:d1:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f3def0af-1227-498f-a525-0df8d5bb3768', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fa6d7d29-c113-4729-a953-5dc14a05cd16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.689 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fa6d7d29-c113-4729-a953-5dc14a05cd16 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 bound to our chassis
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.691 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00593|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 ovn-installed in OVS
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00594|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 up in Southbound
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 systemd-udevd[303775]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:36 compute-0 systemd-machined[209480]: New machine qemu-79-instance-00000048.
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e2963e-4467-4493-a56a-38174fadc521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ee6adef-21 in ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.710 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ee6adef-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[635741ba-499e-48aa-8f27-a94d5dcfb0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1783739-477a-4ae2-b370-e2dca5aae4ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000048.
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.715 243456 DEBUG oslo_concurrency.processutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config a16c1faa-2568-47fc-8006-91c68ae7ae5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.715 243456 INFO nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deleting local config drive /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d/disk.config because it was imported into RBD.
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.7248] device (tapfa6d7d29-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.7258] device (tapfa6d7d29-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.734 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8575d6cd-6de5-4d0f-a02c-80f5885a1e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.748 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fae84556-1268-49ec-b541-5c3b58547d3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.780 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[589ef9ea-7542-458d-8e43-c5cfed55f7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.7887] manager: (tapb8b97a75-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Feb 28 10:13:36 compute-0 kernel: tapb8b97a75-8d: entered promiscuous mode
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.7902] manager: (tap2ee6adef-20): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c48511dc-dbae-499d-9d77-baceb8ac7d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 systemd-udevd[303779]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00595|binding|INFO|Claiming lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 for this chassis.
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00596|binding|INFO|b8b97a75-8d54-4bd6-8372-eaff2a5aa910: Claiming fa:16:3e:dd:7a:dc 10.100.0.7
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.8007] device (tapb8b97a75-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.8017] device (tapb8b97a75-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00597|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 ovn-installed in OVS
Feb 28 10:13:36 compute-0 ovn_controller[146846]: 2026-02-28T10:13:36Z|00598|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 up in Southbound
Feb 28 10:13:36 compute-0 nova_compute[243452]: 2026-02-28 10:13:36.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.808 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:7a:dc 10.100.0.7'], port_security=['fa:16:3e:dd:7a:dc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8b97a75-8d54-4bd6-8372-eaff2a5aa910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:36 compute-0 systemd-machined[209480]: New machine qemu-80-instance-00000047.
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.834 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e28a8514-dd59-4078-abd5-571e81f476c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.838 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e91e5d98-e6bb-4b18-baeb-98e07eab6600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000047.
Feb 28 10:13:36 compute-0 NetworkManager[49805]: <info>  [1772273616.8663] device (tap2ee6adef-20): carrier: link connected
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.876 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[23d20a2e-96c4-4a87-a7f9-3fa7a7a66c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.907 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[96c4b22a-b3ed-421a-9050-6190aec682a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303823, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968311fd-eab3-4bc7-bace-8136eb0c10d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:79d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512415, 'tstamp': 512415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303827, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.953 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d959ae2e-685c-472a-b558-c6dc6d3588ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303829, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:36.984 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04307bb9-b828-4c00-aa52-4a6304875c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a075beaf-e4d6-4b29-a3b8-e0cda06e0641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 kernel: tap2ee6adef-20: entered promiscuous mode
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:37 compute-0 NetworkManager[49805]: <info>  [1772273617.0352] manager: (tap2ee6adef-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.037 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 ovn_controller[146846]: 2026-02-28T10:13:37Z|00599|binding|INFO|Releasing lport 51b33bea-9c2c-447c-817e-7f72887f045f from this chassis (sb_readonly=0)
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.038 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.041 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e67c8ac8-44cb-45c3-9ff7-fd7c31b24152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.043 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2ee6adef-26da-41a9-91a7-f9a806d37d26.pid.haproxy
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.045 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'env', 'PROCESS_TAG=haproxy-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ee6adef-26da-41a9-91a7-f9a806d37d26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:13:37 compute-0 ceph-mon[76304]: pgmap v1431: 305 pgs: 305 active+clean; 307 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.309 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updated VIF entry in instance network info cache for port b8b97a75-8d54-4bd6-8372-eaff2a5aa910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.310 243456 DEBUG nova.network.neutron [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [{"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.321 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updated VIF entry in instance network info cache for port fa6d7d29-c113-4729-a953-5dc14a05cd16. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.322 243456 DEBUG nova.network.neutron [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [{"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.325 243456 DEBUG oslo_concurrency.lockutils [req-25cfc3ba-df31-4b4b-9405-45bcfc670eba req-f4e6b4ec-31d5-49d4-b89e-ac2ddba785ef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a16c1faa-2568-47fc-8006-91c68ae7ae5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.336 243456 DEBUG oslo_concurrency.lockutils [req-6b008d7e-18d7-464b-ae40-00133486e433 req-bf82f2c9-7a6e-431d-8e7d-4e633b31829f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f3def0af-1227-498f-a525-0df8d5bb3768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.386 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.3855097, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.387 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Started (Lifecycle Event)
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.405 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.411 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.3856285, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.411 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Paused (Lifecycle Event)
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.429 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.433 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.456 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:37 compute-0 podman[303903]: 2026-02-28 10:13:37.46810868 +0000 UTC m=+0.059526055 container create d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:13:37 compute-0 systemd[1]: Started libpod-conmon-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope.
Feb 28 10:13:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72f63fb99d7b1ef7c7e48e92434ddd62cf45aed08cb6ac8121bfdf9bdc9341/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:37 compute-0 podman[303903]: 2026-02-28 10:13:37.437572182 +0000 UTC m=+0.028989577 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:13:37 compute-0 podman[303903]: 2026-02-28 10:13:37.541330799 +0000 UTC m=+0.132748194 container init d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:13:37 compute-0 podman[303903]: 2026-02-28 10:13:37.54491117 +0000 UTC m=+0.136328545 container start d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:13:37 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : New worker (303965) forked
Feb 28 10:13:37 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : Loading success.
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.576 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.575528, f3def0af-1227-498f-a525-0df8d5bb3768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.577 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Started (Lifecycle Event)
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.610 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.615 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.5767467, f3def0af-1227-498f-a525-0df8d5bb3768 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.616 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Paused (Lifecycle Event)
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.629 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.631 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.636 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.642 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.644 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07fee492-5ca3-4be7-adf9-c3c2414486c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG nova.compute.manager [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.648 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.649 243456 DEBUG oslo_concurrency.lockutils [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.649 243456 DEBUG nova.compute.manager [req-6671b4ce-e5f0-4770-a7a2-7f03795ba435 req-dfa51912-019b-47a5-b38c-7083628c8fea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Processing event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.650 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.654 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.656 243456 INFO nova.virt.libvirt.driver [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance spawned successfully.
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.657 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.662 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273617.6537325, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Resumed (Lifecycle Event)
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.665 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[949b216f-aeff-46e2-8df9-a495461c6022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[79cb4d80-82e2-4506-85ba-62f9cb604423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.678 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.679 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.679 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.680 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.681 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.681 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.686 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.690 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[115db8df-ab8e-4e55-b0d6-27329953d14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea2e562-feff-4ee0-87ab-4c5e7286ee26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303979, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.715 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d60a5932-3af2-4446-865b-e7d167091276]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512429, 'tstamp': 512429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303980, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512431, 'tstamp': 512431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303980, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.729 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:37.730 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.742 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 8.76 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.743 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.795 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 9.93 seconds to build instance.
Feb 28 10:13:37 compute-0 nova_compute[243452]: 2026-02-28 10:13:37.821 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.094 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Successfully updated port: 41441957-9492-481d-847c-895c9fd2ef8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.120 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.121 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.121 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 329 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.5 MiB/s wr, 178 op/s
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.274 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:38 compute-0 ovn_controller[146846]: 2026-02-28T10:13:38Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:18:0c 10.100.0.3
Feb 28 10:13:38 compute-0 ovn_controller[146846]: 2026-02-28T10:13:38Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:18:0c 10.100.0.3
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.655 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273603.6542428, 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.655 243456 INFO nova.compute.manager [-] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] VM Stopped (Lifecycle Event)
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.674 243456 DEBUG nova.compute.manager [None req-e1736646-d36f-46a5-b5f5-26eb25670736 - - - - - -] [instance: 43abdac9-81fa-437c-8a7c-fb7b1a9ff97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:38 compute-0 nova_compute[243452]: 2026-02-28 10:13:38.743 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:39 compute-0 ceph-mon[76304]: pgmap v1432: 305 pgs: 305 active+clean; 329 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.5 MiB/s wr, 178 op/s
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 359 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.3 MiB/s wr, 204 op/s
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.285 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.285 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.286 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] No waiting events found dispatching network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 WARNING nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received unexpected event network-vif-plugged-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 for instance with vm_state active and task_state None.
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-changed-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG nova.compute.manager [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Refreshing instance network info cache due to event network-changed-41441957-9492-481d-847c-895c9fd2ef8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.287 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:40 compute-0 nova_compute[243452]: 2026-02-28 10:13:40.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017718529960501666 of space, bias 1.0, pg target 0.5315558988150499 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938815784398454 of space, bias 1.0, pg target 0.7481644735319536 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.948993728318748e-07 of space, bias 4.0, pg target 0.0009538792473982498 quantized to 16 (current 16)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:13:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:13:41 compute-0 sudo[303981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.170 243456 DEBUG nova.network.neutron [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:41 compute-0 sudo[303981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:41 compute-0 sudo[303981]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.222 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.222 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance network_info: |[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.223 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.223 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Refreshing network info cache for port 41441957-9492-481d-847c-895c9fd2ef8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.225 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start _get_guest_xml network_info=[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.231 243456 WARNING nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:41 compute-0 ceph-mon[76304]: pgmap v1433: 305 pgs: 305 active+clean; 359 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.3 MiB/s wr, 204 op/s
Feb 28 10:13:41 compute-0 sudo[304006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.241 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.242 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:41 compute-0 sudo[304006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.249 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.libvirt.host [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.250 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.251 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.252 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.253 243456 DEBUG nova.virt.hardware [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.256 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.295 243456 DEBUG nova.compute.manager [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.296 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG oslo_concurrency.lockutils [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.297 243456 DEBUG nova.compute.manager [req-ce6b1eb4-1f48-453d-aeee-35adf2c84fb6 req-755113e7-03c5-4028-b79f-a1719657d0fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Processing event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.298 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.303 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.308 243456 INFO nova.virt.libvirt.driver [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance spawned successfully.
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.309 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.330 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273621.3302917, f3def0af-1227-498f-a525-0df8d5bb3768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Resumed (Lifecycle Event)
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.350 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.351 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.352 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.353 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.353 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.354 243456 DEBUG nova.virt.libvirt.driver [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.361 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.367 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.400 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.423 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 11.63 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.424 243456 DEBUG nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.496 243456 INFO nova.compute.manager [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 13.62 seconds to build instance.
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.521 243456 DEBUG oslo_concurrency.lockutils [None req-63465724-422d-4385-a388-9e2f7742c9e6 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:41 compute-0 sudo[304006]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:13:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303814099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.889 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:41 compute-0 sudo[304081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:13:41 compute-0 sudo[304081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:41 compute-0 sudo[304081]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.926 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:41 compute-0 nova_compute[243452]: 2026-02-28 10:13:41.936 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:41 compute-0 sudo[304123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:13:41 compute-0 sudo[304123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 172 op/s
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:13:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/303814099' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.263752335 +0000 UTC m=+0.059075092 container create aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:13:42 compute-0 systemd[1]: Started libpod-conmon-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope.
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.229872602 +0000 UTC m=+0.025195369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.380716234 +0000 UTC m=+0.176039021 container init aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.391091445 +0000 UTC m=+0.186414232 container start aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.395136549 +0000 UTC m=+0.190482767 container attach aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:13:42 compute-0 naughty_heisenberg[304197]: 167 167
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.403392641 +0000 UTC m=+0.198715428 container died aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:13:42 compute-0 systemd[1]: libpod-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope: Deactivated successfully.
Feb 28 10:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-642cf929cb620ccf00e0a5bc1a2462b8f794c620ecae56d46cc8ab567c9d03b9-merged.mount: Deactivated successfully.
Feb 28 10:13:42 compute-0 podman[304181]: 2026-02-28 10:13:42.449404815 +0000 UTC m=+0.244727572 container remove aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_heisenberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:13:42 compute-0 systemd[1]: libpod-conmon-aea8008f262a728dfd1c2f8f5b1da68b1183904389d70cce1c685ac24f199faa.scope: Deactivated successfully.
Feb 28 10:13:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/994772234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.543 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.545 243456 DEBUG nova.virt.libvirt.vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:34Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.546 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.547 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.548 243456 DEBUG nova.objects.instance [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <uuid>1e13ffbf-dba5-421b-afc3-84eb471e2d44</uuid>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <name>instance-00000049</name>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-933663289</nova:name>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:41</nova:creationTime>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <nova:port uuid="41441957-9492-481d-847c-895c9fd2ef8f">
Feb 28 10:13:42 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="serial">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="uuid">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk">
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config">
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:42 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:6c:7c:88"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <target dev="tap41441957-94"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log" append="off"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:42 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:42 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:42 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:42 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:42 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Preparing to wait for external event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.564 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.565 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.565 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.566 243456 DEBUG nova.virt.libvirt.vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:34Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.566 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.567 243456 DEBUG nova.network.os_vif_util [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG os_vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.568 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.569 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.574 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41441957-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.574 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41441957-94, col_values=(('external_ids', {'iface-id': '41441957-9492-481d-847c-895c9fd2ef8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:7c:88', 'vm-uuid': '1e13ffbf-dba5-421b-afc3-84eb471e2d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:42 compute-0 NetworkManager[49805]: <info>  [1772273622.5785] manager: (tap41441957-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.591 243456 INFO os_vif [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')
Feb 28 10:13:42 compute-0 podman[304222]: 2026-02-28 10:13:42.630359333 +0000 UTC m=+0.041666782 container create daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.649 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:6c:7c:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.651 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Using config drive
Feb 28 10:13:42 compute-0 systemd[1]: Started libpod-conmon-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope.
Feb 28 10:13:42 compute-0 nova_compute[243452]: 2026-02-28 10:13:42.674 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:42 compute-0 podman[304222]: 2026-02-28 10:13:42.612285945 +0000 UTC m=+0.023593424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:42 compute-0 podman[304222]: 2026-02-28 10:13:42.731862757 +0000 UTC m=+0.143170206 container init daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:13:42 compute-0 podman[304222]: 2026-02-28 10:13:42.737613919 +0000 UTC m=+0.148921358 container start daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:13:42 compute-0 podman[304222]: 2026-02-28 10:13:42.741876929 +0000 UTC m=+0.153184378 container attach daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:13:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:43 compute-0 flamboyant_kepler[304260]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:13:43 compute-0 flamboyant_kepler[304260]: --> All data devices are unavailable
Feb 28 10:13:43 compute-0 systemd[1]: libpod-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope: Deactivated successfully.
Feb 28 10:13:43 compute-0 podman[304222]: 2026-02-28 10:13:43.235811666 +0000 UTC m=+0.647119205 container died daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:13:43 compute-0 ceph-mon[76304]: pgmap v1434: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 172 op/s
Feb 28 10:13:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/994772234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5e004925d6b26e484e97e52b0f4ad03898043da66953a75936441b89c320e5-merged.mount: Deactivated successfully.
Feb 28 10:13:43 compute-0 podman[304222]: 2026-02-28 10:13:43.283741534 +0000 UTC m=+0.695049023 container remove daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:13:43 compute-0 systemd[1]: libpod-conmon-daa907dfc20d3f830299a346cddf7a46786e2d084d19d289b045e3b4c8846997.scope: Deactivated successfully.
Feb 28 10:13:43 compute-0 sudo[304123]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:43 compute-0 sudo[304292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:13:43 compute-0 sudo[304292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:43 compute-0 sudo[304292]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.394 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating config drive at /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.401 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp29cdv40p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:43 compute-0 sudo[304317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:13:43 compute-0 sudo[304317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.552 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp29cdv40p" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.574 243456 DEBUG nova.storage.rbd_utils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.577 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.752 243456 DEBUG oslo_concurrency.processutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.753 243456 INFO nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting local config drive /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config because it was imported into RBD.
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.759029848 +0000 UTC m=+0.046536979 container create a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:13:43 compute-0 kernel: tap41441957-94: entered promiscuous mode
Feb 28 10:13:43 compute-0 NetworkManager[49805]: <info>  [1772273623.8057] manager: (tap41441957-94): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Feb 28 10:13:43 compute-0 ovn_controller[146846]: 2026-02-28T10:13:43Z|00600|binding|INFO|Claiming lport 41441957-9492-481d-847c-895c9fd2ef8f for this chassis.
Feb 28 10:13:43 compute-0 ovn_controller[146846]: 2026-02-28T10:13:43Z|00601|binding|INFO|41441957-9492-481d-847c-895c9fd2ef8f: Claiming fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:43 compute-0 systemd[1]: Started libpod-conmon-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope.
Feb 28 10:13:43 compute-0 ovn_controller[146846]: 2026-02-28T10:13:43Z|00602|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f ovn-installed in OVS
Feb 28 10:13:43 compute-0 ovn_controller[146846]: 2026-02-28T10:13:43Z|00603|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f up in Southbound
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.829 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.736113554 +0000 UTC m=+0.023620785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.830 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:13:43 compute-0 nova_compute[243452]: 2026-02-28 10:13:43.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5990e12-e78c-4a6f-8f5f-d5bd5e762dd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.843 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:13:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.847 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.847 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa076858-21c1-4a97-9849-e15cb91ba2ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a06122-2859-455d-bee4-e2b752f11843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.859 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c4a5fd-eaa2-4a28-8140-e2df318c7ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 systemd-udevd[304425]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:43 compute-0 systemd-machined[209480]: New machine qemu-81-instance-00000049.
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.872043756 +0000 UTC m=+0.159550927 container init a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:13:43 compute-0 NetworkManager[49805]: <info>  [1772273623.8767] device (tap41441957-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:13:43 compute-0 NetworkManager[49805]: <info>  [1772273623.8773] device (tap41441957-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:13:43 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000049.
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.879321671 +0000 UTC m=+0.166828802 container start a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.883757506 +0000 UTC m=+0.171264677 container attach a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 28 10:13:43 compute-0 amazing_villani[304417]: 167 167
Feb 28 10:13:43 compute-0 systemd[1]: libpod-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope: Deactivated successfully.
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.886670877 +0000 UTC m=+0.174178008 container died a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.893 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2411bc-f4c9-413a-b38a-abe131139da4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdebb9e5b886c339c37ef3a4ca8f2244df9adac3e0b9458bae6bccc757b4ce4-merged.mount: Deactivated successfully.
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.925 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[576d72ce-49d1-4537-b066-ba58668d4349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 podman[304395]: 2026-02-28 10:13:43.929307306 +0000 UTC m=+0.216814437 container remove a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:13:43 compute-0 NetworkManager[49805]: <info>  [1772273623.9334] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Feb 28 10:13:43 compute-0 systemd-udevd[304430]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e363986f-1548-4d28-a5f5-25544a4574b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 systemd[1]: libpod-conmon-a033146514e102a863a4404146e7c904e34506189b5fe602c17f9cbbda4d19e8.scope: Deactivated successfully.
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.966 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b5c415-7a91-44d5-99c9-13ebb42e69a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:43.972 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2059b610-226f-4708-a041-544a84895210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:43 compute-0 NetworkManager[49805]: <info>  [1772273623.9964] device (tap77a5b13a-e0): carrier: link connected
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.003 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[11fb5b18-7596-46bc-a215-52583df8ef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb419e6-fc7f-459b-96fe-d3aee102fb54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513128, 'reachable_time': 34791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304471, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.032 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b84c89-8a42-4b94-a17f-280b75f98b65]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513128, 'tstamp': 513128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304472, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d728aa9b-e3e2-419d-9f06-b4f273031520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513128, 'reachable_time': 34791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304474, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbbdced-f2dc-4f63-8164-f4647473f5df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 podman[304479]: 2026-02-28 10:13:44.121977194 +0000 UTC m=+0.052764825 container create 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:13:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 372 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.150 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16c25420-e6b4-473e-8f4b-d219eda9715b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.152 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:44 compute-0 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.153 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:44 compute-0 NetworkManager[49805]: <info>  [1772273624.1563] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:44 compute-0 ovn_controller[146846]: 2026-02-28T10:13:44Z|00604|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.165 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.166 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e57b1c6-bd18-40d1-a9cc-955c98d9a4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.167 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:13:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:44.168 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:13:44 compute-0 systemd[1]: Started libpod-conmon-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope.
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.187 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updated VIF entry in instance network info cache for port 41441957-9492-481d-847c-895c9fd2ef8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.188 243456 DEBUG nova.network.neutron [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:44 compute-0 podman[304479]: 2026-02-28 10:13:44.101754725 +0000 UTC m=+0.032542396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.209 243456 DEBUG oslo_concurrency.lockutils [req-7fe72c8a-654c-4c0d-899c-0836eb68ae9a req-5e04a6bf-c306-477c-9594-ec64e329aaed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1e13ffbf-dba5-421b-afc3-84eb471e2d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:44 compute-0 podman[304479]: 2026-02-28 10:13:44.241596507 +0000 UTC m=+0.172384148 container init 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:13:44 compute-0 podman[304479]: 2026-02-28 10:13:44.250537869 +0000 UTC m=+0.181325510 container start 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.251 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:44 compute-0 podman[304479]: 2026-02-28 10:13:44.254219192 +0000 UTC m=+0.185006863 container attach 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.304 243456 INFO nova.compute.manager [None req-c46f4e69-1731-4d16-80bd-882275079361 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.313 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.325 243456 DEBUG nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.326 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.326 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 DEBUG oslo_concurrency.lockutils [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 DEBUG nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.327 243456 WARNING nova.compute.manager [req-9ae0139e-bd7d-4241-a3cc-133a97f8b742 req-083514cc-8a39-454e-9c9d-288cfa75af60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state active and task_state None.
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]: {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     "0": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "devices": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "/dev/loop3"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             ],
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_name": "ceph_lv0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_size": "21470642176",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "name": "ceph_lv0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "tags": {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_name": "ceph",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.crush_device_class": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.encrypted": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.objectstore": "bluestore",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_id": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.vdo": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.with_tpm": "0"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             },
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "vg_name": "ceph_vg0"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         }
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     ],
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     "1": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "devices": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "/dev/loop4"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             ],
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_name": "ceph_lv1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_size": "21470642176",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "name": "ceph_lv1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "tags": {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_name": "ceph",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.crush_device_class": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.encrypted": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.objectstore": "bluestore",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_id": "1",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.vdo": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.with_tpm": "0"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             },
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "vg_name": "ceph_vg1"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         }
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     ],
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     "2": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "devices": [
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "/dev/loop5"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             ],
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_name": "ceph_lv2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_size": "21470642176",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "name": "ceph_lv2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "tags": {
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.cluster_name": "ceph",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.crush_device_class": "",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.encrypted": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.objectstore": "bluestore",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osd_id": "2",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.vdo": "0",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:                 "ceph.with_tpm": "0"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             },
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "type": "block",
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:             "vg_name": "ceph_vg2"
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:         }
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]:     ]
Feb 28 10:13:44 compute-0 unruffled_hawking[304502]: }
Feb 28 10:13:44 compute-0 systemd[1]: libpod-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope: Deactivated successfully.
Feb 28 10:13:44 compute-0 podman[304533]: 2026-02-28 10:13:44.540256975 +0000 UTC m=+0.043100483 container create e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:13:44 compute-0 conmon[304502]: conmon 71cfe95c4972daa3f663 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope/container/memory.events
Feb 28 10:13:44 compute-0 systemd[1]: Started libpod-conmon-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope.
Feb 28 10:13:44 compute-0 podman[304545]: 2026-02-28 10:13:44.601083346 +0000 UTC m=+0.033693739 container died 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:13:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:44 compute-0 podman[304533]: 2026-02-28 10:13:44.517043043 +0000 UTC m=+0.019886571 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c17d24a091a535ceaf27eeb52692c29dbc8d488028fb382fe11058cd494becd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.619 243456 INFO nova.compute.manager [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Pausing
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.624 243456 DEBUG nova.objects.instance [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:44 compute-0 podman[304533]: 2026-02-28 10:13:44.634873846 +0000 UTC m=+0.137717374 container init e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:13:44 compute-0 podman[304533]: 2026-02-28 10:13:44.640398881 +0000 UTC m=+0.143242389 container start e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:13:44 compute-0 podman[304545]: 2026-02-28 10:13:44.654356794 +0000 UTC m=+0.086967157 container remove 71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hawking, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:13:44 compute-0 systemd[1]: libpod-conmon-71cfe95c4972daa3f66388604209ca0a8f2109a78a9fa1a2520bc06d5be42161.scope: Deactivated successfully.
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273624.6633108, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Paused (Lifecycle Event)
Feb 28 10:13:44 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : New worker (304566) forked
Feb 28 10:13:44 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : Loading success.
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.665 243456 DEBUG nova.compute.manager [None req-4dac448d-5b7c-43e2-bc18-6f2aa140938b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.694 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:44 compute-0 sudo[304317]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:44 compute-0 nova_compute[243452]: 2026-02-28 10:13:44.701 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:44 compute-0 sudo[304575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:13:44 compute-0 sudo[304575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:44 compute-0 sudo[304575]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef3b314df0e8948085be8b2fd2881f735449cbfd97a603a066456394a94770f7-merged.mount: Deactivated successfully.
Feb 28 10:13:44 compute-0 sudo[304600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:13:44 compute-0 sudo[304600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.027 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.032 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.032 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.033 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.035 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.040 243456 INFO nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Terminating instance
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.048 243456 DEBUG nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:45 compute-0 kernel: tapb8b97a75-8d (unregistering): left promiscuous mode
Feb 28 10:13:45 compute-0 NetworkManager[49805]: <info>  [1772273625.0965] device (tapb8b97a75-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00605|binding|INFO|Releasing lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 from this chassis (sb_readonly=0)
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00606|binding|INFO|Setting lport b8b97a75-8d54-4bd6-8372-eaff2a5aa910 down in Southbound
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00607|binding|INFO|Removing iface tapb8b97a75-8d ovn-installed in OVS
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:7a:dc 10.100.0.7'], port_security=['fa:16:3e:dd:7a:dc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a16c1faa-2568-47fc-8006-91c68ae7ae5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8b97a75-8d54-4bd6-8372-eaff2a5aa910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.110 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8b97a75-8d54-4bd6-8372-eaff2a5aa910 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.111 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ee6adef-26da-41a9-91a7-f9a806d37d26
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[09d3445f-b8d4-410d-98a6-9c614fae689a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000047.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000047.scope: Consumed 7.934s CPU time.
Feb 28 10:13:45 compute-0 systemd-machined[209480]: Machine qemu-80-instance-00000047 terminated.
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.148344444 +0000 UTC m=+0.070115553 container create c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.156 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b421b61a-2b48-4f21-9cad-556f77151429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.162 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f09e00a4-2204-4ed6-9e30-fbdca27bc10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.170 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.171 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.171 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.172 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.172 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.173 243456 INFO nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Terminating instance
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.175 243456 DEBUG nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.187 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6d5925-7b99-456b-80fa-21e492181be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 systemd[1]: Started libpod-conmon-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope.
Feb 28 10:13:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.212 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273625.211634, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.213 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Started (Lifecycle Event)
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a6d168-ae5d-44da-b82c-0070d9634548]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ee6adef-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:79:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512415, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304707, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.122905058 +0000 UTC m=+0.044676197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:45 compute-0 kernel: tapfa6d7d29-c1 (unregistering): left promiscuous mode
Feb 28 10:13:45 compute-0 NetworkManager[49805]: <info>  [1772273625.2228] device (tapfa6d7d29-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.233049065 +0000 UTC m=+0.154820194 container init c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff501dcd-2214-4606-8da1-7371147524c9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512429, 'tstamp': 512429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304712, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ee6adef-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512431, 'tstamp': 512431}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304712, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00608|binding|INFO|Releasing lport fa6d7d29-c113-4729-a953-5dc14a05cd16 from this chassis (sb_readonly=0)
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00609|binding|INFO|Setting lport fa6d7d29-c113-4729-a953-5dc14a05cd16 down in Southbound
Feb 28 10:13:45 compute-0 ovn_controller[146846]: 2026-02-28T10:13:45Z|00610|binding|INFO|Removing iface tapfa6d7d29-c1 ovn-installed in OVS
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.239 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.240 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:d1:ad 10.100.0.3'], port_security=['fa:16:3e:2c:d1:ad 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f3def0af-1227-498f-a525-0df8d5bb3768', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f398725990434948ba6927f1c1477015', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9892012a-4876-4e21-91d3-1183462d5768', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72248a44-6915-4752-941f-1938e32574c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fa6d7d29-c113-4729-a953-5dc14a05cd16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.24211185 +0000 UTC m=+0.163882979 container start c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.242 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.244 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273625.2117395, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.244 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Paused (Lifecycle Event)
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.245465565 +0000 UTC m=+0.167236704 container attach c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:13:45 compute-0 ceph-mon[76304]: pgmap v1435: 305 pgs: 305 active+clean; 372 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Feb 28 10:13:45 compute-0 hardcore_bhabha[304708]: 167 167
Feb 28 10:13:45 compute-0 systemd[1]: libpod-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ee6adef-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ee6adef-20, col_values=(('external_ids', {'iface-id': '51b33bea-9c2c-447c-817e-7f72887f045f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:45 compute-0 conmon[304708]: conmon c976f335fdd4ab45ad54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope/container/memory.events
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.254674603 +0000 UTC m=+0.176445712 container died c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.255 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fa6d7d29-c113-4729-a953-5dc14a05cd16 in datapath 2ee6adef-26da-41a9-91a7-f9a806d37d26 unbound from our chassis
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.256 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ee6adef-26da-41a9-91a7-f9a806d37d26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2689f4-1b74-4ff4-af00-650dd9202860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 namespace which is not needed anymore
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.273 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3026c4219fe373d42a74d44c6897bf3069ef98694db766c78462032acc5029ee-merged.mount: Deactivated successfully.
Feb 28 10:13:45 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000048.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000048.scope: Consumed 4.636s CPU time.
Feb 28 10:13:45 compute-0 systemd-machined[209480]: Machine qemu-79-instance-00000048 terminated.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.283 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:45 compute-0 podman[304656]: 2026-02-28 10:13:45.293482325 +0000 UTC m=+0.215253434 container remove c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.305 243456 INFO nova.virt.libvirt.driver [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Instance destroyed successfully.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.306 243456 DEBUG nova.objects.instance [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid a16c1faa-2568-47fc-8006-91c68ae7ae5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:45 compute-0 systemd[1]: libpod-conmon-c976f335fdd4ab45ad54168e8ce3e1bbf7d43894d4ece8354c1c8cf835085690.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.323 243456 DEBUG nova.virt.libvirt.vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-1',id=71,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:37Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=a16c1faa-2568-47fc-8006-91c68ae7ae5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.324 243456 DEBUG nova.network.os_vif_util [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "address": "fa:16:3e:dd:7a:dc", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8b97a75-8d", "ovs_interfaceid": "b8b97a75-8d54-4bd6-8372-eaff2a5aa910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.325 243456 DEBUG nova.network.os_vif_util [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.325 243456 DEBUG os_vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.328 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b97a75-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.335 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.339 243456 INFO os_vif [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:7a:dc,bridge_name='br-int',has_traffic_filtering=True,id=b8b97a75-8d54-4bd6-8372-eaff2a5aa910,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8b97a75-8d')
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.402 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.421 243456 INFO nova.virt.libvirt.driver [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Instance destroyed successfully.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.422 243456 DEBUG nova.objects.instance [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lazy-loading 'resources' on Instance uuid f3def0af-1227-498f-a525-0df8d5bb3768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:45 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : haproxy version is 2.8.14-c23fe91
Feb 28 10:13:45 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [NOTICE]   (303963) : path to executable is /usr/sbin/haproxy
Feb 28 10:13:45 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [WARNING]  (303963) : Exiting Master process...
Feb 28 10:13:45 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [ALERT]    (303963) : Current worker (303965) exited with code 143 (Terminated)
Feb 28 10:13:45 compute-0 neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26[303958]: [WARNING]  (303963) : All workers exited. Exiting... (0)
Feb 28 10:13:45 compute-0 systemd[1]: libpod-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 podman[304778]: 2026-02-28 10:13:45.442348391 +0000 UTC m=+0.067785927 container died d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.444 243456 DEBUG nova.virt.libvirt.vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-650257309',display_name='tempest-MultipleCreateTestJSON-server-650257309-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-650257309-2',id=72,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-28T10:13:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f398725990434948ba6927f1c1477015',ramdisk_id='',reservation_id='r-fb7tvvwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-211863181',owner_user_name='tempest-MultipleCreateTestJSON-211863181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:41Z,user_data=None,user_id='c60ae50e478245b49930e1e71ea14df4',uuid=f3def0af-1227-498f-a525-0df8d5bb3768,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.445 243456 DEBUG nova.network.os_vif_util [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converting VIF {"id": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "address": "fa:16:3e:2c:d1:ad", "network": {"id": "2ee6adef-26da-41a9-91a7-f9a806d37d26", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-528300512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f398725990434948ba6927f1c1477015", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa6d7d29-c1", "ovs_interfaceid": "fa6d7d29-c113-4729-a953-5dc14a05cd16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.446 243456 DEBUG nova.network.os_vif_util [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.447 243456 DEBUG os_vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.448 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa6d7d29-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.458 243456 INFO os_vif [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:d1:ad,bridge_name='br-int',has_traffic_filtering=True,id=fa6d7d29-c113-4729-a953-5dc14a05cd16,network=Network(2ee6adef-26da-41a9-91a7-f9a806d37d26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa6d7d29-c1')
Feb 28 10:13:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:13:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:13:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:13:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:13:45 compute-0 podman[304802]: 2026-02-28 10:13:45.464673278 +0000 UTC m=+0.044741269 container create 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:13:45 compute-0 podman[304778]: 2026-02-28 10:13:45.491792411 +0000 UTC m=+0.117229947 container cleanup d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:13:45 compute-0 systemd[1]: libpod-conmon-d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f.scope: Deactivated successfully.
Feb 28 10:13:45 compute-0 systemd[1]: Started libpod-conmon-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope.
Feb 28 10:13:45 compute-0 podman[304802]: 2026-02-28 10:13:45.447987269 +0000 UTC m=+0.028055280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:13:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:13:45 compute-0 podman[304856]: 2026-02-28 10:13:45.57107862 +0000 UTC m=+0.056030536 container remove d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b396425-d6ba-4db4-9ccb-a0cc2ba7e8cb]: (4, ('Sat Feb 28 10:13:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f)\nd4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f\nSat Feb 28 10:13:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 (d4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f)\nd4b4c1f5dead3644c9c9c70435d4f80a342b2b24a1fdd88cdaa91df662f8d71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 podman[304802]: 2026-02-28 10:13:45.578406086 +0000 UTC m=+0.158474117 container init 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c580ec-e32b-4ff2-8938-e121ca111540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.583 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ee6adef-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:45 compute-0 podman[304802]: 2026-02-28 10:13:45.585466695 +0000 UTC m=+0.165534686 container start 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:13:45 compute-0 kernel: tap2ee6adef-20: left promiscuous mode
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 podman[304802]: 2026-02-28 10:13:45.589753665 +0000 UTC m=+0.169821676 container attach 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.598 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c35337-9d43-4c52-a136-a3dbb1f8cfa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.612 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[584696a7-1640-471d-a157-112e40597b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.615 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9044428-b5db-46eb-ad4f-340aa6d3fae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.637 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16150e7a-7455-4895-ab21-07b0c0e6ce34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512405, 'reachable_time': 16566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304882, 'error': None, 'target': 'ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.640 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ee6adef-26da-41a9-91a7-f9a806d37d26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:13:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:45.641 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[23fb139d-be34-4538-8aef-bc7380056857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.737 243456 INFO nova.virt.libvirt.driver [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deleting instance files /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d_del
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.738 243456 INFO nova.virt.libvirt.driver [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deletion of /var/lib/nova/instances/a16c1faa-2568-47fc-8006-91c68ae7ae5d_del complete
Feb 28 10:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff72f63fb99d7b1ef7c7e48e92434ddd62cf45aed08cb6ac8121bfdf9bdc9341-merged.mount: Deactivated successfully.
Feb 28 10:13:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ee6adef\x2d26da\x2d41a9\x2d91a7\x2df9a806d37d26.mount: Deactivated successfully.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.791 243456 INFO nova.virt.libvirt.driver [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deleting instance files /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768_del
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.792 243456 INFO nova.virt.libvirt.driver [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deletion of /var/lib/nova/instances/f3def0af-1227-498f-a525-0df8d5bb3768_del complete
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 INFO nova.compute.manager [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 0.79 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 DEBUG oslo.service.loopingcall [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.840 243456 DEBUG nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.841 243456 DEBUG nova.network.neutron [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.854 243456 INFO nova.compute.manager [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.855 243456 DEBUG oslo.service.loopingcall [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.855 243456 DEBUG nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:45 compute-0 nova_compute[243452]: 2026-02-28 10:13:45.856 243456 DEBUG nova.network.neutron [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Feb 28 10:13:46 compute-0 lvm[304954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:13:46 compute-0 lvm[304954]: VG ceph_vg0 finished
Feb 28 10:13:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:13:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2741424992' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:13:46 compute-0 lvm[304956]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:13:46 compute-0 lvm[304956]: VG ceph_vg1 finished
Feb 28 10:13:46 compute-0 lvm[304957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:13:46 compute-0 lvm[304957]: VG ceph_vg2 finished
Feb 28 10:13:46 compute-0 hardcore_rhodes[304866]: {}
Feb 28 10:13:46 compute-0 systemd[1]: libpod-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Deactivated successfully.
Feb 28 10:13:46 compute-0 systemd[1]: libpod-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Consumed 1.279s CPU time.
Feb 28 10:13:46 compute-0 podman[304802]: 2026-02-28 10:13:46.460901541 +0000 UTC m=+1.040969562 container died 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-009387b1e1c57f2dd3516b1e9061f77e130b06aedaf967af89ad5a277f486ad6-merged.mount: Deactivated successfully.
Feb 28 10:13:46 compute-0 podman[304802]: 2026-02-28 10:13:46.513812748 +0000 UTC m=+1.093880759 container remove 0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:13:46 compute-0 systemd[1]: libpod-conmon-0bfdeabc66373f8ab305ca5f897975be09a6b54d5444a4a3a4af3210c9b0b6cf.scope: Deactivated successfully.
Feb 28 10:13:46 compute-0 sudo[304600]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:13:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:13:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:46 compute-0 sudo[304971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:13:46 compute-0 sudo[304971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:13:46 compute-0 sudo[304971]: pam_unix(sudo:session): session closed for user root
Feb 28 10:13:47 compute-0 ceph-mon[76304]: pgmap v1436: 305 pgs: 305 active+clean; 364 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 265 op/s
Feb 28 10:13:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.632 243456 DEBUG nova.network.neutron [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.667 243456 DEBUG nova.network.neutron [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.669 243456 INFO nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Took 1.81 seconds to deallocate network for instance.
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.696 243456 INFO nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Took 1.86 seconds to deallocate network for instance.
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.750 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.751 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.787 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.804 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.804 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.805 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.806 243456 WARNING nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-unplugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state deleted and task_state None.
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.806 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.807 243456 DEBUG oslo_concurrency.lockutils [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.808 243456 DEBUG nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] No waiting events found dispatching network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.808 243456 WARNING nova.compute.manager [req-a4932efe-ba17-42b5-b40b-6be30f856988 req-e3753b3e-681e-4b62-a688-742e958cb55a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received unexpected event network-vif-plugged-fa6d7d29-c113-4729-a953-5dc14a05cd16 for instance with vm_state deleted and task_state None.
Feb 28 10:13:47 compute-0 nova_compute[243452]: 2026-02-28 10:13:47.876 243456 DEBUG oslo_concurrency.processutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.032 243456 DEBUG nova.compute.manager [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.033 243456 DEBUG oslo_concurrency.lockutils [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.034 243456 DEBUG nova.compute.manager [req-76ad6b27-4c89-44d3-8993-66d231ec3f25 req-42993927-e4d1-4474-84ca-f89e69a516ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Processing event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.035 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.042 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.045 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273628.044446, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.046 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Resumed (Lifecycle Event)
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.053 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance spawned successfully.
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.053 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.072 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.088 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.088 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.089 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.089 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.090 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.090 243456 DEBUG nova.virt.libvirt.driver [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:13:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.125 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:13:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 328 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.154 243456 INFO nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 13.46 seconds to spawn the instance on the hypervisor.
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.155 243456 DEBUG nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.209 243456 INFO nova.compute.manager [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 14.53 seconds to build instance.
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.226 243456 DEBUG oslo_concurrency.lockutils [None req-5e6a71f1-8eba-4b0f-a921-eb149392fd71 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.274 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.275 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.291 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.359 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675098835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.459 243456 DEBUG oslo_concurrency.processutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.466 243456 DEBUG nova.compute.provider_tree [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.483 243456 DEBUG nova.scheduler.client.report [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.506 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.509 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.552 243456 INFO nova.scheduler.client.report [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance f3def0af-1227-498f-a525-0df8d5bb3768
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.647 243456 DEBUG oslo_concurrency.lockutils [None req-a64a8cc9-09f9-4c06-af9c-565aab8d12e3 c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "f3def0af-1227-498f-a525-0df8d5bb3768" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.651 243456 DEBUG oslo_concurrency.processutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:48 compute-0 nova_compute[243452]: 2026-02-28 10:13:48.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:49 compute-0 ceph-mon[76304]: pgmap v1437: 305 pgs: 305 active+clean; 328 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Feb 28 10:13:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/675098835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576097265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.303 243456 DEBUG oslo_concurrency.processutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.313 243456 DEBUG nova.compute.provider_tree [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.329 243456 DEBUG nova.scheduler.client.report [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.357 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.364 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.373 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.374 243456 INFO nova.compute.claims [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.401 243456 INFO nova.scheduler.client.report [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Deleted allocations for instance a16c1faa-2568-47fc-8006-91c68ae7ae5d
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.495 243456 DEBUG oslo_concurrency.lockutils [None req-43e0f388-79fb-40ed-a2a1-0d24155b917e c60ae50e478245b49930e1e71ea14df4 f398725990434948ba6927f1c1477015 - - default default] Lock "a16c1faa-2568-47fc-8006-91c68ae7ae5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.540 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.687 243456 INFO nova.compute.manager [None req-70be3216-ee33-411e-97b0-116e09b04da6 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output
Feb 28 10:13:49 compute-0 nova_compute[243452]: 2026-02-28 10:13:49.704 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:13:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894740837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.098 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.103 243456 DEBUG nova.compute.provider_tree [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.125 243456 DEBUG nova.scheduler.client.report [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.146 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.147 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 308 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.0 MiB/s wr, 270 op/s
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.211 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.212 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.234 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.255 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3576097265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2894740837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.357 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.360 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.360 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating image(s)
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.386 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.414 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.440 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.444 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.493 243456 DEBUG nova.policy [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec5caafc16ec43a493f7d553353a27c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '809bf856030f4316b385ba1c02291ca7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.525 243456 INFO nova.compute.manager [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Unpausing
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.527 243456 DEBUG nova.objects.instance [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.533 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Received event network-vif-deleted-fa6d7d29-c113-4729-a953-5dc14a05cd16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.534 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Received event network-vif-deleted-b8b97a75-8d54-4bd6-8372-eaff2a5aa910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.534 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.535 243456 DEBUG oslo_concurrency.lockutils [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.536 243456 DEBUG nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.536 243456 WARNING nova.compute.manager [req-0ec19d9e-402d-4158-967f-02b4700fad0c req-dedb58cc-f0bf-49c5-90df-3961a8aff8fd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state None.
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.538 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.538 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.539 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.539 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.562 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.566 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.618 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273630.6172748, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.622 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Resumed (Lifecycle Event)
Feb 28 10:13:50 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.630 243456 DEBUG nova.virt.libvirt.guest [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.631 243456 DEBUG nova.compute.manager [None req-3a0ecb06-216d-4943-9b0f-92f6a1c9b510 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.645 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.649 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.686 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.824 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.858 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.859 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.895 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.905 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] resizing rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.965 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:50 compute-0 nova_compute[243452]: 2026-02-28 10:13:50.965 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.002 243456 DEBUG nova.objects.instance [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.007 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.007 243456 INFO nova.compute.claims [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.032 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.033 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Ensure instance console log exists: /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.033 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.034 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.034 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.160 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Successfully created port: 26c42747-4919-4440-9b73-cf3516525108 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.180 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:51 compute-0 ceph-mon[76304]: pgmap v1438: 305 pgs: 305 active+clean; 308 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.0 MiB/s wr, 270 op/s
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.305 243456 INFO nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Rebuilding instance
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.521 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.525 243456 INFO nova.compute.manager [None req-9e455dd4-5a06-4f87-abb8-7898c6c5bc2e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Get console output
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.533 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.538 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.593 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_requests' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.613 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.626 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.639 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.655 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.660 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:13:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213241902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.839 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.845 243456 DEBUG nova.compute.provider_tree [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.860 243456 DEBUG nova.scheduler.client.report [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.885 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.886 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.935 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.936 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.956 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:13:51 compute-0 nova_compute[243452]: 2026-02-28 10:13:51.974 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:13:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 279 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 228 KiB/s wr, 238 op/s
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.538 243456 DEBUG nova.policy [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec5caafc16ec43a493f7d553353a27c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '809bf856030f4316b385ba1c02291ca7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:13:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.542 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.543 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.543 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating image(s)
Feb 28 10:13:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4213241902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:53 compute-0 ceph-mon[76304]: pgmap v1439: 305 pgs: 305 active+clean; 279 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 228 KiB/s wr, 238 op/s
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.567 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.594 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.615 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.619 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.652 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Successfully updated port: 26c42747-4919-4440-9b73-cf3516525108 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.655 243456 DEBUG nova.compute.manager [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-changed-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG nova.compute.manager [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Refreshing instance network info cache due to event network-changed-26c42747-4919-4440-9b73-cf3516525108. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.656 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.659 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Refreshing network info cache for port 26c42747-4919-4440-9b73-cf3516525108 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG nova.compute.manager [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG nova.compute.manager [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing instance network info cache due to event network-changed-9be79a2c-76fa-4a58-a532-eac0151d2bb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.661 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.662 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Refreshing network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.691 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.692 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.692 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.693 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.714 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.718 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.762 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.768 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.772 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.774 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.776 243456 INFO nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Terminating instance
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.777 243456 DEBUG nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:13:53 compute-0 kernel: tap9be79a2c-76 (unregistering): left promiscuous mode
Feb 28 10:13:53 compute-0 NetworkManager[49805]: <info>  [1772273633.9145] device (tap9be79a2c-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:53 compute-0 ovn_controller[146846]: 2026-02-28T10:13:53Z|00611|binding|INFO|Releasing lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 from this chassis (sb_readonly=0)
Feb 28 10:13:53 compute-0 ovn_controller[146846]: 2026-02-28T10:13:53Z|00612|binding|INFO|Setting lport 9be79a2c-76fa-4a58-a532-eac0151d2bb1 down in Southbound
Feb 28 10:13:53 compute-0 ovn_controller[146846]: 2026-02-28T10:13:53Z|00613|binding|INFO|Removing iface tap9be79a2c-76 ovn-installed in OVS
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.931 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:18:0c 10.100.0.3'], port_security=['fa:16:3e:54:18:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c4a13c84-8fca-43c8-87c3-fde9f5d1c031', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aeb82654-4ae5-4b85-ab47-1d2fe984318d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eff969a-a040-4a86-b625-9ebfe779e412, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9be79a2c-76fa-4a58-a532-eac0151d2bb1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:13:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.934 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9be79a2c-76fa-4a58-a532-eac0151d2bb1 in datapath 5d8683e2-4377-476f-ae8b-6d3dd4e61943 unbound from our chassis
Feb 28 10:13:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.936 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d8683e2-4377-476f-ae8b-6d3dd4e61943, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:13:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.938 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b24985bd-ff78-4ad7-a5a0-bd9caba1a2b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:53.940 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 namespace which is not needed anymore
Feb 28 10:13:53 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.943 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:53 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Deactivated successfully.
Feb 28 10:13:53 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000046.scope: Consumed 12.040s CPU time.
Feb 28 10:13:53 compute-0 systemd-machined[209480]: Machine qemu-78-instance-00000046 terminated.
Feb 28 10:13:53 compute-0 ovn_controller[146846]: 2026-02-28T10:13:53Z|00614|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 10:13:53 compute-0 ovn_controller[146846]: 2026-02-28T10:13:53Z|00615|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:53.999 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.036 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Instance destroyed successfully.
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.037 243456 DEBUG nova.objects.instance [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid c4a13c84-8fca-43c8-87c3-fde9f5d1c031 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.075 243456 DEBUG nova.virt.libvirt.vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132377693',display_name='tempest-TestNetworkAdvancedServerOps-server-1132377693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132377693',id=70,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF644+rgLhlFamGuaTmUUFO0KlVXAPKX08nnbDy2rOoT8ige3Fj9kAWfB90244fjvTph7JmDo+5JGi6o3TsUwr4tGSK5dI2cnuR44Pht5/KXvc5e6feRQxx/zv8GZOsh+w==',key_name='tempest-TestNetworkAdvancedServerOps-1238511245',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-8dn1q2v0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=c4a13c84-8fca-43c8-87c3-fde9f5d1c031,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.075 243456 DEBUG nova.network.os_vif_util [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.076 243456 DEBUG nova.network.os_vif_util [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.077 243456 DEBUG os_vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.079 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9be79a2c-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.084 243456 INFO os_vif [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:18:0c,bridge_name='br-int',has_traffic_filtering=True,id=9be79a2c-76fa-4a58-a532-eac0151d2bb1,network=Network(5d8683e2-4377-476f-ae8b-6d3dd4e61943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9be79a2c-76')
Feb 28 10:13:54 compute-0 ovn_controller[146846]: 2026-02-28T10:13:54Z|00616|binding|INFO|Releasing lport 6fea9809-bf5e-43e8-beec-b6cf6edc52cd from this chassis (sb_readonly=0)
Feb 28 10:13:54 compute-0 ovn_controller[146846]: 2026-02-28T10:13:54Z|00617|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : haproxy version is 2.8.14-c23fe91
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [NOTICE]   (302871) : path to executable is /usr/sbin/haproxy
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : Exiting Master process...
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : Exiting Master process...
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [ALERT]    (302871) : Current worker (302874) exited with code 143 (Terminated)
Feb 28 10:13:54 compute-0 neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943[302867]: [WARNING]  (302871) : All workers exited. Exiting... (0)
Feb 28 10:13:54 compute-0 systemd[1]: libpod-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope: Deactivated successfully.
Feb 28 10:13:54 compute-0 podman[305375]: 2026-02-28 10:13:54.117676213 +0000 UTC m=+0.059591436 container died a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.139 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612-userdata-shm.mount: Deactivated successfully.
Feb 28 10:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8da54b6f18717d512266281e970794ab514d96d93e0121c460ef59a108898240-merged.mount: Deactivated successfully.
Feb 28 10:13:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 292 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 692 KiB/s wr, 240 op/s
Feb 28 10:13:54 compute-0 podman[305375]: 2026-02-28 10:13:54.160160918 +0000 UTC m=+0.102076141 container cleanup a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:13:54 compute-0 systemd[1]: libpod-conmon-a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612.scope: Deactivated successfully.
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.236 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] resizing rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:13:54 compute-0 podman[305439]: 2026-02-28 10:13:54.244397927 +0000 UTC m=+0.062870459 container remove a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.249 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1906cf62-5f9f-4a71-8d67-4bacd0f671b7]: (4, ('Sat Feb 28 10:13:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 (a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612)\na9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612\nSat Feb 28 10:13:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 (a9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612)\na9fed8d08c3570a59d50194d0acc2507ab3c12f4d1a57a443a55adfedffb7612\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46769a2d-6d0f-4d9c-9e78-8d9cc8301bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.254 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8683e2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:54 compute-0 kernel: tap5d8683e2-40: left promiscuous mode
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ecf8bc-35ef-4418-bb69-d5e94f99eab0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16a004b4-f8f1-40e4-85ff-d4a1178a5c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d068ad3e-1d03-4558-a7be-0e94a40e391b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[349cd4b8-33fd-4892-9e2b-34661a05d740]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511194, 'reachable_time': 24977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305492, 'error': None, 'target': 'ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d5d8683e2\x2d4377\x2d476f\x2dae8b\x2d6d3dd4e61943.mount: Deactivated successfully.
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.305 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d8683e2-4377-476f-ae8b-6d3dd4e61943 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:13:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:54.305 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[11b12927-238b-49df-8191-dc3195a2da8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.353 243456 DEBUG nova.objects.instance [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.371 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.372 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Ensure instance console log exists: /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.372 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.373 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.373 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.376 243456 DEBUG nova.network.neutron [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.390 243456 DEBUG oslo_concurrency.lockutils [req-2971f4e5-4b81-4452-8d74-5d1159cb8c77 req-a72efe1b-6825-4e58-af2d-4367dab88ddb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.391 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.391 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.414 243456 INFO nova.virt.libvirt.driver [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deleting instance files /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_del
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.415 243456 INFO nova.virt.libvirt.driver [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deletion of /var/lib/nova/instances/c4a13c84-8fca-43c8-87c3-fde9f5d1c031_del complete
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.469 243456 INFO nova.compute.manager [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 0.69 seconds to destroy the instance on the hypervisor.
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG oslo.service.loopingcall [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.470 243456 DEBUG nova.network.neutron [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.596887) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634596940, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2464, "num_deletes": 514, "total_data_size": 3253611, "memory_usage": 3329568, "flush_reason": "Manual Compaction"}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 28 10:13:54 compute-0 ceph-mon[76304]: pgmap v1440: 305 pgs: 305 active+clean; 292 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 692 KiB/s wr, 240 op/s
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634617148, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3191416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28205, "largest_seqno": 30668, "table_properties": {"data_size": 3180766, "index_size": 6312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25684, "raw_average_key_size": 19, "raw_value_size": 3157123, "raw_average_value_size": 2451, "num_data_blocks": 276, "num_entries": 1288, "num_filter_entries": 1288, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273454, "oldest_key_time": 1772273454, "file_creation_time": 1772273634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 20333 microseconds, and 10078 cpu microseconds.
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.617218) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3191416 bytes OK
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.617250) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620517) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620540) EVENT_LOG_v1 {"time_micros": 1772273634620531, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.620568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3242173, prev total WAL file size 3242173, number of live WAL files 2.
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.621730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3116KB)], [62(8552KB)]
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634621776, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11948708, "oldest_snapshot_seqno": -1}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5678 keys, 10204970 bytes, temperature: kUnknown
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634679755, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10204970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10163137, "index_size": 26555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142427, "raw_average_key_size": 25, "raw_value_size": 10057371, "raw_average_value_size": 1771, "num_data_blocks": 1083, "num_entries": 5678, "num_filter_entries": 5678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273634, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.680142) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10204970 bytes
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.681825) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.6 rd, 175.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6717, records dropped: 1039 output_compression: NoCompression
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.681848) EVENT_LOG_v1 {"time_micros": 1772273634681836, "job": 34, "event": "compaction_finished", "compaction_time_micros": 58110, "compaction_time_cpu_micros": 22512, "output_level": 6, "num_output_files": 1, "total_output_size": 10204970, "num_input_records": 6717, "num_output_records": 5678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634682378, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273634683325, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.621619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:13:54.683400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.860 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:54 compute-0 nova_compute[243452]: 2026-02-28 10:13:54.981 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Successfully created port: a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:13:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 314 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 262 op/s
Feb 28 10:13:57 compute-0 ceph-mon[76304]: pgmap v1441: 305 pgs: 305 active+clean; 314 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 262 op/s
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.254 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updated VIF entry in instance network info cache for port 9be79a2c-76fa-4a58-a532-eac0151d2bb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.255 243456 DEBUG nova.network.neutron [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [{"id": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "address": "fa:16:3e:54:18:0c", "network": {"id": "5d8683e2-4377-476f-ae8b-6d3dd4e61943", "bridge": "br-int", "label": "tempest-network-smoke--813977532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9be79a2c-76", "ovs_interfaceid": "9be79a2c-76fa-4a58-a532-eac0151d2bb1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.289 243456 DEBUG oslo_concurrency.lockutils [req-0b09a729-96f8-4215-9c01-45e7a8db858e req-f1dd96db-4804-4ef8-8605-ecfa8ad36b03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a13c84-8fca-43c8-87c3-fde9f5d1c031" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.397 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG oslo_concurrency.lockutils [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.398 243456 DEBUG nova.compute.manager [req-8e724451-4399-4604-b68a-387ba927b915 req-97b26794-946e-4d44-99eb-938e6d4f4efe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-unplugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.623 243456 DEBUG nova.network.neutron [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.645 243456 INFO nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Took 3.17 seconds to deallocate network for instance.
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.687 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.688 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.791 243456 DEBUG oslo_concurrency.processutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.831 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Successfully updated port: a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.837 243456 DEBUG nova.compute.manager [req-72990cc7-252c-4bd9-91cb-00639e508100 req-d2725763-5a6a-4098-a484-fccb0c1d78ab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-deleted-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.849 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.852 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:13:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.859 243456 DEBUG nova.network.neutron [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.888 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.889 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance network_info: |[{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.898 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start _get_guest_xml network_info=[{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.911 243456 WARNING nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.917 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.918 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.924 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.924 243456 DEBUG nova.virt.libvirt.host [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.925 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.925 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.926 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.926 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.927 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.928 243456 DEBUG nova.virt.hardware [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.932 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:57 compute-0 nova_compute[243452]: 2026-02-28 10:13:57.975 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:13:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 292 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Feb 28 10:13:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:13:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381478180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.431 243456 DEBUG oslo_concurrency.processutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.437 243456 DEBUG nova.compute.provider_tree [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.458 243456 DEBUG nova.scheduler.client.report [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.483 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164005645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.509 243456 INFO nova.scheduler.client.report [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance c4a13c84-8fca-43c8-87c3-fde9f5d1c031
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.517 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.551 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.557 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.669 243456 DEBUG oslo_concurrency.lockutils [None req-53e15627-3075-470f-9e63-a95d9389fa8e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.963 243456 DEBUG nova.network.neutron [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.987 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.987 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance network_info: |[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.991 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start _get_guest_xml network_info=[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:13:58 compute-0 nova_compute[243452]: 2026-02-28 10:13:58.998 243456 WARNING nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.005 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.006 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.013 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.014 243456 DEBUG nova.virt.libvirt.host [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.017 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.017 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.018 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.018 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.019 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.019 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.020 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.020 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.021 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.022 243456 DEBUG nova.virt.hardware [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.026 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490792730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.083 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.085 243456 DEBUG nova.virt.libvirt.vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempes
t-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.085 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.086 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.088 243456 DEBUG nova.objects.instance [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.107 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <uuid>40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</uuid>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <name>instance-0000004a</name>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-191562024</nova:name>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:57</nova:creationTime>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <nova:port uuid="26c42747-4919-4440-9b73-cf3516525108">
Feb 28 10:13:59 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="serial">40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="uuid">40f8f3fa-1f1c-440e-a640-5a223b1ca9b8</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk">
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config">
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:13:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:5f:43:e1"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <target dev="tap26c42747-49"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/console.log" append="off"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:13:59 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:13:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:13:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:13:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:13:59 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.108 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Preparing to wait for external event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.108 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.109 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.109 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.110 243456 DEBUG nova.virt.libvirt.vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.110 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.111 243456 DEBUG nova.network.os_vif_util [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.111 243456 DEBUG os_vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c42747-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26c42747-49, col_values=(('external_ids', {'iface-id': '26c42747-4919-4440-9b73-cf3516525108', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:43:e1', 'vm-uuid': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:13:59 compute-0 NetworkManager[49805]: <info>  [1772273639.1204] manager: (tap26c42747-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.126 243456 INFO os_vif [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49')
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.185 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:5f:43:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.186 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Using config drive
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.206 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:59 compute-0 ceph-mon[76304]: pgmap v1442: 305 pgs: 305 active+clean; 292 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Feb 28 10:13:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1381478180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:13:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/164005645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/490792730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.492 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.493 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.494 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a13c84-8fca-43c8-87c3-fde9f5d1c031-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] No waiting events found dispatching network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.495 243456 WARNING nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Received unexpected event network-vif-plugged-9be79a2c-76fa-4a58-a532-eac0151d2bb1 for instance with vm_state deleted and task_state None.
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-changed-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG nova.compute.manager [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Refreshing instance network info cache due to event network-changed-a2f66c0b-78f3-49cb-929b-5e9b4072beb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.496 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.497 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.497 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Refreshing network info cache for port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:13:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:13:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488652800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.590 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.616 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.626 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.690 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Creating config drive at /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.700 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc481ioc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.853 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc481ioc6" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.879 243456 DEBUG nova.storage.rbd_utils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:13:59 compute-0 nova_compute[243452]: 2026-02-28 10:13:59.884 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.071 243456 DEBUG oslo_concurrency.processutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.073 243456 INFO nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deleting local config drive /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8/disk.config because it was imported into RBD.
Feb 28 10:14:00 compute-0 kernel: tap26c42747-49: entered promiscuous mode
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00618|binding|INFO|Claiming lport 26c42747-4919-4440-9b73-cf3516525108 for this chassis.
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00619|binding|INFO|26c42747-4919-4440-9b73-cf3516525108: Claiming fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.1281] manager: (tap26c42747-49): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.141 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:43:e1 10.100.0.5'], port_security=['fa:16:3e:5f:43:e1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=26c42747-4919-4440-9b73-cf3516525108) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.142 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 26c42747-4919-4440-9b73-cf3516525108 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.143 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:00 compute-0 systemd-udevd[305725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 297 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f41958-6541-4b6e-a672-abfb6c7a67fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.164 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap621843b6-21 in ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.1674] device (tap26c42747-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.1682] device (tap26c42747-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.169 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap621843b6-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[671e6f7c-6071-49e4-9339-90100f842610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.171 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50ae1198-7b04-45c8-a157-2f446d36cb05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00620|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 ovn-installed in OVS
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00621|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 up in Southbound
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 systemd-machined[209480]: New machine qemu-82-instance-0000004a.
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.186 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8b33c79d-de28-4940-8557-285adbdf1ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-0000004a.
Feb 28 10:14:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715874071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.196 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[060e94f6-f7eb-4e5b-97ef-a046ae29e3be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.220 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.222 243456 DEBUG nova.virt.libvirt.vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:52Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.223 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.224 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.225 243456 DEBUG nova.objects.instance [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1852a1-434a-4c9b-ae3e-6673f5f2cfe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.231 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a30074b0-36e1-4dd3-ad7f-752ec3759e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.2323] manager: (tap621843b6-20): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Feb 28 10:14:00 compute-0 systemd-udevd[305730]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.240 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <uuid>dc2dbab8-312e-4130-8141-d848beeb6bec</uuid>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <name>instance-0000004b</name>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1249757035</nova:name>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:13:58</nova:creationTime>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <nova:port uuid="a2f66c0b-78f3-49cb-929b-5e9b4072beb0">
Feb 28 10:14:00 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="serial">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="uuid">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk">
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config">
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:00 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:39:b7:32"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <target dev="tapa2f66c0b-78"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log" append="off"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:00 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:00 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:00 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:00 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:00 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.240 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Preparing to wait for external event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3488652800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.241 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/715874071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.242 243456 DEBUG nova.virt.libvirt.vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:52Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.242 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG nova.network.os_vif_util [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG os_vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.244 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.244 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.250 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2f66c0b-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.250 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2f66c0b-78, col_values=(('external_ids', {'iface-id': 'a2f66c0b-78f3-49cb-929b-5e9b4072beb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:b7:32', 'vm-uuid': 'dc2dbab8-312e-4130-8141-d848beeb6bec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.2557] manager: (tapa2f66c0b-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.264 243456 INFO os_vif [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78')
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.272 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[79563ad7-70f7-4758-9b50-89dc933c08e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.276 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1833c74e-a00b-4013-a539-ec4f59b4938b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.2964] device (tap621843b6-20): carrier: link connected
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.301 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4648cd-3230-4d93-8b0f-573dee937da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.310 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.310 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.311 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:39:b7:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.311 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Using config drive
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe49798-af5f-4504-8b91-32356f8955cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305766, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.336 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8502df1b-eae3-4e4d-9c4d-3832d67f432a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:7e0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514758, 'tstamp': 514758}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305782, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.344 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273625.297358, a16c1faa-2568-47fc-8006-91c68ae7ae5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.345 243456 INFO nova.compute.manager [-] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] VM Stopped (Lifecycle Event)
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad5cf27-ba8f-4df1-a94a-cbe0c4d0e202]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305786, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.372 243456 DEBUG nova.compute.manager [None req-819fbf67-d6fc-4dfd-a0c4-a3b5ed5249ea - - - - - -] [instance: a16c1faa-2568-47fc-8006-91c68ae7ae5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9711357-d166-43bd-b0c6-57c8326094f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.415 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273625.4146235, f3def0af-1227-498f-a525-0df8d5bb3768 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.416 243456 INFO nova.compute.manager [-] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] VM Stopped (Lifecycle Event)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.435 243456 DEBUG nova.compute.manager [None req-0f8ea9ad-92f8-4d3f-99e6-6fd73e986b25 - - - - - -] [instance: f3def0af-1227-498f-a525-0df8d5bb3768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.444 243456 DEBUG nova.compute.manager [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG oslo_concurrency.lockutils [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.445 243456 DEBUG nova.compute.manager [req-2498c707-f814-44de-b1b0-81f2e06bb299 req-0cabaa93-1d2e-4fd2-b39e-855a54175580 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Processing event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[693b2532-eba4-4d7c-b72d-f898bd733bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 kernel: tap621843b6-20: entered promiscuous mode
Feb 28 10:14:00 compute-0 NetworkManager[49805]: <info>  [1772273640.4603] manager: (tap621843b6-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.459 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.465 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 ovn_controller[146846]: 2026-02-28T10:14:00Z|00622|binding|INFO|Releasing lport 92bcea78-9a21-4d44-99f4-fd3e41fc7e97 from this chassis (sb_readonly=0)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.480 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.481 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.482 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c521fdd0-dd71-4d88-ade3-645ac254b9ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.483 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/621843b6-256a-4ce5-83c3-83b888738508.pid.haproxy
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:00.484 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'env', 'PROCESS_TAG=haproxy-621843b6-256a-4ce5-83c3-83b888738508', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/621843b6-256a-4ce5-83c3-83b888738508.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:00 compute-0 podman[305857]: 2026-02-28 10:14:00.841202327 +0000 UTC m=+0.053893387 container create 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.851 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updated VIF entry in instance network info cache for port a2f66c0b-78f3-49cb-929b-5e9b4072beb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.852 243456 DEBUG nova.network.neutron [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.860 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating config drive at /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.864 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqmvk2nss execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:00 compute-0 systemd[1]: Started libpod-conmon-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope.
Feb 28 10:14:00 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.901 243456 DEBUG oslo_concurrency.lockutils [req-57fc52f3-0b1a-4550-9394-b522b1a8e9b5 req-4b3bdd7d-670d-4828-b7b9-0e5341861772 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.903 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.8822312, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.903 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Started (Lifecycle Event)
Feb 28 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cfd78d30250744f4812bed5eda336963389aeb75643f7f79dbbc06f1fb2c979/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.906 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:00 compute-0 podman[305857]: 2026-02-28 10:14:00.812317215 +0000 UTC m=+0.025008305 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.912 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.918 243456 INFO nova.virt.libvirt.driver [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance spawned successfully.
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.919 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:00 compute-0 podman[305857]: 2026-02-28 10:14:00.919724744 +0000 UTC m=+0.132415844 container init 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.925 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:00 compute-0 podman[305857]: 2026-02-28 10:14:00.926879605 +0000 UTC m=+0.139570695 container start 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.929 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.939 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.939 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.940 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.941 243456 DEBUG nova.virt.libvirt.driver [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.8829153, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Paused (Lifecycle Event)
Feb 28 10:14:00 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : New worker (305889) forked
Feb 28 10:14:00 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : Loading success.
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.983 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.987 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273640.9116664, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:00 compute-0 nova_compute[243452]: 2026-02-28 10:14:00.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Resumed (Lifecycle Event)
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.003 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqmvk2nss" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.029 243456 DEBUG nova.storage.rbd_utils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.036 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.084 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.086 243456 INFO nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 10.73 seconds to spawn the instance on the hypervisor.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.087 243456 DEBUG nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.091 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.119 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.157 243456 INFO nova.compute.manager [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 12.82 seconds to build instance.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.179 243456 DEBUG oslo_concurrency.processutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.180 243456 INFO nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting local config drive /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config because it was imported into RBD.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.185 243456 DEBUG oslo_concurrency.lockutils [None req-702a6883-ae8a-4ae2-9dcc-b7f5f6880495 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:01 compute-0 kernel: tapa2f66c0b-78: entered promiscuous mode
Feb 28 10:14:01 compute-0 NetworkManager[49805]: <info>  [1772273641.2338] manager: (tapa2f66c0b-78): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:01 compute-0 systemd-udevd[305746]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:01 compute-0 ovn_controller[146846]: 2026-02-28T10:14:01Z|00623|binding|INFO|Claiming lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for this chassis.
Feb 28 10:14:01 compute-0 ovn_controller[146846]: 2026-02-28T10:14:01Z|00624|binding|INFO|a2f66c0b-78f3-49cb-929b-5e9b4072beb0: Claiming fa:16:3e:39:b7:32 10.100.0.6
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:01 compute-0 ovn_controller[146846]: 2026-02-28T10:14:01Z|00625|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 ovn-installed in OVS
Feb 28 10:14:01 compute-0 ovn_controller[146846]: 2026-02-28T10:14:01Z|00626|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 up in Southbound
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.246 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:01 compute-0 NetworkManager[49805]: <info>  [1772273641.2528] device (tapa2f66c0b-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:01 compute-0 NetworkManager[49805]: <info>  [1772273641.2535] device (tapa2f66c0b-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.252 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:01 compute-0 ceph-mon[76304]: pgmap v1443: 305 pgs: 305 active+clean; 297 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Feb 28 10:14:01 compute-0 systemd-machined[209480]: New machine qemu-83-instance-0000004b.
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.274 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8329d044-7f53-4b06-a979-3243bb09585e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-0000004b.
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.303 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3950d3c6-5cda-4132-9063-ccdbac519e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.308 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[52b2559c-8e10-44b4-bf0d-e39f156513ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.337 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6593b9-6702-4920-b5fd-102f2e36e61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.364 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[550440eb-ab92-49b2-8694-57cff066b0ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305959, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0592c1cf-dc80-4a07-85c3-34599fec0910]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305961, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305961, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.384 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.389 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:01.389 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.594 243456 DEBUG nova.compute.manager [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.595 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.595 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.596 243456 DEBUG oslo_concurrency.lockutils [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.596 243456 DEBUG nova.compute.manager [req-d8b4bdfe-61cf-4d8c-84d9-04ee44a9f765 req-623bfe39-3daf-4ecb-9202-c045b2e372a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Processing event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.821 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8204968, dc2dbab8-312e-4130-8141-d848beeb6bec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.821 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Started (Lifecycle Event)
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.823 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.827 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.831 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance spawned successfully.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.831 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.849 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.855 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.864 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.864 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.865 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.866 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.866 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.867 243456 DEBUG nova.virt.libvirt.driver [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.891 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.892 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8207939, dc2dbab8-312e-4130-8141-d848beeb6bec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.892 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Paused (Lifecycle Event)
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.917 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.923 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273641.8274448, dc2dbab8-312e-4130-8141-d848beeb6bec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.924 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Resumed (Lifecycle Event)
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.928 243456 INFO nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 8.39 seconds to spawn the instance on the hypervisor.
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.929 243456 DEBUG nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.944 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.949 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:01 compute-0 nova_compute[243452]: 2026-02-28 10:14:01.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:02 compute-0 nova_compute[243452]: 2026-02-28 10:14:02.011 243456 INFO nova.compute.manager [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 11.07 seconds to build instance.
Feb 28 10:14:02 compute-0 nova_compute[243452]: 2026-02-28 10:14:02.028 243456 DEBUG oslo_concurrency.lockutils [None req-f4e1a52d-6e33-453c-ab29-baafc7ac98d7 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 310 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 178 op/s
Feb 28 10:14:02 compute-0 nova_compute[243452]: 2026-02-28 10:14:02.776 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:14:03 compute-0 ceph-mon[76304]: pgmap v1444: 305 pgs: 305 active+clean; 310 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 178 op/s
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.413 243456 DEBUG nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.413 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG oslo_concurrency.lockutils [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 DEBUG nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.414 243456 WARNING nova.compute.manager [req-ecea2efe-307e-4921-a58c-cb3de6c157a6 req-88e6ecdf-000b-4a6b-8d48-44e821e6f180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received unexpected event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 for instance with vm_state active and task_state None.
Feb 28 10:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.731 243456 DEBUG nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.732 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG oslo_concurrency.lockutils [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.733 243456 DEBUG nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.734 243456 WARNING nova.compute.manager [req-4560d5f8-ee7e-4069-819d-4ea932c163ea req-dbdc402b-93da-4c14-bbe2-80815bd20760 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state None.
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:03.901 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:03 compute-0 nova_compute[243452]: 2026-02-28 10:14:03.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:03.902 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:14:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 325 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Feb 28 10:14:05 compute-0 kernel: tap41441957-94 (unregistering): left promiscuous mode
Feb 28 10:14:05 compute-0 NetworkManager[49805]: <info>  [1772273645.0280] device (tap41441957-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:05 compute-0 ovn_controller[146846]: 2026-02-28T10:14:05Z|00627|binding|INFO|Releasing lport 41441957-9492-481d-847c-895c9fd2ef8f from this chassis (sb_readonly=0)
Feb 28 10:14:05 compute-0 ovn_controller[146846]: 2026-02-28T10:14:05Z|00628|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f down in Southbound
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 ovn_controller[146846]: 2026-02-28T10:14:05Z|00629|binding|INFO|Removing iface tap41441957-94 ovn-installed in OVS
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.042 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.045 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14dcb47c-c3fe-4bfc-aa8d-9d2420b398d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.047 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore
Feb 28 10:14:05 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 28 10:14:05 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000049.scope: Consumed 12.707s CPU time.
Feb 28 10:14:05 compute-0 systemd-machined[209480]: Machine qemu-81-instance-00000049 terminated.
Feb 28 10:14:05 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:05 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [NOTICE]   (304564) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:05 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [WARNING]  (304564) : Exiting Master process...
Feb 28 10:14:05 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [ALERT]    (304564) : Current worker (304566) exited with code 143 (Terminated)
Feb 28 10:14:05 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[304558]: [WARNING]  (304564) : All workers exited. Exiting... (0)
Feb 28 10:14:05 compute-0 systemd[1]: libpod-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope: Deactivated successfully.
Feb 28 10:14:05 compute-0 podman[306029]: 2026-02-28 10:14:05.211603403 +0000 UTC m=+0.044133422 container died e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c17d24a091a535ceaf27eeb52692c29dbc8d488028fb382fe11058cd494becd-merged.mount: Deactivated successfully.
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 podman[306029]: 2026-02-28 10:14:05.25489313 +0000 UTC m=+0.087423159 container cleanup e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:14:05 compute-0 systemd[1]: libpod-conmon-e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6.scope: Deactivated successfully.
Feb 28 10:14:05 compute-0 ceph-mon[76304]: pgmap v1445: 305 pgs: 305 active+clean; 325 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Feb 28 10:14:05 compute-0 podman[306064]: 2026-02-28 10:14:05.334684074 +0000 UTC m=+0.042684991 container remove e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.339 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f30a20e4-5c91-4f2b-aa2e-3349d0545a88]: (4, ('Sat Feb 28 10:14:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6)\ne11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6\nSat Feb 28 10:14:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6)\ne11a50e7ae0a990b783e06e952ba0129285f327233489b7123f101a832f1ddc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2402ec6c-c603-4282-8882-2228608e8fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.342 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:05 compute-0 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef409c92-8a92-4f5c-9b45-67d71f68ca53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[397691f9-20bd-4383-af23-9152f7c828a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.376 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76db14dd-54da-4196-8842-30c53d4f8ec1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.391 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0e7ec9-86f7-4f53-9a4b-01a0b22b5f8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513120, 'reachable_time': 35950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306084, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.397 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:05.397 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fa94ba09-acee-4f15-a0d9-b6fa26a25393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.408 243456 INFO nova.compute.manager [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Rescuing
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.409 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.409 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquired lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.410 243456 DEBUG nova.network.neutron [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.791 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance shutdown successfully after 13 seconds.
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.796 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.802 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.803 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-membe
r'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:13:50Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.804 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.805 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.805 243456 DEBUG os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41441957-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.815 243456 INFO os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.872 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 WARNING nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-unplugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state rebuilding.
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.873 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG oslo_concurrency.lockutils [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 DEBUG nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:05 compute-0 nova_compute[243452]: 2026-02-28 10:14:05.874 243456 WARNING nova.compute.manager [req-42713826-98af-42bb-b59a-b1ac145199f1 req-35f6a192-0333-496c-aa05-7eb365262c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state rebuilding.
Feb 28 10:14:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 326 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 286 op/s
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.155 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting instance files /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.156 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deletion of /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del complete
Feb 28 10:14:06 compute-0 podman[306105]: 2026-02-28 10:14:06.186236208 +0000 UTC m=+0.105996011 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:14:06 compute-0 podman[306104]: 2026-02-28 10:14:06.205173551 +0000 UTC m=+0.126604221 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.327 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.328 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating image(s)
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.360 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.401 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.439 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.447 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.534 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.535 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.536 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.536 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.564 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.572 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.834 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.897 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.980 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.981 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Ensure instance console log exists: /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.982 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.982 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.983 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.985 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start _get_guest_xml network_info=[{"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.989 243456 WARNING nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.996 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:06 compute-0 nova_compute[243452]: 2026-02-28 10:14:06.997 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.001 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.001 243456 DEBUG nova.virt.libvirt.host [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.002 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.002 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.003 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.004 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.005 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.006 243456 DEBUG nova.virt.hardware [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.006 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.026 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:07 compute-0 ceph-mon[76304]: pgmap v1446: 305 pgs: 305 active+clean; 326 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 286 op/s
Feb 28 10:14:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461921869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.607 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.632 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.637 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.894 243456 DEBUG nova.network.neutron [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:07 compute-0 nova_compute[243452]: 2026-02-28 10:14:07.932 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Releasing lock "refresh_cache-dc2dbab8-312e-4130-8141-d848beeb6bec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 312 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Feb 28 10:14:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1407093777' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.189 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.192 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-Ser
verDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:06Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.193 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.194 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.199 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <uuid>1e13ffbf-dba5-421b-afc3-84eb471e2d44</uuid>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <name>instance-00000049</name>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-933663289</nova:name>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:06</nova:creationTime>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <nova:port uuid="41441957-9492-481d-847c-895c9fd2ef8f">
Feb 28 10:14:08 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="serial">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="uuid">1e13ffbf-dba5-421b-afc3-84eb471e2d44</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk">
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config">
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:6c:7c:88"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <target dev="tap41441957-94"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/console.log" append="off"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:08 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:08 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:08 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:08 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:08 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.209 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Preparing to wait for external event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.210 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.210 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.211 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.213 243456 DEBUG nova.virt.libvirt.vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-Ser
verDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:06Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.214 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.215 243456 DEBUG nova.network.os_vif_util [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.216 243456 DEBUG os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.219 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.220 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.225 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41441957-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41441957-94, col_values=(('external_ids', {'iface-id': '41441957-9492-481d-847c-895c9fd2ef8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:7c:88', 'vm-uuid': '1e13ffbf-dba5-421b-afc3-84eb471e2d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:08 compute-0 NetworkManager[49805]: <info>  [1772273648.2304] manager: (tap41441957-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.237 243456 INFO os_vif [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')
Feb 28 10:14:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1461921869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1407093777' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.311 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.313 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.314 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:6c:7c:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.315 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Using config drive
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.345 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.357 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.365 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.397 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'keypairs' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:08 compute-0 nova_compute[243452]: 2026-02-28 10:14:08.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.028 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273634.027302, c4a13c84-8fca-43c8-87c3-fde9f5d1c031 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.030 243456 INFO nova.compute.manager [-] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] VM Stopped (Lifecycle Event)
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.067 243456 DEBUG nova.compute.manager [None req-f436d665-147f-4727-b2c9-f142de07389e - - - - - -] [instance: c4a13c84-8fca-43c8-87c3-fde9f5d1c031] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.180 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Creating config drive at /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.187 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv1ykfaco execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:09 compute-0 ceph-mon[76304]: pgmap v1447: 305 pgs: 305 active+clean; 312 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.347 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv1ykfaco" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.395 243456 DEBUG nova.storage.rbd_utils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.400 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.573 243456 DEBUG oslo_concurrency.processutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config 1e13ffbf-dba5-421b-afc3-84eb471e2d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.574 243456 INFO nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting local config drive /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44/disk.config because it was imported into RBD.
Feb 28 10:14:09 compute-0 kernel: tap41441957-94: entered promiscuous mode
Feb 28 10:14:09 compute-0 NetworkManager[49805]: <info>  [1772273649.6269] manager: (tap41441957-94): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Feb 28 10:14:09 compute-0 ovn_controller[146846]: 2026-02-28T10:14:09Z|00630|binding|INFO|Claiming lport 41441957-9492-481d-847c-895c9fd2ef8f for this chassis.
Feb 28 10:14:09 compute-0 ovn_controller[146846]: 2026-02-28T10:14:09Z|00631|binding|INFO|41441957-9492-481d-847c-895c9fd2ef8f: Claiming fa:16:3e:6c:7c:88 10.100.0.4
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:09 compute-0 ovn_controller[146846]: 2026-02-28T10:14:09Z|00632|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f ovn-installed in OVS
Feb 28 10:14:09 compute-0 ovn_controller[146846]: 2026-02-28T10:14:09Z|00633|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f up in Southbound
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.638 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.641 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.644 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:09 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:09 compute-0 systemd-machined[209480]: New machine qemu-84-instance-00000049.
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.662 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37e0968a-68ea-4696-909d-c5bd6b72194c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.664 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.665 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad6f3c9-d24c-44e8-b524-f761b12800fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97c8e85a-555a-4838-99f4-3d0940b8b55a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000049.
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.681 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[87616cba-20d7-4e25-a316-14aad18d9f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 systemd-udevd[306453]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8b2e9c-c1a2-4486-841e-3c67224e99a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 NetworkManager[49805]: <info>  [1772273649.7071] device (tap41441957-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:09 compute-0 NetworkManager[49805]: <info>  [1772273649.7080] device (tap41441957-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c324085-7c6a-45bf-8442-6a4400fa8076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.742 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[093b67b0-5e7d-4752-a27e-b6c69b6e9dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 NetworkManager[49805]: <info>  [1772273649.7435] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.772 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe1808-4d27-4684-b215-375a39ae4af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.776 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[910d825e-43cc-46b1-9df2-968a81b16b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 NetworkManager[49805]: <info>  [1772273649.8035] device (tap77a5b13a-e0): carrier: link connected
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.810 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7646e6e4-9d91-4a72-bf4b-57e39695c442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf13585-252d-4ab3-9787-2d0ba5b5181f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515708, 'reachable_time': 27002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306484, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.851 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[64c58558-ed4c-456b-88fa-abcdbd6e20e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515708, 'tstamp': 515708}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306485, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f362ff30-a8eb-4e13-8c8f-5053c017460a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515708, 'reachable_time': 27002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306486, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.904 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.916 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d848625-5597-4897-8c5d-ec4705e52f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d00f34-5915-49c4-bbd7-e325634c1582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.995 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.996 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:09.996 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:09.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:10 compute-0 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 10:14:10 compute-0 NetworkManager[49805]: <info>  [1772273650.0000] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.013 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:10 compute-0 ovn_controller[146846]: 2026-02-28T10:14:10Z|00634|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.019 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.022 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[727af06d-92f2-4117-b313-06c8c1202d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.025 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:10.027 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 304 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 260 op/s
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.328 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 1e13ffbf-dba5-421b-afc3-84eb471e2d44 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.329 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273650.3283272, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.330 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Started (Lifecycle Event)
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.371 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.378 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273650.329309, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.379 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Paused (Lifecycle Event)
Feb 28 10:14:10 compute-0 podman[306558]: 2026-02-28 10:14:10.400031002 +0000 UTC m=+0.061070078 container create 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.417 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:10 compute-0 systemd[1]: Started libpod-conmon-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope.
Feb 28 10:14:10 compute-0 podman[306558]: 2026-02-28 10:14:10.369873444 +0000 UTC m=+0.030912340 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:10 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:10 compute-0 nova_compute[243452]: 2026-02-28 10:14:10.475 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ac191ddc7bc426d6a4ce16e6d43bcc048e51a5a880fd81e5c01f4c30677ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:10 compute-0 podman[306558]: 2026-02-28 10:14:10.494425996 +0000 UTC m=+0.155464852 container init 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:14:10 compute-0 podman[306558]: 2026-02-28 10:14:10.499412487 +0000 UTC m=+0.160451343 container start 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:14:10 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : New worker (306580) forked
Feb 28 10:14:10 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : Loading success.
Feb 28 10:14:11 compute-0 ceph-mon[76304]: pgmap v1448: 305 pgs: 305 active+clean; 304 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 260 op/s
Feb 28 10:14:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Feb 28 10:14:12 compute-0 ovn_controller[146846]: 2026-02-28T10:14:12Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 10:14:12 compute-0 ovn_controller[146846]: 2026-02-28T10:14:12Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:43:e1 10.100.0.5
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG nova.compute.manager [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.844 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.845 243456 DEBUG oslo_concurrency.lockutils [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.845 243456 DEBUG nova.compute.manager [req-1ad64536-bfb8-4605-ba22-6877c3e1772b req-b1925e73-4b97-42aa-ac26-01e7edf2ccde 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Processing event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.846 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.850 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273652.8506882, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.851 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Resumed (Lifecycle Event)
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.856 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.859 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance spawned successfully.
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.860 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.885 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.891 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.892 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.892 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.893 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.893 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.894 243456 DEBUG nova.virt.libvirt.driver [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:14:12 compute-0 nova_compute[243452]: 2026-02-28 10:14:12.963 243456 DEBUG nova.compute.manager [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.021 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.021 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.022 243456 DEBUG nova.objects.instance [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.078 243456 DEBUG oslo_concurrency.lockutils [None req-8fc1fda8-42dc-4373-b569-4b381d6fbe50 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:13 compute-0 ceph-mon[76304]: pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Feb 28 10:14:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:13 compute-0 nova_compute[243452]: 2026-02-28 10:14:13.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 302 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 28 10:14:14 compute-0 nova_compute[243452]: 2026-02-28 10:14:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:14 compute-0 ovn_controller[146846]: 2026-02-28T10:14:14Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:b7:32 10.100.0.6
Feb 28 10:14:14 compute-0 ovn_controller[146846]: 2026-02-28T10:14:14Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:b7:32 10.100.0.6
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.246 243456 DEBUG nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.247 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.247 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.248 243456 DEBUG oslo_concurrency.lockutils [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.248 243456 DEBUG nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] No waiting events found dispatching network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.249 243456 WARNING nova.compute.manager [req-1a0cdea4-2c30-4a92-9ef8-cfd0edfcf6a3 req-8a22f47c-f129-4574-9ce5-c9853efa2cb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received unexpected event network-vif-plugged-41441957-9492-481d-847c-895c9fd2ef8f for instance with vm_state active and task_state None.
Feb 28 10:14:15 compute-0 ceph-mon[76304]: pgmap v1450: 305 pgs: 305 active+clean; 302 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.509 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.509 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.510 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.511 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.511 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.514 243456 INFO nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Terminating instance
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.515 243456 DEBUG nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:14:15 compute-0 kernel: tap41441957-94 (unregistering): left promiscuous mode
Feb 28 10:14:15 compute-0 NetworkManager[49805]: <info>  [1772273655.5583] device (tap41441957-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 ovn_controller[146846]: 2026-02-28T10:14:15Z|00635|binding|INFO|Releasing lport 41441957-9492-481d-847c-895c9fd2ef8f from this chassis (sb_readonly=0)
Feb 28 10:14:15 compute-0 ovn_controller[146846]: 2026-02-28T10:14:15Z|00636|binding|INFO|Setting lport 41441957-9492-481d-847c-895c9fd2ef8f down in Southbound
Feb 28 10:14:15 compute-0 ovn_controller[146846]: 2026-02-28T10:14:15Z|00637|binding|INFO|Removing iface tap41441957-94 ovn-installed in OVS
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.576 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:7c:88 10.100.0.4'], port_security=['fa:16:3e:6c:7c:88 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e13ffbf-dba5-421b-afc3-84eb471e2d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=41441957-9492-481d-847c-895c9fd2ef8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.579 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 41441957-9492-481d-847c-895c9fd2ef8f in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.581 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dee43a-9271-463b-aa1f-48d9c084f4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.585 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 28 10:14:15 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000049.scope: Consumed 3.363s CPU time.
Feb 28 10:14:15 compute-0 systemd-machined[209480]: Machine qemu-84-instance-00000049 terminated.
Feb 28 10:14:15 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:15 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [NOTICE]   (306578) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:15 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [WARNING]  (306578) : Exiting Master process...
Feb 28 10:14:15 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [ALERT]    (306578) : Current worker (306580) exited with code 143 (Terminated)
Feb 28 10:14:15 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[306574]: [WARNING]  (306578) : All workers exited. Exiting... (0)
Feb 28 10:14:15 compute-0 systemd[1]: libpod-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope: Deactivated successfully.
Feb 28 10:14:15 compute-0 podman[306614]: 2026-02-28 10:14:15.70013213 +0000 UTC m=+0.044939515 container died 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7ac191ddc7bc426d6a4ce16e6d43bcc048e51a5a880fd81e5c01f4c30677ae-merged.mount: Deactivated successfully.
Feb 28 10:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:15 compute-0 podman[306614]: 2026-02-28 10:14:15.742519992 +0000 UTC m=+0.087327397 container cleanup 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:15 compute-0 systemd[1]: libpod-conmon-29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa.scope: Deactivated successfully.
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.761 243456 INFO nova.virt.libvirt.driver [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Instance destroyed successfully.
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.762 243456 DEBUG nova.objects.instance [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid 1e13ffbf-dba5-421b-afc3-84eb471e2d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.777 243456 DEBUG nova.virt.libvirt.vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-933663289',display_name='tempest-ServerDiskConfigTestJSON-server-933663289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-933663289',id=73,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-nc00nqtf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:13Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=1e13ffbf-dba5-421b-afc3-84eb471e2d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.779 243456 DEBUG nova.network.os_vif_util [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "41441957-9492-481d-847c-895c9fd2ef8f", "address": "fa:16:3e:6c:7c:88", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41441957-94", "ovs_interfaceid": "41441957-9492-481d-847c-895c9fd2ef8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.780 243456 DEBUG nova.network.os_vif_util [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.781 243456 DEBUG os_vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.783 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41441957-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.788 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.790 243456 INFO os_vif [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:7c:88,bridge_name='br-int',has_traffic_filtering=True,id=41441957-9492-481d-847c-895c9fd2ef8f,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41441957-94')
Feb 28 10:14:15 compute-0 podman[306648]: 2026-02-28 10:14:15.838088309 +0000 UTC m=+0.065031830 container remove 29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0ff094-c8d7-471d-b09b-4e709169c42c]: (4, ('Sat Feb 28 10:14:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa)\n29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa\nSat Feb 28 10:14:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa)\n29e72d768733d9be7118506f6db566a8054604e11eef514193bbbee29e842bfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf72fdd-197c-4b24-b823-3414adcc7028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.845 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 10:14:15 compute-0 nova_compute[243452]: 2026-02-28 10:14:15.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7af3b6-2121-43ca-86fc-b90497a7cf96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d96a9c-6e97-4a2a-8667-f7c177d5788b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.870 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe4c46a-815c-415f-8001-11790bff16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.884 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a15d5bcb-af4d-49a3-a34a-d192c90c35c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515701, 'reachable_time': 22950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306683, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.887 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:15.887 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4d786a2e-c58f-4e08-bdb8-f46b02555431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.059 243456 INFO nova.virt.libvirt.driver [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deleting instance files /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.061 243456 INFO nova.virt.libvirt.driver [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deletion of /var/lib/nova/instances/1e13ffbf-dba5-421b-afc3-84eb471e2d44_del complete
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.120 243456 INFO nova.compute.manager [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.121 243456 DEBUG oslo.service.loopingcall [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.122 243456 DEBUG nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.122 243456 DEBUG nova.network.neutron [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:14:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 349 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.6 MiB/s wr, 321 op/s
Feb 28 10:14:16 compute-0 nova_compute[243452]: 2026-02-28 10:14:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:14:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 23K writes, 95K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 23K writes, 8207 syncs, 2.91 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 42.16 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4559 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.182 243456 DEBUG nova.network.neutron [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.199 243456 INFO nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Took 1.08 seconds to deallocate network for instance.
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.239 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.240 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.283 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.311 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.312 243456 DEBUG nova.compute.provider_tree [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:17 compute-0 ceph-mon[76304]: pgmap v1451: 305 pgs: 305 active+clean; 349 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.6 MiB/s wr, 321 op/s
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.336 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.366 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.439 243456 DEBUG oslo_concurrency.processutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247075934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.975 243456 DEBUG oslo_concurrency.processutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:17 compute-0 nova_compute[243452]: 2026-02-28 10:14:17.983 243456 DEBUG nova.compute.provider_tree [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.006 243456 DEBUG nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.040 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.083 243456 INFO nova.scheduler.client.report [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance 1e13ffbf-dba5-421b-afc3-84eb471e2d44
Feb 28 10:14:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 344 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.0 MiB/s wr, 276 op/s
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.166 243456 DEBUG oslo_concurrency.lockutils [None req-34eb6a08-6c57-41ed-9d25-a7a608baf6c4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "1e13ffbf-dba5-421b-afc3-84eb471e2d44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.332 243456 DEBUG nova.compute.manager [req-f2008e52-0014-45e6-8cc0-9dd7bca5d9cd req-c481a3b3-c57f-4d21-85cc-c618d1144b9a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Received event network-vif-deleted-41441957-9492-481d-847c-895c9fd2ef8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/247075934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.491 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:14:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.708 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.708 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.925 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.927 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.928 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:18 compute-0 nova_compute[243452]: 2026-02-28 10:14:18.965 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.036 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.037 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.045 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.046 243456 INFO nova.compute.claims [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.053 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.191 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:19 compute-0 ceph-mon[76304]: pgmap v1452: 305 pgs: 305 active+clean; 344 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.0 MiB/s wr, 276 op/s
Feb 28 10:14:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701244892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.763 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.770 243456 DEBUG nova.compute.provider_tree [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.792 243456 DEBUG nova.scheduler.client.report [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.816 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.816 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.818 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.825 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.826 243456 INFO nova.compute.claims [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.882 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.883 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.904 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:14:19 compute-0 nova_compute[243452]: 2026-02-28 10:14:19.922 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.020 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.021 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.022 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating image(s)
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.052 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.088 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.123 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.127 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.164 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 328 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 249 op/s
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.203 243456 DEBUG nova.policy [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.209 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.211 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.212 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.212 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.242 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.247 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3701244892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.489 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.522 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.523 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.551 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.558 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.619 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.687 243456 DEBUG nova.objects.instance [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ensure instance console log exists: /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.701 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.702 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.702 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478028342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.717 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Successfully created port: 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.723 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.727 243456 DEBUG nova.compute.provider_tree [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.741 243456 DEBUG nova.scheduler.client.report [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:20 compute-0 kernel: tapa2f66c0b-78 (unregistering): left promiscuous mode
Feb 28 10:14:20 compute-0 NetworkManager[49805]: <info>  [1772273660.7565] device (tapa2f66c0b-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.760 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.761 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:14:20 compute-0 ovn_controller[146846]: 2026-02-28T10:14:20Z|00638|binding|INFO|Releasing lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 from this chassis (sb_readonly=0)
Feb 28 10:14:20 compute-0 ovn_controller[146846]: 2026-02-28T10:14:20Z|00639|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 down in Southbound
Feb 28 10:14:20 compute-0 ovn_controller[146846]: 2026-02-28T10:14:20Z|00640|binding|INFO|Removing iface tapa2f66c0b-78 ovn-installed in OVS
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.769 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.770 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.770 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.771 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.773 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be6830ed-222e-4fd8-86af-dbeec5245246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.802 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.811 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.811 243456 INFO nova.compute.claims [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.813 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cfd7fd-e3c6-441c-b229-8892354c2a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.816 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a93c089-1140-4d8d-be1a-ee872b0bf3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 28 10:14:20 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004b.scope: Consumed 12.735s CPU time.
Feb 28 10:14:20 compute-0 systemd-machined[209480]: Machine qemu-83-instance-0000004b terminated.
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f079a522-a1cc-4c62-8673-da438a386f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b233421f-ad01-4f9e-afa2-09355039ae03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306929, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c00bad-aab3-4b2f-b5e4-34ff34760d03]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306930, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306930, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.854 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.859 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.859 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.860 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:20.860 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.865 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.865 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.886 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:14:20 compute-0 nova_compute[243452]: 2026-02-28 10:14:20.911 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.010 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.012 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.012 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating image(s)
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.045 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.083 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.113 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.118 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.149 243456 DEBUG nova.policy [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.160 243456 DEBUG nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.161 243456 DEBUG oslo_concurrency.lockutils [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.162 243456 DEBUG nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.162 243456 WARNING nova.compute.manager [req-40fa971b-3862-4c93-a5c9-264a19e7bc58 req-c7730c54-2ec8-480f-8fdd-53a2aa8c4bc2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state rescuing.
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.178 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.211 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.212 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.212 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.213 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.250 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.254 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659373007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.281 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:21 compute-0 ceph-mon[76304]: pgmap v1453: 305 pgs: 305 active+clean; 328 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 249 op/s
Feb 28 10:14:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2478028342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1659373007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.372 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.373 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.382 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.382 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.457 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Successfully updated port: 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.465 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.538 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.587 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance shutdown successfully after 13 seconds.
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.623 243456 DEBUG nova.objects.instance [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.631 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.632 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'numa_topology' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.648 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.650 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Ensure instance console log exists: /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.651 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.659 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Attempting rescue
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.660 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.670 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.670 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating image(s)
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.690 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.693 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.731 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.756 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.760 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423912894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.792 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.796 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.802 243456 DEBUG nova.compute.provider_tree [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.817 243456 DEBUG nova.scheduler.client.report [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.836 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.836 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.844 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3700MB free_disk=59.891969472169876GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.848 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.848 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.849 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.849 243456 DEBUG oslo_concurrency.lockutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.869 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.872 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.903 243456 DEBUG nova.compute.manager [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-changed-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.903 243456 DEBUG nova.compute.manager [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Refreshing instance network info cache due to event network-changed-0226697b-95b2-4303-aa60-b98eb0bb4cd9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.904 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.905 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.905 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.944 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.965 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance dc2dbab8-312e-4130-8141-d848beeb6bec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.991 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ba33446e-fcd5-454c-bc8c-79a367002d57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 32fe69ba-ea8d-411e-8917-de872b62b8b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b883c1a1-cf01-434d-8258-24ca193a2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:14:21 compute-0 nova_compute[243452]: 2026-02-28 10:14:21.992 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.064 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.065 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.066 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating image(s)
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.090 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.115 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.140 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.144 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 322 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 253 op/s
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.183 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.185 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'migration_context' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.195 243456 DEBUG nova.policy [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14b2d28379164786ad68563acb83a50a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd70835696bf4e12a062516e9de5527d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.202 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.203 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start _get_guest_xml network_info=[{"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.204 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.219 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.220 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.221 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.222 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.246 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.250 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b883c1a1-cf01-434d-8258-24ca193a2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.282 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.309 243456 WARNING nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.313 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.314 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.316 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.libvirt.host [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.317 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.318 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.319 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.320 243456 DEBUG nova.virt.hardware [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.320 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.338 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/423912894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:22 compute-0 ceph-mon[76304]: pgmap v1454: 305 pgs: 305 active+clean; 322 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 253 op/s
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.461 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b883c1a1-cf01-434d-8258-24ca193a2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.532 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.618 243456 DEBUG nova.objects.instance [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid b883c1a1-cf01-434d-8258-24ca193a2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.633 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Ensure instance console log exists: /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.634 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.635 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.643 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Successfully created port: 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:14:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701906499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:14:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.6 total, 600.0 interval
                                           Cumulative writes: 26K writes, 103K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 26K writes, 9226 syncs, 2.87 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 12K writes, 48K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.98 MB, 0.08 MB/s
                                           Interval WAL: 12K writes, 5009 syncs, 2.49 writes per sync, written: 0.05 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.848 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.855 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.871 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1146444698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.894 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.895 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.938 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:14:22 compute-0 nova_compute[243452]: 2026-02-28 10:14:22.939 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.124 243456 DEBUG nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG oslo_concurrency.lockutils [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.125 243456 DEBUG nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.126 243456 WARNING nova.compute.manager [req-1c95ed0b-52e2-4f38-b6f7-1d69675d84a5 req-72dd9fc9-0cb8-4b46-b47c-0d677d1bb255 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state active and task_state rescuing.
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.201 243456 DEBUG nova.network.neutron [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.228 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.228 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance network_info: |[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.229 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.229 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Refreshing network info cache for port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.233 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start _get_guest_xml network_info=[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.235 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Successfully created port: aa9724a7-fad1-4968-a1b0-0d8182007723 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.246 243456 WARNING nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.251 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.252 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.257 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.258 243456 DEBUG nova.virt.libvirt.host [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.258 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.259 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.260 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.260 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.261 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.262 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.263 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.263 243456 DEBUG nova.virt.hardware [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.269 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3701906499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1146444698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782364055' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.467 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.469 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.511 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Successfully updated port: 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:14:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.549 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.549 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.550 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.733 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627594311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.791 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.816 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.820 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.940 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.941 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.941 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:14:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.977 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.977 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:14:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550592236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:23 compute-0 nova_compute[243452]: 2026-02-28 10:14:23.978 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.001 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.003 243456 DEBUG nova.virt.libvirt.vif [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:01Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.004 243456 DEBUG nova.network.os_vif_util [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "vif_mac": "fa:16:3e:39:b7:32"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.004 243456 DEBUG nova.network.os_vif_util [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.006 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.021 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <uuid>dc2dbab8-312e-4130-8141-d848beeb6bec</uuid>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <name>instance-0000004b</name>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1249757035</nova:name>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:22</nova:creationTime>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:user uuid="ec5caafc16ec43a493f7d553353a27c3">tempest-ServerRescueNegativeTestJSON-338743494-project-member</nova:user>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:project uuid="809bf856030f4316b385ba1c02291ca7">tempest-ServerRescueNegativeTestJSON-338743494</nova:project>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:port uuid="a2f66c0b-78f3-49cb-929b-5e9b4072beb0">
Feb 28 10:14:24 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="serial">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="uuid">dc2dbab8-312e-4130-8141-d848beeb6bec</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.rescue">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="vdb" bus="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:39:b7:32"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="tapa2f66c0b-78"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/console.log" append="off"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:24 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:24 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.030 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.
Feb 28 10:14:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 365 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Feb 28 10:14:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935120704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.368 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.369 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.369 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.370 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.379 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Successfully updated port: aa9724a7-fad1-4968-a1b0-0d8182007723 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:14:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1782364055' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/627594311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2550592236' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:24 compute-0 ceph-mon[76304]: pgmap v1455: 305 pgs: 305 active+clean; 365 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Feb 28 10:14:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3935120704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.395 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.397 243456 DEBUG nova.virt.libvirt.vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:19Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.398 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.399 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.400 243456 DEBUG nova.objects.instance [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.424 243456 DEBUG nova.compute.manager [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG nova.compute.manager [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing instance network info cache due to event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.425 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.426 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.429 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.431 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <uuid>ba33446e-fcd5-454c-bc8c-79a367002d57</uuid>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <name>instance-0000004c</name>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-877654664</nova:name>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:23</nova:creationTime>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <nova:port uuid="0226697b-95b2-4303-aa60-b98eb0bb4cd9">
Feb 28 10:14:24 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="serial">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="uuid">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:24 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:70:1d:4f"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <target dev="tap0226697b-95"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log" append="off"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:24 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:24 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:24 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:24 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:24 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.432 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Preparing to wait for external event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.432 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.433 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.433 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.434 243456 DEBUG nova.virt.libvirt.vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:19Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.434 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.435 243456 DEBUG nova.network.os_vif_util [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.435 243456 DEBUG os_vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.442 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.443 243456 DEBUG nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] No VIF found with MAC fa:16:3e:39:b7:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.444 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Using config drive
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.466 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.477 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0226697b-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.478 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0226697b-95, col_values=(('external_ids', {'iface-id': '0226697b-95b2-4303-aa60-b98eb0bb4cd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:1d:4f', 'vm-uuid': 'ba33446e-fcd5-454c-bc8c-79a367002d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:24 compute-0 NetworkManager[49805]: <info>  [1772273664.4807] manager: (tap0226697b-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.485 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.486 243456 INFO os_vif [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.488 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.517 243456 DEBUG nova.objects.instance [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'keypairs' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.537 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:70:1d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.538 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Using config drive
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.559 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.661 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.839 243456 DEBUG nova.network.neutron [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.872 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.873 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance network_info: |[{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.876 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start _get_guest_xml network_info=[{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.881 243456 WARNING nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.886 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.886 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.host [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.890 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.891 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.892 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.893 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.893 243456 DEBUG nova.virt.hardware [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.895 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.919 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Creating config drive at /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue
Feb 28 10:14:24 compute-0 nova_compute[243452]: 2026-02-28 10:14:24.924 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppcyagode execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.055 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmppcyagode" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.082 243456 DEBUG nova.storage.rbd_utils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] rbd image dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.086 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.229 243456 DEBUG oslo_concurrency.processutils [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue dc2dbab8-312e-4130-8141-d848beeb6bec_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.230 243456 INFO nova.virt.libvirt.driver [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting local config drive /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec/disk.config.rescue because it was imported into RBD.
Feb 28 10:14:25 compute-0 kernel: tapa2f66c0b-78: entered promiscuous mode
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.2730] manager: (tapa2f66c0b-78): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00641|binding|INFO|Claiming lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for this chassis.
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00642|binding|INFO|a2f66c0b-78f3-49cb-929b-5e9b4072beb0: Claiming fa:16:3e:39:b7:32 10.100.0.6
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.292 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.295 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 bound to our chassis
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.299 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00643|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 ovn-installed in OVS
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00644|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 up in Southbound
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 systemd-machined[209480]: New machine qemu-85-instance-0000004b.
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe3f34-7a78-4043-92a4-41799b098d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-0000004b.
Feb 28 10:14:25 compute-0 systemd-udevd[307678]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.344 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d27fa713-c159-4a7e-9f22-e773644875e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.348 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[83c674a3-c9a7-48d5-8676-b077c773db4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.3558] device (tapa2f66c0b-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.3573] device (tapa2f66c0b-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.373 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46eb5b74-6e3e-4494-94cf-ac2690e7a863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.378 243456 DEBUG nova.network.neutron [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[535eae58-5366-4d7d-b49c-124940d88da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307688, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.394 243456 DEBUG oslo_concurrency.lockutils [req-16f45327-abfb-4113-89e2-7c2e2dc1a038 req-0acc1a20-37bc-4813-828e-123510aca461 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.395 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.395 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.403 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a53d19e-bf80-4e2b-aa01-9172bdd23649]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307690, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307690, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.405 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.414 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.416 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.440 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating config drive at /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.444 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgw8_dt31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727041209' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.483 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updated VIF entry in instance network info cache for port 0226697b-95b2-4303-aa60-b98eb0bb4cd9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.484 243456 DEBUG nova.network.neutron [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.486 243456 DEBUG nova.compute.manager [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG nova.compute.manager [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.487 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.489 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/727041209' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.519 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.522 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.546 243456 DEBUG oslo_concurrency.lockutils [req-9cb1bbc3-ddd8-4b85-b4fe-fd1d8546b3ad req-5cd455a3-601c-49df-9cfc-e8a843cb4d28 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ba33446e-fcd5-454c-bc8c-79a367002d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.584 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgw8_dt31" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.618 243456 DEBUG nova.storage.rbd_utils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.626 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.653 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for dc2dbab8-312e-4130-8141-d848beeb6bec due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273665.7626216, dc2dbab8-312e-4130-8141-d848beeb6bec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.763 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Resumed (Lifecycle Event)
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.768 243456 DEBUG nova.compute.manager [None req-88a977ea-7783-43ce-a18e-c44b074efb47 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.785 243456 DEBUG oslo_concurrency.processutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.785 243456 INFO nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting local config drive /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config because it was imported into RBD.
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.807 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.816 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:25 compute-0 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.8369] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Feb 28 10:14:25 compute-0 systemd-udevd[307680]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00645|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00646|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00647|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 10:14:25 compute-0 ovn_controller[146846]: 2026-02-28T10:14:25Z|00648|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.857 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.858 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.859 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.8614] device (tap0226697b-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.863 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.864 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273665.7630217, dc2dbab8-312e-4130-8141-d848beeb6bec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.864 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Started (Lifecycle Event)
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.8656] device (tap0226697b-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.871 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49cea0db-3ee5-4d54-ab63-8624300c162c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.872 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.873 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f6ff8d-5971-413e-a0e4-719846b0cdda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.874 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[158150f9-bd1b-4763-b4e3-f20fdebf1039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 systemd-machined[209480]: New machine qemu-86-instance-0000004c.
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.888 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.891 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d10639bb-ac4e-4ed0-85fb-8e4b80f2f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 nova_compute[243452]: 2026-02-28 10:14:25.893 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:25 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004c.
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.912 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b13476f-8649-4185-acc3-c476491e91cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.946 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f05bb0-220f-4b6f-8b16-3394427f4943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 NetworkManager[49805]: <info>  [1772273665.9534] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c7e20-a71c-418e-8233-b8a0304e074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.993 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f26c7d3-03d5-4575-ac5b-b79075ca8937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:25.999 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f98a2b-5fe8-45e8-9926-9d31decfc2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 NetworkManager[49805]: <info>  [1772273666.0249] device (tap77a5b13a-e0): carrier: link connected
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.036 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af2b08cf-db17-4a2f-8f38-30b4f79db89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[515d21d5-1ec8-4972-b852-9878335eb287]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517331, 'reachable_time': 27035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307878, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644307750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.072 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e87b5e59-084e-4e56-a368-032f1393ac33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517331, 'tstamp': 517331}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307880, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.079 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.080 243456 DEBUG nova.virt.libvirt.vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:20Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.080 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.081 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.082 243456 DEBUG nova.objects.instance [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf7b6e0-69f3-4e8a-8c82-aefc5666492d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517331, 'reachable_time': 27035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307882, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.098 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <uuid>32fe69ba-ea8d-411e-8917-de872b62b8b0</uuid>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <name>instance-0000004d</name>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1765724638</nova:name>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:24</nova:creationTime>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <nova:port uuid="6b5acb8c-5d09-42b0-9c1d-b51be18712fe">
Feb 28 10:14:26 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="serial">32fe69ba-ea8d-411e-8917-de872b62b8b0</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="uuid">32fe69ba-ea8d-411e-8917-de872b62b8b0</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/32fe69ba-ea8d-411e-8917-de872b62b8b0_disk">
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config">
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:88:90:03"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <target dev="tap6b5acb8c-5d"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/console.log" append="off"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:26 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:26 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:26 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:26 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:26 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Preparing to wait for external event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.099 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.100 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.100 243456 DEBUG nova.virt.libvirt.vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:20Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.101 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.101 243456 DEBUG nova.network.os_vif_util [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.102 243456 DEBUG os_vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.103 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.103 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.105 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b5acb8c-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.106 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b5acb8c-5d, col_values=(('external_ids', {'iface-id': '6b5acb8c-5d09-42b0-9c1d-b51be18712fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:90:03', 'vm-uuid': '32fe69ba-ea8d-411e-8917-de872b62b8b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:26 compute-0 NetworkManager[49805]: <info>  [1772273666.1088] manager: (tap6b5acb8c-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.112 243456 INFO os_vif [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d')
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.129 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29cd9192-9ec7-48a9-8ab6-58ca3d731f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.161 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.161 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.162 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:88:90:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.164 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Using config drive
Feb 28 10:14:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 11 MiB/s wr, 306 op/s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ece7e3ea-8aa2-4187-8890-83de97215982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 NetworkManager[49805]: <info>  [1772273666.1974] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Feb 28 10:14:26 compute-0 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.206 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:26 compute-0 ovn_controller[146846]: 2026-02-28T10:14:26Z|00649|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.226 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.225 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8785f08b-7820-47a7-9221-7ea661563a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.230 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:26.233 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.250 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [{"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.285 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.285 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.286 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.391 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273666.390876, ba33446e-fcd5-454c-bc8c-79a367002d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.391 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Started (Lifecycle Event)
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.414 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.419 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273666.39205, ba33446e-fcd5-454c-bc8c-79a367002d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Paused (Lifecycle Event)
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.442 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.445 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.468 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2644307750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:26 compute-0 ceph-mon[76304]: pgmap v1456: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 11 MiB/s wr, 306 op/s
Feb 28 10:14:26 compute-0 podman[307980]: 2026-02-28 10:14:26.59953232 +0000 UTC m=+0.056418447 container create 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:14:26 compute-0 systemd[1]: Started libpod-conmon-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope.
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.641 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Creating config drive at /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.649 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbkkcx0bf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750aff55c3900492540f27eedaa3917fbcf26f3c02e0a8efc1a0b07485dba889/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:26 compute-0 podman[307980]: 2026-02-28 10:14:26.571957355 +0000 UTC m=+0.028843512 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:26 compute-0 podman[307980]: 2026-02-28 10:14:26.677588625 +0000 UTC m=+0.134474812 container init 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:14:26 compute-0 podman[307980]: 2026-02-28 10:14:26.68382117 +0000 UTC m=+0.140707317 container start 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:14:26 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : New worker (308005) forked
Feb 28 10:14:26 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : Loading success.
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.805 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbkkcx0bf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.831 243456 DEBUG nova.storage.rbd_utils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.835 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.882 243456 DEBUG nova.network.neutron [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.951 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.952 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance network_info: |[{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.957 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start _get_guest_xml network_info=[{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.962 243456 WARNING nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.969 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.970 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.973 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.973 243456 DEBUG nova.virt.libvirt.host [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.974 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.974 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.975 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.976 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.977 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.977 243456 DEBUG nova.virt.hardware [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:26 compute-0 nova_compute[243452]: 2026-02-28 10:14:26.982 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.016 243456 DEBUG oslo_concurrency.processutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config 32fe69ba-ea8d-411e-8917-de872b62b8b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.017 243456 INFO nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deleting local config drive /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0/disk.config because it was imported into RBD.
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.021 243456 DEBUG nova.compute.manager [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.022 243456 DEBUG oslo_concurrency.lockutils [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.023 243456 DEBUG nova.compute.manager [req-5542d08b-ce3a-4516-af9d-1e3c78590db1 req-5ed42ef6-a351-4782-be9e-0b060119e481 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Processing event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.023 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.029 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.0286882, ba33446e-fcd5-454c-bc8c-79a367002d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Resumed (Lifecycle Event)
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.032 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.040 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance spawned successfully.
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.041 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.059 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:27 compute-0 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 10:14:27 compute-0 systemd-udevd[307863]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.0648] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.066 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 ovn_controller[146846]: 2026-02-28T10:14:27Z|00650|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 10:14:27 compute-0 ovn_controller[146846]: 2026-02-28T10:14:27Z|00651|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.074 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.074 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.075 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.076 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.076 243456 DEBUG nova.virt.libvirt.driver [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.0809] device (tap6b5acb8c-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.0826] device (tap6b5acb8c-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.082 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.083 243456 DEBUG nova.network.neutron [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.085 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.090 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.092 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 bound to our chassis
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.094 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:27 compute-0 systemd-machined[209480]: New machine qemu-87-instance-0000004d.
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.106 243456 DEBUG oslo_concurrency.lockutils [req-9660b201-fb75-465b-a3e3-c35f3abe3307 req-e8e715a2-0de3-4938-a8d1-2234f70605c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.110 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a0367f14-0b50-4e18-890c-4a5cd8b51dbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.112 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269fae56-41 in ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.114 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269fae56-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.115 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8e83ae-38fe-4acf-9a67-f3e0491d66cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_controller[146846]: 2026-02-28T10:14:27Z|00652|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe ovn-installed in OVS
Feb 28 10:14:27 compute-0 ovn_controller[146846]: 2026-02-28T10:14:27Z|00653|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe up in Southbound
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7716475-f69d-46d1-8a54-09612ccd61d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004d.
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.129 243456 INFO nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 7.11 seconds to spawn the instance on the hypervisor.
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.129 243456 DEBUG nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.134 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3623756c-d38e-4c57-b0fa-4c7ebc706dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.155 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b33215d-8a83-4aaf-92c3-53a82fde8d1a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.184 243456 INFO nova.compute.manager [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 8.19 seconds to build instance.
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd3d3a8-d884-4104-8ae3-27edda837031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f9515-7752-4494-b30c-bc9890cbf164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.1913] manager: (tap269fae56-40): new Veth device (/org/freedesktop/NetworkManager/Devices/295)
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.206 243456 DEBUG oslo_concurrency.lockutils [None req-ccf60478-ebe0-4263-9640-77fe062df9ba c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.232 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[892f2cd2-25ec-4fce-bd84-bac1d7d6c14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.236 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[638d633f-d666-4e8e-9033-6f267833394c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.2566] device (tap269fae56-40): carrier: link connected
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[efdcb874-58e9-4a66-a039-c80ed494c1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18245f5d-746e-4808-8e0f-065e838e12e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517454, 'reachable_time': 24789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308101, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.294 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5611b6-cac3-48e0-b303-c5d8cd20745c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:282c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517454, 'tstamp': 517454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308102, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f26a80-a328-46fe-a9f0-16f4489e1a12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517454, 'reachable_time': 24789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308103, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.341 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78ac8bc2-5a04-4686-8b10-8ee65c2094f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.402 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a467cd3-f18f-4cac-bef8-de3714eded04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.405 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269fae56-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 NetworkManager[49805]: <info>  [1772273667.4076] manager: (tap269fae56-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Feb 28 10:14:27 compute-0 kernel: tap269fae56-40: entered promiscuous mode
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.409 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269fae56-40, col_values=(('external_ids', {'iface-id': '7bc082a7-4576-4494-b633-962a40b4d816'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 ovn_controller[146846]: 2026-02-28T10:14:27Z|00654|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.412 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f753990-b2c2-4155-84a2-94ebffd1d6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.414 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:27.416 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'env', 'PROCESS_TAG=haproxy-269fae56-42c3-478e-88d5-36164c0a6ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269fae56-42c3-478e-88d5-36164c0a6ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.570 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.5700831, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.570 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Started (Lifecycle Event)
Feb 28 10:14:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482838717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.599 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.604 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273667.5727224, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.604 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Paused (Lifecycle Event)
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.611 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.639 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3482838717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.648 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.679 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:27 compute-0 nova_compute[243452]: 2026-02-28 10:14:27.712 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:27 compute-0 podman[308195]: 2026-02-28 10:14:27.845317169 +0000 UTC m=+0.115878269 container create 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:14:27 compute-0 podman[308195]: 2026-02-28 10:14:27.75784011 +0000 UTC m=+0.028401240 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:27 compute-0 systemd[1]: Started libpod-conmon-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope.
Feb 28 10:14:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b749c5a528cfce51b141fd23aad936756a87d85288db6d59ade46c812befa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:27 compute-0 podman[308195]: 2026-02-28 10:14:27.943354826 +0000 UTC m=+0.213915936 container init 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:14:27 compute-0 podman[308195]: 2026-02-28 10:14:27.951117534 +0000 UTC m=+0.221678634 container start 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:27 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : New worker (308233) forked
Feb 28 10:14:27 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : Loading success.
Feb 28 10:14:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.6 MiB/s wr, 230 op/s
Feb 28 10:14:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966840864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.244 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.245 243456 DEBUG nova.virt.libvirt.vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1601523722',display_name='tempest-ServerActionsTestOtherA-server-1601523722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1601523722',id=78,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeB4M8j3RPMEGsTEupU809MpDMu1lONxa3GM96jOaKy7lCnQVg4MzBbpF5eLhYMsfAQf+axdx0pdKDPLAAkphsN2WtFcI9X16V02fEsKKASEotygshJqgIA8eut813xpw==',key_name='tempest-keypair-127070709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-by9e4o0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14b2d28379164786ad68563acb83a50a',uuid=b883c1a1-cf01-434d-8258-24ca193a2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.245 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.246 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.248 243456 DEBUG nova.objects.instance [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid b883c1a1-cf01-434d-8258-24ca193a2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.269 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <uuid>b883c1a1-cf01-434d-8258-24ca193a2683</uuid>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <name>instance-0000004e</name>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherA-server-1601523722</nova:name>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:26</nova:creationTime>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <nova:port uuid="aa9724a7-fad1-4968-a1b0-0d8182007723">
Feb 28 10:14:28 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="serial">b883c1a1-cf01-434d-8258-24ca193a2683</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="uuid">b883c1a1-cf01-434d-8258-24ca193a2683</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b883c1a1-cf01-434d-8258-24ca193a2683_disk">
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b883c1a1-cf01-434d-8258-24ca193a2683_disk.config">
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:28 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c4:71:d6"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <target dev="tapaa9724a7-fa"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/console.log" append="off"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:28 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:28 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:28 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:28 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:28 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.270 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Preparing to wait for external event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.271 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.272 243456 DEBUG nova.virt.libvirt.vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1601523722',display_name='tempest-ServerActionsTestOtherA-server-1601523722',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1601523722',id=78,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeB4M8j3RPMEGsTEupU809MpDMu1lONxa3GM96jOaKy7lCnQVg4MzBbpF5eLhYMsfAQf+axdx0pdKDPLAAkphsN2WtFcI9X16V02fEsKKASEotygshJqgIA8eut813xpw==',key_name='tempest-keypair-127070709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-by9e4o0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14b2d28379164786ad68563acb83a50a',uuid=b883c1a1-cf01-434d-8258-24ca193a2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.272 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.273 243456 DEBUG nova.network.os_vif_util [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.274 243456 DEBUG os_vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.279 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa9724a7-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.279 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa9724a7-fa, col_values=(('external_ids', {'iface-id': 'aa9724a7-fad1-4968-a1b0-0d8182007723', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:71:d6', 'vm-uuid': 'b883c1a1-cf01-434d-8258-24ca193a2683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:28 compute-0 NetworkManager[49805]: <info>  [1772273668.2822] manager: (tapaa9724a7-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.290 243456 INFO os_vif [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa')
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.339 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:c4:71:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.340 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Using config drive
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.360 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.411 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.411 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 WARNING nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state rescued and task_state None.
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.412 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG oslo_concurrency.lockutils [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 DEBUG nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.413 243456 WARNING nova.compute.manager [req-bf795205-7a84-4cd0-99e8-43fe5672f21f req-ff1134a8-fc70-4054-9f24-6a38c79eca90 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state rescued and task_state None.
Feb 28 10:14:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:28 compute-0 ceph-mon[76304]: pgmap v1457: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.6 MiB/s wr, 230 op/s
Feb 28 10:14:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1966840864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.856 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Creating config drive at /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config
Feb 28 10:14:28 compute-0 nova_compute[243452]: 2026-02-28 10:14:28.865 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp0o5oljj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:14:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 21K writes, 84K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 7517 syncs, 2.90 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 40.75 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4570 syncs, 2.44 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.011 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp0o5oljj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.037 243456 DEBUG nova.storage.rbd_utils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image b883c1a1-cf01-434d-8258-24ca193a2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.042 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config b883c1a1-cf01-434d-8258-24ca193a2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:14:29
Feb 28 10:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'backups']
Feb 28 10:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.226 243456 DEBUG oslo_concurrency.processutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config b883c1a1-cf01-434d-8258-24ca193a2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.227 243456 INFO nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Deleting local config drive /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683/disk.config because it was imported into RBD.
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.2765] manager: (tapaa9724a7-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Feb 28 10:14:29 compute-0 kernel: tapaa9724a7-fa: entered promiscuous mode
Feb 28 10:14:29 compute-0 ovn_controller[146846]: 2026-02-28T10:14:29Z|00655|binding|INFO|Claiming lport aa9724a7-fad1-4968-a1b0-0d8182007723 for this chassis.
Feb 28 10:14:29 compute-0 ovn_controller[146846]: 2026-02-28T10:14:29Z|00656|binding|INFO|aa9724a7-fad1-4968-a1b0-0d8182007723: Claiming fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.301 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:71:d6 10.100.0.6'], port_security=['fa:16:3e:c4:71:d6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b883c1a1-cf01-434d-8258-24ca193a2683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3d01661-9794-4315-81d4-c2d74d609338', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=aa9724a7-fad1-4968-a1b0-0d8182007723) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.304 156681 INFO neutron.agent.ovn.metadata.agent [-] Port aa9724a7-fad1-4968-a1b0-0d8182007723 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.307 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:14:29 compute-0 systemd-machined[209480]: New machine qemu-88-instance-0000004e.
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17147773-7826-4f01-b867-a7bf217dc308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.322 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e5dcf5b-21 in ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.327 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e5dcf5b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.327 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf83cb0-9122-40b2-a231-27947e16e6fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.328 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd35f3b2-c67d-4bc9-8466-376e91070e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004e.
Feb 28 10:14:29 compute-0 ovn_controller[146846]: 2026-02-28T10:14:29Z|00657|binding|INFO|Setting lport aa9724a7-fad1-4968-a1b0-0d8182007723 ovn-installed in OVS
Feb 28 10:14:29 compute-0 ovn_controller[146846]: 2026-02-28T10:14:29Z|00658|binding|INFO|Setting lport aa9724a7-fad1-4968-a1b0-0d8182007723 up in Southbound
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.343 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd6b508-c09f-4c9b-8f3d-f734b9151a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 systemd-udevd[308318]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.3563] device (tapaa9724a7-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.3570] device (tapaa9724a7-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27fcef33-4285-4b94-aaf1-4b2ee55993f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.386 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9596399a-b7a2-4457-a932-312abd9f062e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.3934] manager: (tap2e5dcf5b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Feb 28 10:14:29 compute-0 systemd-udevd[308321]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a40b4cb-8478-4ae2-acdf-3460b9eba159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.423 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[49f15fb0-b1f8-4065-a994-5545940d8f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.425 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c25f1aed-1ee4-4a33-80b7-063300757064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.4473] device (tap2e5dcf5b-20): carrier: link connected
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.453 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eabf4463-c09f-48a8-82ce-dc84612241eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b01e82ef-3c84-4e17-954c-efb202777100]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308349, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.481 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4941b6f6-f665-40a1-b3cb-19b6e0b20465]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:a820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517673, 'tstamp': 517673}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308350, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.499 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 WARNING nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state None.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.500 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Processing event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.501 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG oslo_concurrency.lockutils [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 DEBUG nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.502 243456 WARNING nova.compute.manager [req-be65a5f4-6b9d-4123-aaae-400a667837de req-aff30f4d-bd2c-47c2-ab32-33a64440a993 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state building and task_state spawning.
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.502 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d796c6d-68a1-414c-a47b-8d163ef6265f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308351, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.503 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.513 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.5134647, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.514 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Resumed (Lifecycle Event)
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.517 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.524 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance spawned successfully.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.524 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01d5bc47-c8b1-4e10-b32e-a205029adf7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.544 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.548 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.560 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.561 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.561 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.562 243456 DEBUG nova.virt.libvirt.driver [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.566 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.614 243456 INFO nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 8.60 seconds to spawn the instance on the hypervisor.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.614 243456 DEBUG nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.611 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c7085e-66f2-4078-9c58-42f6a78bf21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:29 compute-0 NetworkManager[49805]: <info>  [1772273669.6189] manager: (tap2e5dcf5b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Feb 28 10:14:29 compute-0 kernel: tap2e5dcf5b-20: entered promiscuous mode
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.621 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:29 compute-0 ovn_controller[146846]: 2026-02-28T10:14:29Z|00659|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.631 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e468e314-cc3c-4939-b158-c4af7b935466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.632 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.pid.haproxy
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:29.633 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'env', 'PROCESS_TAG=haproxy-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e5dcf5b-2f4a-41dc-9c28-b500e2889923.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.695 243456 INFO nova.compute.manager [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 10.67 seconds to build instance.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.710 243456 DEBUG oslo_concurrency.lockutils [None req-390f04bd-894c-4448-8f3c-e6e277a82b4c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.786 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.786109, b883c1a1-cf01-434d-8258-24ca193a2683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.788 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Started (Lifecycle Event)
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.816 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.826 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.7885325, b883c1a1-cf01-434d-8258-24ca193a2683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.827 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Paused (Lifecycle Event)
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.848 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.852 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.872 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.904 243456 INFO nova.compute.manager [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Pausing
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.905 243456 DEBUG nova.objects.instance [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'flavor' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.931 243456 DEBUG nova.compute.manager [None req-e3852afc-64e9-4632-b4ec-e0e2b4c56088 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.932 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273669.931356, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.932 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Paused (Lifecycle Event)
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.972 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:29 compute-0 podman[308423]: 2026-02-28 10:14:29.994363676 +0000 UTC m=+0.057402975 container create 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:14:29 compute-0 nova_compute[243452]: 2026-02-28 10:14:29.997 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 28 10:14:30 compute-0 systemd[1]: Started libpod-conmon-2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b.scope.
Feb 28 10:14:30 compute-0 podman[308423]: 2026-02-28 10:14:29.96286217 +0000 UTC m=+0.025901489 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70ff2f5773caafa9c58f78253f13ae91fe67282a6374207ba8de2071a659ac6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:30 compute-0 podman[308423]: 2026-02-28 10:14:30.098773752 +0000 UTC m=+0.161813071 container init 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:14:30 compute-0 podman[308423]: 2026-02-28 10:14:30.1147305 +0000 UTC m=+0.177769839 container start 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:30 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : New worker (308443) forked
Feb 28 10:14:30 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : Loading success.
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:14:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:14:30 compute-0 nova_compute[243452]: 2026-02-28 10:14:30.761 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273655.759299, 1e13ffbf-dba5-421b-afc3-84eb471e2d44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:30 compute-0 nova_compute[243452]: 2026-02-28 10:14:30.761 243456 INFO nova.compute.manager [-] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] VM Stopped (Lifecycle Event)
Feb 28 10:14:30 compute-0 nova_compute[243452]: 2026-02-28 10:14:30.780 243456 DEBUG nova.compute.manager [None req-8a2faace-8cbf-44a5-83de-d20840d249c2 - - - - - -] [instance: 1e13ffbf-dba5-421b-afc3-84eb471e2d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:14:31 compute-0 ceph-mon[76304]: pgmap v1458: 305 pgs: 305 active+clean; 497 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.282 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.282 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.283 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Processing event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.284 243456 DEBUG oslo_concurrency.lockutils [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.285 243456 DEBUG nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] No waiting events found dispatching network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.285 243456 WARNING nova.compute.manager [req-5d9ac18f-fdc5-41de-97c9-dc916f30848d req-5bfd5e07-8cc1-48a9-bf55-8c6639c0bfb4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received unexpected event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 for instance with vm_state building and task_state spawning.
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.286 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.291 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273671.2909417, b883c1a1-cf01-434d-8258-24ca193a2683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.291 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Resumed (Lifecycle Event)
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.298 243456 INFO nova.virt.libvirt.driver [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance spawned successfully.
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.298 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.329 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.337 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.342 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.342 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.343 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.344 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.344 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.345 243456 DEBUG nova.virt.libvirt.driver [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.386 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.515 243456 INFO nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 9.45 seconds to spawn the instance on the hypervisor.
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.516 243456 DEBUG nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.615 243456 INFO nova.compute.manager [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 11.02 seconds to build instance.
Feb 28 10:14:31 compute-0 nova_compute[243452]: 2026-02-28 10:14:31.645 243456 DEBUG oslo_concurrency.lockutils [None req-63dba35b-12be-49b3-88d1-c8f69280a2d8 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 498 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 283 op/s
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.349 243456 INFO nova.compute.manager [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Unpausing
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.350 243456 DEBUG nova.objects.instance [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'flavor' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.376 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273672.375541, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.377 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Resumed (Lifecycle Event)
Feb 28 10:14:32 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.381 243456 DEBUG nova.virt.libvirt.guest [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.382 243456 DEBUG nova.compute.manager [None req-d6e172ce-63f4-4605-8258-3115deba821a ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.394 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.398 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:32 compute-0 nova_compute[243452]: 2026-02-28 10:14:32.426 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.117 243456 INFO nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Rebuilding instance
Feb 28 10:14:33 compute-0 ceph-mon[76304]: pgmap v1459: 305 pgs: 305 active+clean; 498 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 283 op/s
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.436 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'trusted_certs' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.463 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.506 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_requests' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.529 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.539 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.551 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.564 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.568 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:14:33 compute-0 nova_compute[243452]: 2026-02-28 10:14:33.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.8 MiB/s wr, 290 op/s
Feb 28 10:14:34 compute-0 NetworkManager[49805]: <info>  [1772273674.4021] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Feb 28 10:14:34 compute-0 NetworkManager[49805]: <info>  [1772273674.4031] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Feb 28 10:14:34 compute-0 nova_compute[243452]: 2026-02-28 10:14:34.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:34 compute-0 ovn_controller[146846]: 2026-02-28T10:14:34Z|00660|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 10:14:34 compute-0 ovn_controller[146846]: 2026-02-28T10:14:34Z|00661|binding|INFO|Releasing lport 92bcea78-9a21-4d44-99f4-fd3e41fc7e97 from this chassis (sb_readonly=0)
Feb 28 10:14:34 compute-0 nova_compute[243452]: 2026-02-28 10:14:34.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:34 compute-0 ovn_controller[146846]: 2026-02-28T10:14:34Z|00662|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 10:14:34 compute-0 ovn_controller[146846]: 2026-02-28T10:14:34Z|00663|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:14:34 compute-0 nova_compute[243452]: 2026-02-28 10:14:34.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.238 243456 DEBUG nova.compute.manager [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.239 243456 DEBUG nova.compute.manager [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.241 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.241 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.242 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:14:35 compute-0 ceph-mon[76304]: pgmap v1460: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.8 MiB/s wr, 290 op/s
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.506 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.508 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.508 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.509 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.510 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.512 243456 INFO nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Terminating instance
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.515 243456 DEBUG nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:14:35 compute-0 kernel: tapa2f66c0b-78 (unregistering): left promiscuous mode
Feb 28 10:14:35 compute-0 NetworkManager[49805]: <info>  [1772273675.5767] device (tapa2f66c0b-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:35 compute-0 ovn_controller[146846]: 2026-02-28T10:14:35Z|00664|binding|INFO|Releasing lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 from this chassis (sb_readonly=0)
Feb 28 10:14:35 compute-0 ovn_controller[146846]: 2026-02-28T10:14:35Z|00665|binding|INFO|Setting lport a2f66c0b-78f3-49cb-929b-5e9b4072beb0 down in Southbound
Feb 28 10:14:35 compute-0 ovn_controller[146846]: 2026-02-28T10:14:35Z|00666|binding|INFO|Removing iface tapa2f66c0b-78 ovn-installed in OVS
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.604 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:b7:32 10.100.0.6'], port_security=['fa:16:3e:39:b7:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc2dbab8-312e-4130-8141-d848beeb6bec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a2f66c0b-78f3-49cb-929b-5e9b4072beb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.605 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a2f66c0b-78f3-49cb-929b-5e9b4072beb0 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.607 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621843b6-256a-4ce5-83c3-83b888738508
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9794184f-3db3-4a40-b8a7-74b55c0c3308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 28 10:14:35 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000004b.scope: Consumed 10.084s CPU time.
Feb 28 10:14:35 compute-0 systemd-machined[209480]: Machine qemu-85-instance-0000004b terminated.
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.661 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[097f59b5-08a2-4c74-afd3-8f3a0e151e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.665 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccf2174-2f8f-48ff-a5e7-a111cce50a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.693 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aa72a19e-ab3d-45ae-8e72-db2fcb622ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1baa0ec2-646e-4038-b41c-84ae54dbf19c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621843b6-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:07:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514758, 'reachable_time': 23572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308465, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a28d6c5-ad39-40bb-9c2e-f46a582e57f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514771, 'tstamp': 514771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308466, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621843b6-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514773, 'tstamp': 514773}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308466, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.727 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621843b6-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621843b6-20, col_values=(('external_ids', {'iface-id': '92bcea78-9a21-4d44-99f4-fd3e41fc7e97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:35.736 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.741 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.755 243456 INFO nova.virt.libvirt.driver [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Instance destroyed successfully.
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.756 243456 DEBUG nova.objects.instance [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid dc2dbab8-312e-4130-8141-d848beeb6bec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.770 243456 DEBUG nova.virt.libvirt.vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1249757035',display_name='tempest-ServerRescueNegativeTestJSON-server-1249757035',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1249757035',id=75,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-hyvw557d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:25Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=dc2dbab8-312e-4130-8141-d848beeb6bec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.770 243456 DEBUG nova.network.os_vif_util [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "address": "fa:16:3e:39:b7:32", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2f66c0b-78", "ovs_interfaceid": "a2f66c0b-78f3-49cb-929b-5e9b4072beb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.771 243456 DEBUG nova.network.os_vif_util [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.771 243456 DEBUG os_vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.774 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2f66c0b-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:35 compute-0 nova_compute[243452]: 2026-02-28 10:14:35.780 243456 INFO os_vif [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:b7:32,bridge_name='br-int',has_traffic_filtering=True,id=a2f66c0b-78f3-49cb-929b-5e9b4072beb0,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2f66c0b-78')
Feb 28 10:14:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.2 MiB/s wr, 340 op/s
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.221 243456 INFO nova.virt.libvirt.driver [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deleting instance files /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec_del
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.222 243456 INFO nova.virt.libvirt.driver [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deletion of /var/lib/nova/instances/dc2dbab8-312e-4130-8141-d848beeb6bec_del complete
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.302 243456 INFO nova.compute.manager [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 0.79 seconds to destroy the instance on the hypervisor.
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.303 243456 DEBUG oslo.service.loopingcall [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.303 243456 DEBUG nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.304 243456 DEBUG nova.network.neutron [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.721 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.721 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.722 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.722 243456 DEBUG oslo_concurrency.lockutils [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.723 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:36 compute-0 nova_compute[243452]: 2026-02-28 10:14:36.723 243456 DEBUG nova.compute.manager [req-50259988-339e-495d-b775-d997ddc3d095 req-291314b2-412d-4861-b377-83efe7ce464e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-unplugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:14:37 compute-0 podman[308498]: 2026-02-28 10:14:37.144036501 +0000 UTC m=+0.076930904 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:37 compute-0 podman[308497]: 2026-02-28 10:14:37.157655404 +0000 UTC m=+0.093286524 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.206 243456 DEBUG nova.network.neutron [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.224 243456 INFO nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Took 0.92 seconds to deallocate network for instance.
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.264 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.265 243456 DEBUG nova.network.neutron [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:37 compute-0 ceph-mon[76304]: pgmap v1461: 305 pgs: 305 active+clean; 498 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.2 MiB/s wr, 340 op/s
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.280 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.281 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.293 243456 DEBUG oslo_concurrency.lockutils [req-e9baf6c1-e594-4de2-bad8-9f6fe1d9fe40 req-134dcaa3-4e81-4ac7-a66e-755552174824 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.329 243456 DEBUG nova.compute.manager [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG nova.compute.manager [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing instance network info cache due to event network-changed-aa9724a7-fad1-4968-a1b0-0d8182007723. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.331 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.332 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Refreshing network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:14:37 compute-0 nova_compute[243452]: 2026-02-28 10:14:37.409 243456 DEBUG oslo_concurrency.processutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375755774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.019 243456 DEBUG oslo_concurrency.processutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.026 243456 DEBUG nova.compute.provider_tree [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.045 243456 DEBUG nova.scheduler.client.report [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.071 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.111 243456 INFO nova.scheduler.client.report [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Deleted allocations for instance dc2dbab8-312e-4130-8141-d848beeb6bec
Feb 28 10:14:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 457 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 41 KiB/s wr, 312 op/s
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.193 243456 DEBUG oslo_concurrency.lockutils [None req-5e284f4b-e3a7-4c8e-8577-bf0383f6ad46 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2375755774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:38 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 28 10:14:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.805 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.806 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG oslo_concurrency.lockutils [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "dc2dbab8-312e-4130-8141-d848beeb6bec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.807 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] No waiting events found dispatching network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.808 243456 WARNING nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received unexpected event network-vif-plugged-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 for instance with vm_state deleted and task_state None.
Feb 28 10:14:38 compute-0 nova_compute[243452]: 2026-02-28 10:14:38.808 243456 DEBUG nova.compute.manager [req-81681a52-1d93-4403-8c15-6b24ffc8ccb3 req-24ac5a34-84bb-448c-b571-62e35222feb0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Received event network-vif-deleted-a2f66c0b-78f3-49cb-929b-5e9b4072beb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:39 compute-0 ceph-mon[76304]: pgmap v1462: 305 pgs: 305 active+clean; 457 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 41 KiB/s wr, 312 op/s
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.361 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updated VIF entry in instance network info cache for port aa9724a7-fad1-4968-a1b0-0d8182007723. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.363 243456 DEBUG nova.network.neutron [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [{"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.383 243456 DEBUG oslo_concurrency.lockutils [req-9500b82e-2c3a-455b-ab89-7673d9dde0a0 req-f4ac86c8-c97a-4cb8-a362-ddcfa3d3cc83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b883c1a1-cf01-434d-8258-24ca193a2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.434 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.434 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.435 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.436 243456 INFO nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Terminating instance
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.437 243456 DEBUG nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:14:39 compute-0 kernel: tap26c42747-49 (unregistering): left promiscuous mode
Feb 28 10:14:39 compute-0 NetworkManager[49805]: <info>  [1772273679.4757] device (tap26c42747-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:39 compute-0 ovn_controller[146846]: 2026-02-28T10:14:39Z|00667|binding|INFO|Releasing lport 26c42747-4919-4440-9b73-cf3516525108 from this chassis (sb_readonly=0)
Feb 28 10:14:39 compute-0 ovn_controller[146846]: 2026-02-28T10:14:39Z|00668|binding|INFO|Setting lport 26c42747-4919-4440-9b73-cf3516525108 down in Southbound
Feb 28 10:14:39 compute-0 ovn_controller[146846]: 2026-02-28T10:14:39Z|00669|binding|INFO|Removing iface tap26c42747-49 ovn-installed in OVS
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.497 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:43:e1 10.100.0.5'], port_security=['fa:16:3e:5f:43:e1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40f8f3fa-1f1c-440e-a640-5a223b1ca9b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621843b6-256a-4ce5-83c3-83b888738508', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809bf856030f4316b385ba1c02291ca7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ac4acb3-b14a-4b15-b397-21203f1665be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a33e3f0-a2b2-429c-8d14-4c6d980064b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=26c42747-4919-4440-9b73-cf3516525108) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 26c42747-4919-4440-9b73-cf3516525108 in datapath 621843b6-256a-4ce5-83c3-83b888738508 unbound from our chassis
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.502 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 621843b6-256a-4ce5-83c3-83b888738508, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.503 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02a2d2a4-192f-4fce-b166-9969edfd5e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.504 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 namespace which is not needed anymore
Feb 28 10:14:39 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 28 10:14:39 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d0000004a.scope: Consumed 13.161s CPU time.
Feb 28 10:14:39 compute-0 systemd-machined[209480]: Machine qemu-82-instance-0000004a terminated.
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [NOTICE]   (305887) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : Exiting Master process...
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : Exiting Master process...
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [ALERT]    (305887) : Current worker (305889) exited with code 143 (Terminated)
Feb 28 10:14:39 compute-0 neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508[305881]: [WARNING]  (305887) : All workers exited. Exiting... (0)
Feb 28 10:14:39 compute-0 systemd[1]: libpod-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope: Deactivated successfully.
Feb 28 10:14:39 compute-0 podman[308582]: 2026-02-28 10:14:39.632706798 +0000 UTC m=+0.052217199 container died 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cfd78d30250744f4812bed5eda336963389aeb75643f7f79dbbc06f1fb2c979-merged.mount: Deactivated successfully.
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.673 243456 INFO nova.virt.libvirt.driver [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Instance destroyed successfully.
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.674 243456 DEBUG nova.objects.instance [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lazy-loading 'resources' on Instance uuid 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:39 compute-0 podman[308582]: 2026-02-28 10:14:39.682602051 +0000 UTC m=+0.102112372 container cleanup 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.693 243456 DEBUG nova.virt.libvirt.vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:13:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-191562024',display_name='tempest-ServerRescueNegativeTestJSON-server-191562024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-191562024',id=74,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='809bf856030f4316b385ba1c02291ca7',ramdisk_id='',reservation_id='r-55ob4e3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-338743494',owner_user_name='tempest-ServerRescueNegativeTestJSON-338743494-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:32Z,user_data=None,user_id='ec5caafc16ec43a493f7d553353a27c3',uuid=40f8f3fa-1f1c-440e-a640-5a223b1ca9b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.693 243456 DEBUG nova.network.os_vif_util [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converting VIF {"id": "26c42747-4919-4440-9b73-cf3516525108", "address": "fa:16:3e:5f:43:e1", "network": {"id": "621843b6-256a-4ce5-83c3-83b888738508", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445495126-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "809bf856030f4316b385ba1c02291ca7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c42747-49", "ovs_interfaceid": "26c42747-4919-4440-9b73-cf3516525108", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.696 243456 DEBUG nova.network.os_vif_util [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.697 243456 DEBUG os_vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c42747-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 systemd[1]: libpod-conmon-05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6.scope: Deactivated successfully.
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.710 243456 INFO os_vif [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:43:e1,bridge_name='br-int',has_traffic_filtering=True,id=26c42747-4919-4440-9b73-cf3516525108,network=Network(621843b6-256a-4ce5-83c3-83b888738508),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c42747-49')
Feb 28 10:14:39 compute-0 podman[308621]: 2026-02-28 10:14:39.749727989 +0000 UTC m=+0.047825406 container remove 05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13a3eefa-6166-4d00-8596-49b59916fe08]: (4, ('Sat Feb 28 10:14:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 (05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6)\n05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6\nSat Feb 28 10:14:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 (05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6)\n05b20fd73e9009d8be5a7522bbea0f66998015005d576af422f88197902a44a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b909289c-fb28-4f2c-902d-6a59753e8cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.763 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621843b6-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 kernel: tap621843b6-20: left promiscuous mode
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.777 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[56370350-6a33-4377-9493-b67b1a566243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac798feb-2880-44be-bc76-91947c91ae65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.790 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb2799e-b9ae-4053-96c6-2a11f5fc0175]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.802 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c00c7ba-ce6f-4e7b-b07e-6d7a544ec32f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514750, 'reachable_time': 21671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308654, 'error': None, 'target': 'ovnmeta-621843b6-256a-4ce5-83c3-83b888738508', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d621843b6\x2d256a\x2d4ce5\x2d83c3\x2d83b888738508.mount: Deactivated successfully.
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.807 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-621843b6-256a-4ce5-83c3-83b888738508 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:39.807 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[61a63d4e-131f-457f-a8be-22fab07c6863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:39 compute-0 nova_compute[243452]: 2026-02-28 10:14:39.999 243456 INFO nova.virt.libvirt.driver [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deleting instance files /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_del
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.002 243456 INFO nova.virt.libvirt.driver [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deletion of /var/lib/nova/instances/40f8f3fa-1f1c-440e-a640-5a223b1ca9b8_del complete
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.076 243456 INFO nova.compute.manager [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG oslo.service.loopingcall [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.077 243456 DEBUG nova.network.neutron [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 423 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 931 KiB/s wr, 302 op/s
Feb 28 10:14:40 compute-0 ovn_controller[146846]: 2026-02-28T10:14:40Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 10:14:40 compute-0 ovn_controller[146846]: 2026-02-28T10:14:40Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.303 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.303 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG oslo_concurrency.lockutils [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:40 compute-0 nova_compute[243452]: 2026-02-28 10:14:40.304 243456 DEBUG nova.compute.manager [req-bc87526d-a278-4304-8fcf-d7926bbbe1a1 req-b2321d45-f459-46e5-909d-7e97d7faef2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-unplugged-26c42747-4919-4440-9b73-cf3516525108 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023727217339721053 of space, bias 1.0, pg target 0.7118165201916316 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493456532932348 of space, bias 1.0, pg target 0.7480369598797043 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.871523603312771e-07 of space, bias 4.0, pg target 0.0009445828323975324 quantized to 16 (current 16)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:14:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:14:41 compute-0 ceph-mon[76304]: pgmap v1463: 305 pgs: 305 active+clean; 423 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 931 KiB/s wr, 302 op/s
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.356 243456 DEBUG nova.network.neutron [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.382 243456 INFO nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Took 1.30 seconds to deallocate network for instance.
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.416 243456 DEBUG nova.compute.manager [req-6b23a7bb-4ecb-4e65-b051-78c751131130 req-97ced286-2112-4d58-a77c-9312e5b4bad0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-deleted-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.436 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.436 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:41 compute-0 nova_compute[243452]: 2026-02-28 10:14:41.520 243456 DEBUG oslo_concurrency.processutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089390432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.117 243456 DEBUG oslo_concurrency.processutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.125 243456 DEBUG nova.compute.provider_tree [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.147 243456 DEBUG nova.scheduler.client.report [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 373 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 274 op/s
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.180 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.209 243456 INFO nova.scheduler.client.report [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Deleted allocations for instance 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8
Feb 28 10:14:42 compute-0 ovn_controller[146846]: 2026-02-28T10:14:42Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.294 243456 DEBUG oslo_concurrency.lockutils [None req-4ebbf142-9abe-434d-81f3-4728ca8117a2 ec5caafc16ec43a493f7d553353a27c3 809bf856030f4316b385ba1c02291ca7 - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:42 compute-0 ovn_controller[146846]: 2026-02-28T10:14:42Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:14:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2089390432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.382 243456 DEBUG nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.383 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.384 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.384 243456 DEBUG oslo_concurrency.lockutils [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "40f8f3fa-1f1c-440e-a640-5a223b1ca9b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.385 243456 DEBUG nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] No waiting events found dispatching network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:42 compute-0 nova_compute[243452]: 2026-02-28 10:14:42.385 243456 WARNING nova.compute.manager [req-f58bc33b-86dc-4a11-995b-94d692d6320d req-15e52f56-36c4-4f99-88c5-8c7671fd9e2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Received unexpected event network-vif-plugged-26c42747-4919-4440-9b73-cf3516525108 for instance with vm_state deleted and task_state None.
Feb 28 10:14:43 compute-0 ceph-mon[76304]: pgmap v1464: 305 pgs: 305 active+clean; 373 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 274 op/s
Feb 28 10:14:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:43 compute-0 nova_compute[243452]: 2026-02-28 10:14:43.624 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:14:43 compute-0 nova_compute[243452]: 2026-02-28 10:14:43.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 386 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 283 op/s
Feb 28 10:14:44 compute-0 ceph-mon[76304]: pgmap v1465: 305 pgs: 305 active+clean; 386 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 283 op/s
Feb 28 10:14:44 compute-0 nova_compute[243452]: 2026-02-28 10:14:44.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:14:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:14:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:14:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:14:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:14:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/959705387' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:14:45 compute-0 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 10:14:45 compute-0 NetworkManager[49805]: <info>  [1772273685.9790] device (tap0226697b-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:45 compute-0 ovn_controller[146846]: 2026-02-28T10:14:45Z|00670|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 10:14:45 compute-0 ovn_controller[146846]: 2026-02-28T10:14:45Z|00671|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 10:14:45 compute-0 ovn_controller[146846]: 2026-02-28T10:14:45Z|00672|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 10:14:45 compute-0 nova_compute[243452]: 2026-02-28 10:14:45.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.989 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.991 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.993 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebde475-68e0-46b0-84cf-4dcd08cc7016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:45.994 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:46 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 28 10:14:46 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Consumed 12.479s CPU time.
Feb 28 10:14:46 compute-0 systemd-machined[209480]: Machine qemu-86-instance-0000004c terminated.
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [NOTICE]   (308001) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : Exiting Master process...
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : Exiting Master process...
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [ALERT]    (308001) : Current worker (308005) exited with code 143 (Terminated)
Feb 28 10:14:46 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[307996]: [WARNING]  (308001) : All workers exited. Exiting... (0)
Feb 28 10:14:46 compute-0 systemd[1]: libpod-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope: Deactivated successfully.
Feb 28 10:14:46 compute-0 podman[308702]: 2026-02-28 10:14:46.126024696 +0000 UTC m=+0.044560604 container died 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-750aff55c3900492540f27eedaa3917fbcf26f3c02e0a8efc1a0b07485dba889-merged.mount: Deactivated successfully.
Feb 28 10:14:46 compute-0 podman[308702]: 2026-02-28 10:14:46.167460391 +0000 UTC m=+0.085996299 container cleanup 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:14:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 376 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 326 op/s
Feb 28 10:14:46 compute-0 systemd[1]: libpod-conmon-80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4.scope: Deactivated successfully.
Feb 28 10:14:46 compute-0 podman[308730]: 2026-02-28 10:14:46.235281908 +0000 UTC m=+0.053741342 container remove 80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c6c114-b361-4bb3-90cc-ef1c8d729dd5]: (4, ('Sat Feb 28 10:14:46 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4)\n80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4\nSat Feb 28 10:14:46 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4)\n80523720c60d7e21ee54ae4d4c5f0912e0a1427dea8eaa3eaf68896698a53dd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.244 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc20fcfa-fc96-4d57-8858-f19f94a5b3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.245 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:46 compute-0 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa83d7eb-8d04-4765-93b8-c8e854d0f294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[446bc177-cb65-4df5-bed0-d41aaba0b306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.286 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10049df3-937e-4f3f-a488-b3042401b2f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b7e053-ee1c-4791-8c35-45c040119c20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517321, 'reachable_time': 37761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308756, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.304 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:46.304 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[15c33980-a58f-40fd-bdca-9f4b36cf57b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.342 243456 DEBUG nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.342 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG oslo_concurrency.lockutils [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.343 243456 DEBUG nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.344 243456 WARNING nova.compute.manager [req-dfd0e9b6-0aaf-4f74-9eb0-354f65ca9430 req-7d4cd4fe-9d48-4c59-a6ee-c77ba0432eb9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state rebuilding.
Feb 28 10:14:46 compute-0 ovn_controller[146846]: 2026-02-28T10:14:46Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 10:14:46 compute-0 ovn_controller[146846]: 2026-02-28T10:14:46Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:71:d6 10.100.0.6
Feb 28 10:14:46 compute-0 ceph-mon[76304]: pgmap v1466: 305 pgs: 305 active+clean; 376 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 326 op/s
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.638 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance shutdown successfully after 13 seconds.
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.646 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.653 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.654 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-memb
er'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:32Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.654 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.655 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.655 243456 DEBUG os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.658 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0226697b-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.666 243456 INFO os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')
Feb 28 10:14:46 compute-0 sudo[308757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:14:46 compute-0 sudo[308757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:46 compute-0 sudo[308757]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:46 compute-0 sudo[308800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:14:46 compute-0 sudo[308800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.951 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting instance files /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del
Feb 28 10:14:46 compute-0 nova_compute[243452]: 2026-02-28 10:14:46.952 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deletion of /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del complete
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.147 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.148 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating image(s)
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.178 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.203 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.230 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.234 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:47 compute-0 sudo[308800]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.309 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.310 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.311 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.311 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.339 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.344 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:47 compute-0 sudo[308934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:14:47 compute-0 sudo[308934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:47 compute-0 sudo[308934]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:47 compute-0 sudo[308960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:14:47 compute-0 sudo[308960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:14:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.617 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ba33446e-fcd5-454c-bc8c-79a367002d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.678 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.743664699 +0000 UTC m=+0.046294152 container create 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.765 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.766 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Ensure instance console log exists: /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.766 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.767 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.767 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.769 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start _get_guest_xml network_info=[{"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.773 243456 WARNING nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.780 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.781 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:14:47 compute-0 systemd[1]: Started libpod-conmon-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope.
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.784 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.784 243456 DEBUG nova.virt.libvirt.host [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.785 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.786 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.787 243456 DEBUG nova.virt.hardware [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.788 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:47 compute-0 nova_compute[243452]: 2026-02-28 10:14:47.811 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.722847104 +0000 UTC m=+0.025476607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.840737329 +0000 UTC m=+0.143366882 container init 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.84859286 +0000 UTC m=+0.151222353 container start 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.852738706 +0000 UTC m=+0.155368199 container attach 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:14:47 compute-0 heuristic_goodall[309102]: 167 167
Feb 28 10:14:47 compute-0 systemd[1]: libpod-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope: Deactivated successfully.
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.858262232 +0000 UTC m=+0.160891685 container died 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-11989493354dfaeae9eb2e78ac84a83a1984133f3db8dc99d237b4052f988411-merged.mount: Deactivated successfully.
Feb 28 10:14:47 compute-0 podman[309057]: 2026-02-28 10:14:47.905765967 +0000 UTC m=+0.208395440 container remove 37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:14:47 compute-0 systemd[1]: libpod-conmon-37e21d4baddeac6cfdbed16be5c99c1b6cc823e1f7ae1a27bfd319b90fa41a99.scope: Deactivated successfully.
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.106562903 +0000 UTC m=+0.052215080 container create 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:14:48 compute-0 systemd[1]: Started libpod-conmon-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope.
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.082408614 +0000 UTC m=+0.028060780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 371 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 6.4 MiB/s wr, 293 op/s
Feb 28 10:14:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.223944884 +0000 UTC m=+0.169597060 container init 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.232881195 +0000 UTC m=+0.178533331 container start 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.237381312 +0000 UTC m=+0.183033498 container attach 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:14:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556272107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.439 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.466 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.469 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.497 243456 DEBUG nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.498 243456 DEBUG oslo_concurrency.lockutils [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.499 243456 DEBUG nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.499 243456 WARNING nova.compute.manager [req-a086dac7-8742-4b61-a3c8-3b09de117d66 req-152039f9-58ad-4b82-9575-fef171a96201 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:14:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:48 compute-0 ceph-mon[76304]: pgmap v1467: 305 pgs: 305 active+clean; 371 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 6.4 MiB/s wr, 293 op/s
Feb 28 10:14:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2556272107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:48 compute-0 gallant_wilbur[309162]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:14:48 compute-0 gallant_wilbur[309162]: --> All data devices are unavailable
Feb 28 10:14:48 compute-0 systemd[1]: libpod-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope: Deactivated successfully.
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.704498427 +0000 UTC m=+0.650150603 container died 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a26a5d30ea58b2215aaf795b61c01ed421a3eb1b38b110c21316123f2b1ea13-merged.mount: Deactivated successfully.
Feb 28 10:14:48 compute-0 podman[309145]: 2026-02-28 10:14:48.75048291 +0000 UTC m=+0.696135066 container remove 6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:48 compute-0 nova_compute[243452]: 2026-02-28 10:14:48.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:48 compute-0 systemd[1]: libpod-conmon-6ecc4158ac4fb687750c6d5b53b756ce04cc28371bd14e85de5eb1f18033ed32.scope: Deactivated successfully.
Feb 28 10:14:48 compute-0 sudo[308960]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:48 compute-0 sudo[309233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:14:48 compute-0 sudo[309233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:48 compute-0 sudo[309233]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:48 compute-0 sudo[309258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:14:48 compute-0 sudo[309258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.037 243456 INFO nova.compute.manager [None req-acfc8abe-19d5-4309-aa92-5a4058c4c85f 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Get console output
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.045 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:14:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:14:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046892496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.072 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.074 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:47Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.074 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.075 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.078 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <uuid>ba33446e-fcd5-454c-bc8c-79a367002d57</uuid>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <name>instance-0000004c</name>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-877654664</nova:name>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:14:47</nova:creationTime>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <nova:port uuid="0226697b-95b2-4303-aa60-b98eb0bb4cd9">
Feb 28 10:14:49 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="serial">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="uuid">ba33446e-fcd5-454c-bc8c-79a367002d57</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk">
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config">
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:14:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:70:1d:4f"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <target dev="tap0226697b-95"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/console.log" append="off"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:14:49 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:14:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:14:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:14:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:14:49 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.078 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Preparing to wait for external event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.079 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.virt.libvirt.vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-Se
rverDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:47Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.080 243456 DEBUG nova.network.os_vif_util [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.081 243456 DEBUG os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.082 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.086 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0226697b-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.087 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0226697b-95, col_values=(('external_ids', {'iface-id': '0226697b-95b2-4303-aa60-b98eb0bb4cd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:1d:4f', 'vm-uuid': 'ba33446e-fcd5-454c-bc8c-79a367002d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:49 compute-0 NetworkManager[49805]: <info>  [1772273689.0910] manager: (tap0226697b-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.100 243456 INFO os_vif [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:70:1d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.173 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Using config drive
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.193799895 +0000 UTC m=+0.051141549 container create 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.207 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.234 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'ec2_ids' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:49 compute-0 systemd[1]: Started libpod-conmon-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope.
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.175875631 +0000 UTC m=+0.033217365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.275 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'keypairs' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.287376906 +0000 UTC m=+0.144718650 container init 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.293966751 +0000 UTC m=+0.151308405 container start 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:14:49 compute-0 trusting_mendel[309336]: 167 167
Feb 28 10:14:49 compute-0 systemd[1]: libpod-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope: Deactivated successfully.
Feb 28 10:14:49 compute-0 conmon[309336]: conmon 349059f9ebc88309cf7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope/container/memory.events
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.299876108 +0000 UTC m=+0.157217862 container attach 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.301235726 +0000 UTC m=+0.158577410 container died 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4834dd39f65511192abfcbcaba9e8868b77c1e87c3aa52baf0094eb645e766fa-merged.mount: Deactivated successfully.
Feb 28 10:14:49 compute-0 podman[309302]: 2026-02-28 10:14:49.343077172 +0000 UTC m=+0.200418826 container remove 349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:14:49 compute-0 systemd[1]: libpod-conmon-349059f9ebc88309cf7e68db69045f64e40e3977bd15735b3ef4105d89baa49e.scope: Deactivated successfully.
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.416 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.418 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.419 243456 INFO nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Rebooting instance
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.438 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.439 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:14:49 compute-0 nova_compute[243452]: 2026-02-28 10:14:49.440 243456 DEBUG nova.network.neutron [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:14:49 compute-0 podman[309360]: 2026-02-28 10:14:49.498940415 +0000 UTC m=+0.047271040 container create b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:49 compute-0 systemd[1]: Started libpod-conmon-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope.
Feb 28 10:14:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:49 compute-0 podman[309360]: 2026-02-28 10:14:49.479264242 +0000 UTC m=+0.027594887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:49 compute-0 podman[309360]: 2026-02-28 10:14:49.592510516 +0000 UTC m=+0.140841171 container init b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:14:49 compute-0 podman[309360]: 2026-02-28 10:14:49.60724978 +0000 UTC m=+0.155580405 container start b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:14:49 compute-0 podman[309360]: 2026-02-28 10:14:49.611130839 +0000 UTC m=+0.159461484 container attach b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:14:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2046892496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:14:49 compute-0 romantic_margulis[309376]: {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     "0": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "devices": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "/dev/loop3"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             ],
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_name": "ceph_lv0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_size": "21470642176",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "name": "ceph_lv0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "tags": {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_name": "ceph",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.crush_device_class": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.encrypted": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.objectstore": "bluestore",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_id": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.vdo": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.with_tpm": "0"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             },
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "vg_name": "ceph_vg0"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         }
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     ],
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     "1": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "devices": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "/dev/loop4"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             ],
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_name": "ceph_lv1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_size": "21470642176",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "name": "ceph_lv1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "tags": {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_name": "ceph",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.crush_device_class": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.encrypted": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.objectstore": "bluestore",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_id": "1",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.vdo": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.with_tpm": "0"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             },
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "vg_name": "ceph_vg1"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         }
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     ],
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     "2": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "devices": [
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "/dev/loop5"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             ],
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_name": "ceph_lv2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_size": "21470642176",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "name": "ceph_lv2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "tags": {
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.cluster_name": "ceph",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.crush_device_class": "",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.encrypted": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.objectstore": "bluestore",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osd_id": "2",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.vdo": "0",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:                 "ceph.with_tpm": "0"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             },
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "type": "block",
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:             "vg_name": "ceph_vg2"
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:         }
Feb 28 10:14:49 compute-0 romantic_margulis[309376]:     ]
Feb 28 10:14:49 compute-0 romantic_margulis[309376]: }
Feb 28 10:14:49 compute-0 systemd[1]: libpod-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope: Deactivated successfully.
Feb 28 10:14:49 compute-0 podman[309385]: 2026-02-28 10:14:49.933653078 +0000 UTC m=+0.031864527 container died b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-584eb1627ec16f2573d7e5462ccacf62ecc67d49f6169f78f319fd7d127a89f4-merged.mount: Deactivated successfully.
Feb 28 10:14:49 compute-0 podman[309385]: 2026-02-28 10:14:49.989299263 +0000 UTC m=+0.087510672 container remove b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_margulis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:14:49 compute-0 systemd[1]: libpod-conmon-b82afb29ba98a7d654c4751d3ad04dee10c53bcf83ec5e28a14926611e414c61.scope: Deactivated successfully.
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.002 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Creating config drive at /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.009 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5krtzcb1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00673|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00674|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 10:14:50 compute-0 sudo[309258]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 sudo[309403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:14:50 compute-0 sudo[309403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:50 compute-0 sudo[309403]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.160 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5krtzcb1" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 364 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 906 KiB/s rd, 7.5 MiB/s wr, 286 op/s
Feb 28 10:14:50 compute-0 sudo[309428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:14:50 compute-0 sudo[309428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.200 243456 DEBUG nova.storage.rbd_utils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.205 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.377 243456 DEBUG oslo_concurrency.processutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config ba33446e-fcd5-454c-bc8c-79a367002d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.378 243456 INFO nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting local config drive /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57/disk.config because it was imported into RBD.
Feb 28 10:14:50 compute-0 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.4401] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00675|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00676|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.451 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.452 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00677|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00678|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.466 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7d8d5f-7b35-4d1b-9e75-1accfb3ae8e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.466 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.468 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6e024bbb-6c45-4d9d-8dcb-85b2a91e8bf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.469 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[367e365b-dfbf-484c-bc8f-7a803d7c081d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd-machined[209480]: New machine qemu-89-instance-0000004c.
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.485 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[44767f32-81b3-42c7-8e55-c5d1536c2fc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd-udevd[309519]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:50 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004c.
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.5057] device (tap0226697b-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.5064] device (tap0226697b-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85423711-3bdf-4e80-a1c2-645a7c5930c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.537 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[92b894a3-fef8-4797-bdd6-57d9be38b28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[007598aa-78a9-40e9-b37e-75ee975583bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.5433] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.560882755 +0000 UTC m=+0.071521672 container create 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.586 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9a3c51-c577-4ab1-91d7-aeb7406127d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.590 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8dbb4dfb-2732-4ddf-bc64-7101d4c324c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd[1]: Started libpod-conmon-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope.
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.6140] device (tap77a5b13a-e0): carrier: link connected
Feb 28 10:14:50 compute-0 ceph-mon[76304]: pgmap v1468: 305 pgs: 305 active+clean; 364 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 906 KiB/s rd, 7.5 MiB/s wr, 286 op/s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.619 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6be2018-43a8-4b8a-b2db-073f5ebfb3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.526481528 +0000 UTC m=+0.037120475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.639 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a98ac6dc-f7e6-43f6-a2fa-749f250c4749]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519790, 'reachable_time': 36879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309566, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4348b0e8-b6dc-42d4-8898-cd4ad96ebced]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519790, 'tstamp': 519790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309568, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.658644054 +0000 UTC m=+0.169283011 container init 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.669328244 +0000 UTC m=+0.179967151 container start 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.672802132 +0000 UTC m=+0.183441159 container attach 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:14:50 compute-0 fervent_kirch[309564]: 167 167
Feb 28 10:14:50 compute-0 systemd[1]: libpod-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope: Deactivated successfully.
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.677101463 +0000 UTC m=+0.187740380 container died 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a43a2ea-e2db-4b66-8dfc-479f4ab6fb6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519790, 'reachable_time': 36879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309570, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd30f041bbfcd0e9b012b045b00af521ab95ced666631c3bae0e2479b628c3ec-merged.mount: Deactivated successfully.
Feb 28 10:14:50 compute-0 podman[309517]: 2026-02-28 10:14:50.716739027 +0000 UTC m=+0.227377934 container remove 7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kirch, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e50e3baf-c6da-442d-8f1a-db1c12ece1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 systemd[1]: libpod-conmon-7f984d968e601b0c729a9eeae2c70405a343b85892e49fda91b649d3faedf4d7.scope: Deactivated successfully.
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.749 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273675.747812, dc2dbab8-312e-4130-8141-d848beeb6bec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.749 243456 INFO nova.compute.manager [-] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] VM Stopped (Lifecycle Event)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93f31cbe-5d20-430a-9882-3e2644d8b0b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.790 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:50 compute-0 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 10:14:50 compute-0 NetworkManager[49805]: <info>  [1772273690.7931] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 ovn_controller[146846]: 2026-02-28T10:14:50Z|00679|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.797 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.799 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[830bac77-fa11-4616-b603-cd0be972da96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.799 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:50.800 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.805 243456 DEBUG nova.compute.manager [None req-49c65651-53bc-4764-b26c-6cc8f9f5dda0 - - - - - -] [instance: dc2dbab8-312e-4130-8141-d848beeb6bec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.875 243456 DEBUG nova.compute.manager [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG oslo_concurrency.lockutils [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.876 243456 DEBUG nova.compute.manager [req-a29a94c1-d475-4afb-92cc-5ad1ce78362f req-49a26b02-6110-42e3-9726-b95d728ece3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Processing event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:14:50 compute-0 podman[309639]: 2026-02-28 10:14:50.895427942 +0000 UTC m=+0.049034840 container create 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.918 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.919 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ba33446e-fcd5-454c-bc8c-79a367002d57 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.920 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9190905, ba33446e-fcd5-454c-bc8c-79a367002d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.920 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Started (Lifecycle Event)
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.925 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.930 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance spawned successfully.
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.930 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:14:50 compute-0 systemd[1]: Started libpod-conmon-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope.
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.958 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:50 compute-0 podman[309639]: 2026-02-28 10:14:50.873548797 +0000 UTC m=+0.027155675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.971 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.972 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.972 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.973 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.973 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.974 243456 DEBUG nova.virt.libvirt.driver [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:14:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:50 compute-0 nova_compute[243452]: 2026-02-28 10:14:50.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:50 compute-0 podman[309639]: 2026-02-28 10:14:50.997777139 +0000 UTC m=+0.151384047 container init 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:14:51 compute-0 podman[309639]: 2026-02-28 10:14:51.004630361 +0000 UTC m=+0.158237249 container start 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:14:51 compute-0 podman[309639]: 2026-02-28 10:14:51.008986634 +0000 UTC m=+0.162593512 container attach 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.034 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.035 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9196408, ba33446e-fcd5-454c-bc8c-79a367002d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.035 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Paused (Lifecycle Event)
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.063 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.067 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273690.9231927, ba33446e-fcd5-454c-bc8c-79a367002d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.068 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Resumed (Lifecycle Event)
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.072 243456 DEBUG nova.compute.manager [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.086 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.119 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.144 243456 DEBUG nova.objects.instance [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:14:51 compute-0 podman[309681]: 2026-02-28 10:14:51.190283112 +0000 UTC m=+0.055995026 container create e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.207 243456 DEBUG oslo_concurrency.lockutils [None req-45775333-b0c4-4166-b577-596d96c68590 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:51 compute-0 systemd[1]: Started libpod-conmon-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope.
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.251 243456 DEBUG nova.network.neutron [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:51 compute-0 podman[309681]: 2026-02-28 10:14:51.161514803 +0000 UTC m=+0.027226707 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.271 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:14:51 compute-0 nova_compute[243452]: 2026-02-28 10:14:51.272 243456 DEBUG nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8037661ea88197897d9b5ce5e2095358c49702921dd5e2e5c0833d172654169/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:51 compute-0 podman[309681]: 2026-02-28 10:14:51.285341415 +0000 UTC m=+0.151053339 container init e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:14:51 compute-0 podman[309681]: 2026-02-28 10:14:51.292888027 +0000 UTC m=+0.158599931 container start e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:14:51 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : New worker (309712) forked
Feb 28 10:14:51 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : Loading success.
Feb 28 10:14:51 compute-0 lvm[309782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:14:51 compute-0 lvm[309782]: VG ceph_vg0 finished
Feb 28 10:14:51 compute-0 lvm[309784]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:14:51 compute-0 lvm[309784]: VG ceph_vg1 finished
Feb 28 10:14:51 compute-0 lvm[309785]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:14:51 compute-0 lvm[309785]: VG ceph_vg2 finished
Feb 28 10:14:51 compute-0 suspicious_satoshi[309656]: {}
Feb 28 10:14:51 compute-0 systemd[1]: libpod-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Deactivated successfully.
Feb 28 10:14:51 compute-0 systemd[1]: libpod-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Consumed 1.185s CPU time.
Feb 28 10:14:51 compute-0 podman[309639]: 2026-02-28 10:14:51.811402666 +0000 UTC m=+0.965009544 container died 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb07f03913c9eb0693bbc0970752abfd9cb74906940b0baa5e8bbdaa5c181130-merged.mount: Deactivated successfully.
Feb 28 10:14:51 compute-0 podman[309639]: 2026-02-28 10:14:51.852819261 +0000 UTC m=+1.006426139 container remove 00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:14:51 compute-0 systemd[1]: libpod-conmon-00224d781ae3cd792cebbcbd561eeaa5e1a089c0da4d2fd9e71d06dbeefc62b6.scope: Deactivated successfully.
Feb 28 10:14:51 compute-0 sudo[309428]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:14:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:14:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:51 compute-0 sudo[309799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:14:51 compute-0 sudo[309799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:14:51 compute-0 sudo[309799]: pam_unix(sudo:session): session closed for user root
Feb 28 10:14:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 358 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 7.3 MiB/s wr, 268 op/s
Feb 28 10:14:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:14:53 compute-0 ceph-mon[76304]: pgmap v1469: 305 pgs: 305 active+clean; 358 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 7.3 MiB/s wr, 268 op/s
Feb 28 10:14:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:53 compute-0 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 10:14:53 compute-0 NetworkManager[49805]: <info>  [1772273693.6354] device (tap6b5acb8c-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.641 243456 DEBUG nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG oslo_concurrency.lockutils [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 DEBUG nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.644 243456 WARNING nova.compute.manager [req-ee251f27-473c-43f9-904a-d798fe76e14e req-7dbc9cc4-a500-487e-bd2f-c4de70498770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state active and task_state None.
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00680|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00681|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe down in Southbound
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00682|binding|INFO|Removing iface tap6b5acb8c-5d ovn-installed in OVS
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.665 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.667 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.668 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba46ae2b-b0c0-4105-941e-c7f32904f76d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.670 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace which is not needed anymore
Feb 28 10:14:53 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Feb 28 10:14:53 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Consumed 13.378s CPU time.
Feb 28 10:14:53 compute-0 systemd-machined[209480]: Machine qemu-87-instance-0000004d terminated.
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:53 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [NOTICE]   (308231) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:53 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [ALERT]    (308231) : Current worker (308233) exited with code 143 (Terminated)
Feb 28 10:14:53 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[308227]: [WARNING]  (308231) : All workers exited. Exiting... (0)
Feb 28 10:14:53 compute-0 systemd[1]: libpod-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope: Deactivated successfully.
Feb 28 10:14:53 compute-0 podman[309846]: 2026-02-28 10:14:53.810631212 +0000 UTC m=+0.050677746 container died 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0b749c5a528cfce51b141fd23aad936756a87d85288db6d59ade46c812befa-merged.mount: Deactivated successfully.
Feb 28 10:14:53 compute-0 podman[309846]: 2026-02-28 10:14:53.845105091 +0000 UTC m=+0.085151665 container cleanup 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:53 compute-0 systemd[1]: libpod-conmon-4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9.scope: Deactivated successfully.
Feb 28 10:14:53 compute-0 NetworkManager[49805]: <info>  [1772273693.8674] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Feb 28 10:14:53 compute-0 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 10:14:53 compute-0 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.871 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00683|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00684|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.886 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:53 compute-0 ovn_controller[146846]: 2026-02-28T10:14:53Z|00685|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.912 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:53 compute-0 podman[309875]: 2026-02-28 10:14:53.928310361 +0000 UTC m=+0.054710030 container remove 4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06d8e051-8a28-45b4-8303-7992688d1a0b]: (4, ('Sat Feb 28 10:14:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9)\n4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9\nSat Feb 28 10:14:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9)\n4801c70987802e288cccdedf23cdd58ccd8ddbe7bb1dca18a48ebe6af2fd1fb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0498fd7b-7f8d-4268-8c60-9fa82d840bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.937 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 kernel: tap269fae56-40: left promiscuous mode
Feb 28 10:14:53 compute-0 nova_compute[243452]: 2026-02-28 10:14:53.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08c0bbd4-78bf-489f-8d23-fa1dcb07f816]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd1b429-f19f-45b9-bcc5-009705470af1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c867bab-8729-4181-9255-3ba0db075cf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.988 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6aad80e0-b49e-49da-a27d-2f1b5c9d3dcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517446, 'reachable_time': 22305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309898, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d269fae56\x2d42c3\x2d478e\x2d88d5\x2d36164c0a6ae4.mount: Deactivated successfully.
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.993 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.993 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c19c5da7-fbf8-433b-ac1e-f283cf51bad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.994 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.996 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9455ac56-8d50-4e33-a75f-a7fb642d284a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.997 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.999 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:53.999 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6144ff33-0e9e-4caf-933a-3ab6583b5763]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 6.8 MiB/s wr, 233 op/s
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.308 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.309 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.310 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.311 243456 INFO nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Terminating instance
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.312 243456 DEBUG nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:14:54 compute-0 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.3525] device (tap0226697b-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00686|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00687|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00688|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.372 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.374 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.376 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.377 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a58ea6d-0c05-457d-80ad-2189d6625caa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.378 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.400 243456 INFO nova.virt.libvirt.driver [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance shutdown successfully.
Feb 28 10:14:54 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Feb 28 10:14:54 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Consumed 3.783s CPU time.
Feb 28 10:14:54 compute-0 systemd-machined[209480]: Machine qemu-89-instance-0000004c terminated.
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.4542] manager: (tap6b5acb8c-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Feb 28 10:14:54 compute-0 systemd-udevd[309826]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:14:54 compute-0 kernel: tap6b5acb8c-5d: entered promiscuous mode
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00689|binding|INFO|Claiming lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe for this chassis.
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00690|binding|INFO|6b5acb8c-5d09-42b0-9c1d-b51be18712fe: Claiming fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.472 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.4759] device (tap6b5acb8c-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00691|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe ovn-installed in OVS
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.4775] device (tap6b5acb8c-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00692|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe up in Southbound
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.480 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 systemd-machined[209480]: New machine qemu-90-instance-0000004d.
Feb 28 10:14:54 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004d.
Feb 28 10:14:54 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : haproxy version is 2.8.14-c23fe91
Feb 28 10:14:54 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [NOTICE]   (309710) : path to executable is /usr/sbin/haproxy
Feb 28 10:14:54 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [WARNING]  (309710) : Exiting Master process...
Feb 28 10:14:54 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [ALERT]    (309710) : Current worker (309712) exited with code 143 (Terminated)
Feb 28 10:14:54 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[309706]: [WARNING]  (309710) : All workers exited. Exiting... (0)
Feb 28 10:14:54 compute-0 systemd[1]: libpod-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope: Deactivated successfully.
Feb 28 10:14:54 compute-0 podman[309931]: 2026-02-28 10:14:54.524926736 +0000 UTC m=+0.057490677 container died e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:54 compute-0 kernel: tap0226697b-95: entered promiscuous mode
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.5376] manager: (tap0226697b-95): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00693|binding|INFO|Claiming lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 for this chassis.
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00694|binding|INFO|0226697b-95b2-4303-aa60-b98eb0bb4cd9: Claiming fa:16:3e:70:1d:4f 10.100.0.13
Feb 28 10:14:54 compute-0 kernel: tap0226697b-95 (unregistering): left promiscuous mode
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00695|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 ovn-installed in OVS
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00696|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 up in Southbound
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.555 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00697|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=1)
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00698|if_status|INFO|Dropped 1 log messages in last 782 seconds (most recently, 782 seconds ago) due to excessive rate
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00699|if_status|INFO|Not setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down as sb is readonly
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00700|binding|INFO|Removing iface tap0226697b-95 ovn-installed in OVS
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00701|binding|INFO|Releasing lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 from this chassis (sb_readonly=0)
Feb 28 10:14:54 compute-0 ovn_controller[146846]: 2026-02-28T10:14:54Z|00702|binding|INFO|Setting lport 0226697b-95b2-4303-aa60-b98eb0bb4cd9 down in Southbound
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.570 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:1d:4f 10.100.0.13'], port_security=['fa:16:3e:70:1d:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ba33446e-fcd5-454c-bc8c-79a367002d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '6', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0226697b-95b2-4303-aa60-b98eb0bb4cd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927-userdata-shm.mount: Deactivated successfully.
Feb 28 10:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8037661ea88197897d9b5ce5e2095358c49702921dd5e2e5c0833d172654169-merged.mount: Deactivated successfully.
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.587 243456 INFO nova.virt.libvirt.driver [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Instance destroyed successfully.
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.588 243456 DEBUG nova.objects.instance [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid ba33446e-fcd5-454c-bc8c-79a367002d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:54 compute-0 podman[309931]: 2026-02-28 10:14:54.593672298 +0000 UTC m=+0.126236249 container cleanup e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.603 243456 DEBUG nova.virt.libvirt.vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-877654664',display_name='tempest-ServerDiskConfigTestJSON-server-877654664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-877654664',id=76,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-rxssdzd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:51Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=ba33446e-fcd5-454c-bc8c-79a367002d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.604 243456 DEBUG nova.network.os_vif_util [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "address": "fa:16:3e:70:1d:4f", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0226697b-95", "ovs_interfaceid": "0226697b-95b2-4303-aa60-b98eb0bb4cd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:14:54 compute-0 systemd[1]: libpod-conmon-e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927.scope: Deactivated successfully.
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.605 243456 DEBUG nova.network.os_vif_util [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.606 243456 DEBUG os_vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0226697b-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.614 243456 INFO os_vif [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:1d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0226697b-95b2-4303-aa60-b98eb0bb4cd9,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0226697b-95')
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.671 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273679.6692448, 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.671 243456 INFO nova.compute.manager [-] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] VM Stopped (Lifecycle Event)
Feb 28 10:14:54 compute-0 podman[309980]: 2026-02-28 10:14:54.677795424 +0000 UTC m=+0.054141993 container remove e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b73eb1cc-f3db-4b2b-9865-c2f617bdd9b8]: (4, ('Sat Feb 28 10:14:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927)\ne3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927\nSat Feb 28 10:14:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (e3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927)\ne3cfa48ab4b65c1060784c4cf581530c9d7b336c9a553e9a0aefaaa646d22927\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.686 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7cc4e6-fbab-41f6-a2fa-f0d57e405400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.687 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.689 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.693 243456 DEBUG nova.compute.manager [None req-9661c424-9811-4e9a-9512-4707e4c8023d - - - - - -] [instance: 40f8f3fa-1f1c-440e-a640-5a223b1ca9b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.703 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b03a1e-ec21-4a6e-a616-aa55fb3d9ace]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.720 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[191571a3-b253-43e6-b454-645019824929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.722 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[84c242c5-2dd6-4f8f-b9df-863260db2575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.738 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[82e2ced1-4547-4b00-b202-b0c1354cd3d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519781, 'reachable_time': 24502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310013, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.740 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.741 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[092e7a0c-e395-4a63-887b-40335d9e04b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.742 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e336ef5a-d0dd-443e-b8d2-914a9a73e565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.757 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap269fae56-41 in ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.759 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap269fae56-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d2b8dd-5644-4c6f-92e6-f685b81bcf69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9efa259c-0523-4fb4-a007-3d1dc2a692c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.770 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3f75d81d-17d3-4e53-8bbd-91c6f07bf6b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.795 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f652b458-cad5-43c3-b2ef-5c09c2d4a041]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.819 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9ecf16-41ef-47ac-8bb1-241d5c329cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.826 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a339077-f513-4a3b-b5b0-85e48421f4c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.8275] manager: (tap269fae56-40): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Feb 28 10:14:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.857 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dd5562-5d41-4cd1-9b45-3138e8abc2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.861 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[356e5f46-878d-49da-9a16-d20d60524d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 NetworkManager[49805]: <info>  [1772273694.8831] device (tap269fae56-40): carrier: link connected
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.885 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f01b52f7-636c-4b50-8cc4-be59f1134451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6973a86a-6c7f-4f18-b4de-2fdfb0ad6e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520216, 'reachable_time': 21260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310075, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.916 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22db6c6a-de7b-494b-8ff3-32ff3efec31e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:282c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520216, 'tstamp': 520216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310079, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edb57b6a-d3ee-4275-a346-0d7e43b3de2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap269fae56-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:28:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520216, 'reachable_time': 21260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310081, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.938 243456 INFO nova.virt.libvirt.driver [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deleting instance files /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.938 243456 INFO nova.virt.libvirt.driver [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deletion of /var/lib/nova/instances/ba33446e-fcd5-454c-bc8c-79a367002d57_del complete
Feb 28 10:14:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:54.960 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a64f82-86d6-4262-bf5c-6c7c34833919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.984 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 32fe69ba-ea8d-411e-8917-de872b62b8b0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.984 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273694.9833775, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Resumed (Lifecycle Event)
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.993 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance running successfully.
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.994 243456 INFO nova.virt.libvirt.driver [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance soft rebooted successfully.
Feb 28 10:14:54 compute-0 nova_compute[243452]: 2026-02-28 10:14:54.994 243456 DEBUG nova.compute.manager [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.003 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebbf6c5-ffe3-457c-b7b3-e8b972805eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.004 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.004 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.005 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap269fae56-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:55 compute-0 kernel: tap269fae56-40: entered promiscuous mode
Feb 28 10:14:55 compute-0 NetworkManager[49805]: <info>  [1772273695.0070] manager: (tap269fae56-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.008 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap269fae56-40, col_values=(('external_ids', {'iface-id': '7bc082a7-4576-4494-b633-962a40b4d816'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:14:55 compute-0 ovn_controller[146846]: 2026-02-28T10:14:55Z|00703|binding|INFO|Releasing lport 7bc082a7-4576-4494-b633-962a40b4d816 from this chassis (sb_readonly=0)
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.010 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff608a-8afe-43d9-bb3d-6fd04a989283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.012 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/269fae56-42c3-478e-88d5-36164c0a6ae4.pid.haproxy
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 269fae56-42c3-478e-88d5-36164c0a6ae4
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.013 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'env', 'PROCESS_TAG=haproxy-269fae56-42c3-478e-88d5-36164c0a6ae4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/269fae56-42c3-478e-88d5-36164c0a6ae4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.024 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.029 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.045 243456 INFO nova.compute.manager [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 0.73 seconds to destroy the instance on the hypervisor.
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.045 243456 DEBUG oslo.service.loopingcall [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.046 243456 DEBUG nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.046 243456 DEBUG nova.network.neutron [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.057 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] During sync_power_state the instance has a pending task (reboot_started). Skip.
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.058 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273694.9852946, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.058 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Started (Lifecycle Event)
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.069 243456 DEBUG oslo_concurrency.lockutils [None req-1c4bb949-17be-4600-aca4-b8c68b65a3ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.077 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.080 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:14:55 compute-0 ceph-mon[76304]: pgmap v1470: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 6.8 MiB/s wr, 233 op/s
Feb 28 10:14:55 compute-0 podman[310114]: 2026-02-28 10:14:55.354970445 +0000 UTC m=+0.056533991 container create 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:14:55 compute-0 systemd[1]: Started libpod-conmon-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope.
Feb 28 10:14:55 compute-0 podman[310114]: 2026-02-28 10:14:55.323048157 +0000 UTC m=+0.024611783 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:14:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:14:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a55a22fbbac4fb5440ec404e2752fc098d104a50be06532a7ca6329c7d281/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:14:55 compute-0 podman[310114]: 2026-02-28 10:14:55.443245347 +0000 UTC m=+0.144808913 container init 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:14:55 compute-0 podman[310114]: 2026-02-28 10:14:55.449730069 +0000 UTC m=+0.151293635 container start 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:14:55 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : New worker (310135) forked
Feb 28 10:14:55 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : Loading success.
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.520 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.523 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.524 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5b4e05-b666-4644-84f5-0254b3e3155c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.525 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0226697b-95b2-4303-aa60-b98eb0bb4cd9 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.528 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:14:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:55.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77a53553-5cc9-4864-aa59-4840f86c0894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.823 243456 DEBUG nova.network.neutron [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.853 243456 INFO nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Took 0.81 seconds to deallocate network for instance.
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:55 compute-0 nova_compute[243452]: 2026-02-28 10:14:55.907 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.007 243456 DEBUG oslo_concurrency.processutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 344 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Feb 28 10:14:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003900474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.592 243456 DEBUG oslo_concurrency.processutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.602 243456 DEBUG nova.compute.provider_tree [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.622 243456 DEBUG nova.scheduler.client.report [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.650 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.686 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.687 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.687 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.688 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.688 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.689 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.689 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.690 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.691 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.691 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.692 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.692 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.693 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.693 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.694 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.695 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.695 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.696 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.696 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.697 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.697 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.698 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.698 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.699 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.699 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.700 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.701 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.702 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.702 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.703 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state active and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.703 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.704 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.704 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.705 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.706 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.707 243456 DEBUG oslo_concurrency.lockutils [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.707 243456 DEBUG nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.708 243456 WARNING nova.compute.manager [req-8953cec0-02e9-4870-8738-8102bc968c31 req-320ad8b2-fa74-481e-a539-62044c373d37 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.710 243456 INFO nova.scheduler.client.report [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance ba33446e-fcd5-454c-bc8c-79a367002d57
Feb 28 10:14:56 compute-0 nova_compute[243452]: 2026-02-28 10:14:56.785 243456 DEBUG oslo_concurrency.lockutils [None req-7bddc39c-e53f-4ba1-a078-88fc40590819 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:57 compute-0 ceph-mon[76304]: pgmap v1471: 305 pgs: 305 active+clean; 344 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Feb 28 10:14:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3003900474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.481 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.482 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.500 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.562 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.562 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.574 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.574 243456 INFO nova.compute.claims [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.695 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:14:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.857 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.858 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.875 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:14:57 compute-0 nova_compute[243452]: 2026-02-28 10:14:57.933 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 336 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 198 op/s
Feb 28 10:14:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746732956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.276 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.283 243456 DEBUG nova.compute.provider_tree [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.306 243456 DEBUG nova.scheduler.client.report [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1746732956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.330 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.331 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.335 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.345 243456 INFO nova.compute.claims [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.412 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.413 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.443 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.464 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:14:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.554 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.588 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.590 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.590 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating image(s)
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.614 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.643 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.671 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.676 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.757 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.758 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.758 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.759 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.794 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.798 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:58 compute-0 nova_compute[243452]: 2026-02-28 10:14:58.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:14:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480607201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.110 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.141 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.178 243456 DEBUG nova.compute.provider_tree [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.184 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.259 243456 DEBUG nova.policy [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14b2d28379164786ad68563acb83a50a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd70835696bf4e12a062516e9de5527d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.263 243456 DEBUG nova.scheduler.client.report [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.267 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 WARNING nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-unplugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-deleted-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.268 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG oslo_concurrency.lockutils [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ba33446e-fcd5-454c-bc8c-79a367002d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 DEBUG nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] No waiting events found dispatching network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.269 243456 WARNING nova.compute.manager [req-6ea68d2d-bf7e-4b6d-bdae-dc74b730eb0f req-74cc4f67-380e-4a51-9981-090c5ac04396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Received unexpected event network-vif-plugged-0226697b-95b2-4303-aa60-b98eb0bb4cd9 for instance with vm_state deleted and task_state None.
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.275 243456 DEBUG nova.objects.instance [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.290 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ensure instance console log exists: /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.291 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.292 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.293 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.294 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:14:59 compute-0 ceph-mon[76304]: pgmap v1472: 305 pgs: 305 active+clean; 336 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 198 op/s
Feb 28 10:14:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3480607201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.336 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.337 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.352 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.368 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.440 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.441 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.441 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating image(s)
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.460 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.480 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.502 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.508 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.532 243456 DEBUG nova.policy [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6b5724da2e648fd85fd8cb293525967', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.564 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.565 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.565 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.566 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.586 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.590 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a28c23bd-34cb-4189-9cca-778178eb41b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.891 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a28c23bd-34cb-4189-9cca-778178eb41b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:14:59 compute-0 nova_compute[243452]: 2026-02-28 10:14:59.982 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] resizing rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.091 243456 DEBUG nova.objects.instance [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'migration_context' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.107 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.107 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Ensure instance console log exists: /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.108 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.179 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Successfully created port: d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 330 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:00 compute-0 nova_compute[243452]: 2026-02-28 10:15:00.703 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Successfully created port: f6d838dc-126d-40e0-bd84-54c611b21b22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:01 compute-0 ceph-mon[76304]: pgmap v1473: 305 pgs: 305 active+clean; 330 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.393 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Successfully updated port: d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.409 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.410 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquired lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.410 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.493 243456 DEBUG nova.compute.manager [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-changed-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.494 243456 DEBUG nova.compute.manager [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Refreshing instance network info cache due to event network-changed-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.494 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.576 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.584 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Successfully updated port: f6d838dc-126d-40e0-bd84-54c611b21b22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.602 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.603 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquired lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:01 compute-0 nova_compute[243452]: 2026-02-28 10:15:01.603 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:02 compute-0 nova_compute[243452]: 2026-02-28 10:15:02.186 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 346 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 213 op/s
Feb 28 10:15:02 compute-0 nova_compute[243452]: 2026-02-28 10:15:02.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:03 compute-0 ceph-mon[76304]: pgmap v1474: 305 pgs: 305 active+clean; 346 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 213 op/s
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.511 243456 DEBUG nova.network.neutron [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updating instance_info_cache with network_info: [{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.558 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Releasing lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.558 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance network_info: |[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.559 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.559 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Refreshing network info cache for port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.562 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start _get_guest_xml network_info=[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.567 243456 WARNING nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.575 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.576 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.585 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.586 243456 DEBUG nova.virt.libvirt.host [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.587 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.587 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.588 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.589 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.590 243456 DEBUG nova.virt.hardware [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.592 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.643 243456 DEBUG nova.compute.manager [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-changed-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.644 243456 DEBUG nova.compute.manager [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Refreshing instance network info cache due to event network-changed-f6d838dc-126d-40e0-bd84-54c611b21b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.644 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:03 compute-0 nova_compute[243452]: 2026-02-28 10:15:03.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837647096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 391 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 226 op/s
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.196 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.220 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.229 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.283 243456 DEBUG nova.network.neutron [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.320 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Releasing lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.321 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance network_info: |[{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.321 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.322 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Refreshing network info cache for port f6d838dc-126d-40e0-bd84-54c611b21b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.325 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start _get_guest_xml network_info=[{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.331 243456 WARNING nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.336 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.336 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.341 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.341 243456 DEBUG nova.virt.libvirt.host [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.342 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.343 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.344 243456 DEBUG nova.virt.hardware [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.348 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/837647096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:04.729 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:04.730 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:15:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018996537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.802 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.804 243456 DEBUG nova.virt.libvirt.vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsT
estOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:58Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.805 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.806 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.807 243456 DEBUG nova.objects.instance [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.827 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <uuid>ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</uuid>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <name>instance-0000004f</name>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-542187541</nova:name>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:03</nova:creationTime>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <nova:port uuid="d7f6883b-88ea-45f6-a85b-7fe7dd5cf814">
Feb 28 10:15:04 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="serial">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="uuid">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk">
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config">
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:04 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:87:04:95"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <target dev="tapd7f6883b-88"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log" append="off"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:04 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:04 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:04 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:04 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:04 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Preparing to wait for external event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.828 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.829 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.829 243456 DEBUG nova.virt.libvirt.vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:58Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.830 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.831 243456 DEBUG nova.network.os_vif_util [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.831 243456 DEBUG os_vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.832 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.833 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.835 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7f6883b-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.836 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7f6883b-88, col_values=(('external_ids', {'iface-id': 'd7f6883b-88ea-45f6-a85b-7fe7dd5cf814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:04:95', 'vm-uuid': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:04 compute-0 NetworkManager[49805]: <info>  [1772273704.8386] manager: (tapd7f6883b-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.849 243456 INFO os_vif [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')
Feb 28 10:15:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879990967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.908 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.908 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.909 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:87:04:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.910 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Using config drive
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.933 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.939 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.960 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:04 compute-0 nova_compute[243452]: 2026-02-28 10:15:04.963 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:05 compute-0 ceph-mon[76304]: pgmap v1475: 305 pgs: 305 active+clean; 391 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 226 op/s
Feb 28 10:15:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3018996537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2879990967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829889935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.586 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.589 243456 DEBUG nova.virt.libvirt.vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:59Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.589 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.590 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.592 243456 DEBUG nova.objects.instance [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'pci_devices' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.609 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <uuid>a28c23bd-34cb-4189-9cca-778178eb41b1</uuid>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <name>instance-00000050</name>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-228209281</nova:name>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:04</nova:creationTime>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:user uuid="c6b5724da2e648fd85fd8cb293525967">tempest-ServerDiskConfigTestJSON-1778232696-project-member</nova:user>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:project uuid="92b324a375ad4f198dc44d31a0e0a6eb">tempest-ServerDiskConfigTestJSON-1778232696</nova:project>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <nova:port uuid="f6d838dc-126d-40e0-bd84-54c611b21b22">
Feb 28 10:15:05 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="serial">a28c23bd-34cb-4189-9cca-778178eb41b1</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="uuid">a28c23bd-34cb-4189-9cca-778178eb41b1</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a28c23bd-34cb-4189-9cca-778178eb41b1_disk">
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config">
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:05 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:25:80:96"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <target dev="tapf6d838dc-12"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/console.log" append="off"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:05 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:05 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:05 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:05 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:05 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.610 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Preparing to wait for external event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.610 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.611 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.611 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.612 243456 DEBUG nova.virt.libvirt.vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:14:59Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.612 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.613 243456 DEBUG nova.network.os_vif_util [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.613 243456 DEBUG os_vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.615 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.616 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d838dc-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.628 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6d838dc-12, col_values=(('external_ids', {'iface-id': 'f6d838dc-126d-40e0-bd84-54c611b21b22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:80:96', 'vm-uuid': 'a28c23bd-34cb-4189-9cca-778178eb41b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:05 compute-0 NetworkManager[49805]: <info>  [1772273705.6323] manager: (tapf6d838dc-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.641 243456 INFO os_vif [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12')
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.683 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating config drive at /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.690 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2cv0e20j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.760 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.762 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.763 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] No VIF found with MAC fa:16:3e:25:80:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.765 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Using config drive
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.790 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.840 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2cv0e20j" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.869 243456 DEBUG nova.storage.rbd_utils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.874 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.904 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updated VIF entry in instance network info cache for port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.904 243456 DEBUG nova.network.neutron [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updating instance_info_cache with network_info: [{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:05 compute-0 nova_compute[243452]: 2026-02-28 10:15:05.924 243456 DEBUG oslo_concurrency.lockutils [req-42bafc81-ed81-4d58-a2cb-06f678fbee89 req-53279fdc-86c7-4b08-a408-15e2d7b714a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.040 243456 DEBUG oslo_concurrency.processutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.040 243456 INFO nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting local config drive /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config because it was imported into RBD.
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.0881] manager: (tapd7f6883b-88): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Feb 28 10:15:06 compute-0 kernel: tapd7f6883b-88: entered promiscuous mode
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00704|binding|INFO|Claiming lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for this chassis.
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00705|binding|INFO|d7f6883b-88ea-45f6-a85b-7fe7dd5cf814: Claiming fa:16:3e:87:04:95 10.100.0.9
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00706|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 ovn-installed in OVS
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00707|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 up in Southbound
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.109 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.116 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.119 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:15:06 compute-0 systemd-udevd[310764]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.141 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04b675b9-4867-4237-991b-6d929c890730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 systemd-machined[209480]: New machine qemu-91-instance-0000004f.
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.1485] device (tapd7f6883b-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.1492] device (tapd7f6883b-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:06 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-0000004f.
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.159 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.176 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60468ba3-6cb2-405a-a8cd-6b435e7fa9df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.181 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[26e2891e-48a6-4ec3-a8fe-b4f6c69c6a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.206 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c26bc0ec-ee01-4147-89aa-7dbef205064f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e31bdaa-cf04-4933-8828-b78918c2f33c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310777, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.238 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b97f28d8-b31c-4a73-86f5-991eadcc805f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310779, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310779, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.241 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.245 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Creating config drive at /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.250 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.250 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6xzm38yg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.251 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.295 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updated VIF entry in instance network info cache for port f6d838dc-126d-40e0-bd84-54c611b21b22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.296 243456 DEBUG nova.network.neutron [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [{"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.320 243456 DEBUG oslo_concurrency.lockutils [req-ea430fe9-dfd1-48d5-98fd-ed8243c260bf req-4ceb00dd-8ebe-47d0-b50d-181c78873b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a28c23bd-34cb-4189-9cca-778178eb41b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.402 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6xzm38yg" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.440 243456 DEBUG nova.storage.rbd_utils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] rbd image a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.445 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.494 243456 DEBUG nova.compute.manager [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.495 243456 DEBUG oslo_concurrency.lockutils [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.496 243456 DEBUG nova.compute.manager [req-4ebee71b-4cef-421d-866f-0cfd70b83141 req-0855b919-0df8-4a97-a7a7-a3e156bdc60d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Processing event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3829889935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:06 compute-0 ceph-mon[76304]: pgmap v1476: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.626 243456 DEBUG oslo_concurrency.processutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config a28c23bd-34cb-4189-9cca-778178eb41b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.628 243456 INFO nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deleting local config drive /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1/disk.config because it was imported into RBD.
Feb 28 10:15:06 compute-0 kernel: tapf6d838dc-12: entered promiscuous mode
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.6755] manager: (tapf6d838dc-12): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Feb 28 10:15:06 compute-0 systemd-udevd[310767]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00708|binding|INFO|Claiming lport f6d838dc-126d-40e0-bd84-54c611b21b22 for this chassis.
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00709|binding|INFO|f6d838dc-126d-40e0-bd84-54c611b21b22: Claiming fa:16:3e:25:80:96 10.100.0.5
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00710|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 ovn-installed in OVS
Feb 28 10:15:06 compute-0 ovn_controller[146846]: 2026-02-28T10:15:06Z|00711|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 up in Southbound
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.690 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:80:96 10.100.0.5'], port_security=['fa:16:3e:25:80:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a28c23bd-34cb-4189-9cca-778178eb41b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6d838dc-126d-40e0-bd84-54c611b21b22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.692 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6d838dc-126d-40e0-bd84-54c611b21b22 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 bound to our chassis
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.695 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.6977] device (tapf6d838dc-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.6986] device (tapf6d838dc-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:06 compute-0 nova_compute[243452]: 2026-02-28 10:15:06.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[040f8b72-99b9-4966-a04a-2f5d54fed6df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77a5b13a-e1 in ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.710 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77a5b13a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b038a428-5dc9-4f16-b3b1-7880f3f09f89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7586227d-6cb9-47b0-a4a6-dbf70b89c478]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 systemd-machined[209480]: New machine qemu-92-instance-00000050.
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.727 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[881803f0-bcf1-4db0-b38e-ea3530318192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-00000050.
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.744 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1841cf-f31a-4739-9a30-9f2ef77dfd19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.777 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[145379cc-141a-4b0f-816f-defe6e0e6188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.7846] manager: (tap77a5b13a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc1013a-df95-45ff-b6d2-9179058899ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.818 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[19252e7c-0950-49a0-b47f-c2c8dbfcef73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f775d1a0-c7a6-435b-8ba1-b956b8b1206d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 NetworkManager[49805]: <info>  [1772273706.8428] device (tap77a5b13a-e0): carrier: link connected
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.848 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ab581e-6acb-4772-9c60-4f7fc48895f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.867 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9eae520-2413-4ec7-862b-9219853dd3d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521412, 'reachable_time': 38514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310864, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8d114a-0e23-45b3-9fc6-9682518115b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:aeac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521412, 'tstamp': 521412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310865, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.906 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65dae70a-62ce-4e9a-b81d-88856ee775d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77a5b13a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ae:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521412, 'reachable_time': 38514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310866, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:06 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:06.943 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f02790de-4417-485a-964d-16e9c0018dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb050cad-7608-4897-a4ac-d53fdcc94727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.023 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.023 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.024 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a5b13a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:07 compute-0 NetworkManager[49805]: <info>  [1772273707.0277] manager: (tap77a5b13a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Feb 28 10:15:07 compute-0 kernel: tap77a5b13a-e0: entered promiscuous mode
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77a5b13a-e0, col_values=(('external_ids', {'iface-id': '5829ec02-3925-4479-9cc6-4b24ee8cfe06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:07 compute-0 ovn_controller[146846]: 2026-02-28T10:15:07Z|00712|binding|INFO|Releasing lport 5829ec02-3925-4479-9cc6-4b24ee8cfe06 from this chassis (sb_readonly=0)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.048 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a879ea5-8c52-40ec-9566-3deb96d00b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.051 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/77a5b13a-ec2d-4bde-b8f1-201557ef8008.pid.haproxy
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 77a5b13a-ec2d-4bde-b8f1-201557ef8008
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:15:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:07.054 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'env', 'PROCESS_TAG=haproxy-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77a5b13a-ec2d-4bde-b8f1-201557ef8008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.124 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.125 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.1233804, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.125 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Started (Lifecycle Event)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.135 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.140 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance spawned successfully.
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.141 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.147 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.151 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.160 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.161 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.161 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.162 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.163 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.163 243456 DEBUG nova.virt.libvirt.driver [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.169 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.170 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.129277, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.170 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Paused (Lifecycle Event)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.192 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.205 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.1351213, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.206 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Resumed (Lifecycle Event)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.222 243456 INFO nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 8.63 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.222 243456 DEBUG nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.231 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.234 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.259 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.298 243456 INFO nova.compute.manager [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 9.76 seconds to build instance.
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.316 243456 DEBUG oslo_concurrency.lockutils [None req-83faff03-1c48-4592-bbd7-f1bb8c21b17a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:07 compute-0 podman[310940]: 2026-02-28 10:15:07.454577503 +0000 UTC m=+0.064641069 container create 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:15:07 compute-0 systemd[1]: Started libpod-conmon-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope.
Feb 28 10:15:07 compute-0 podman[310940]: 2026-02-28 10:15:07.415080272 +0000 UTC m=+0.025143848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:15:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08db0d9b3cbf48242c2a9835ca420d7ec1169862bc5ffbff25f96cc7a4c202d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:07 compute-0 podman[310940]: 2026-02-28 10:15:07.578158348 +0000 UTC m=+0.188222224 container init 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:15:07 compute-0 podman[310954]: 2026-02-28 10:15:07.582042417 +0000 UTC m=+0.084037754 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:15:07 compute-0 podman[310940]: 2026-02-28 10:15:07.588854558 +0000 UTC m=+0.198918134 container start 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:15:07 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : New worker (311003) forked
Feb 28 10:15:07 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : Loading success.
Feb 28 10:15:07 compute-0 podman[310953]: 2026-02-28 10:15:07.617878204 +0000 UTC m=+0.116572278 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 10:15:07 compute-0 ovn_controller[146846]: 2026-02-28T10:15:07Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:90:03 10.100.0.11
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.865 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.8645816, a28c23bd-34cb-4189-9cca-778178eb41b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.866 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Started (Lifecycle Event)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.895 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.899 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273707.865, a28c23bd-34cb-4189-9cca-778178eb41b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.900 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Paused (Lifecycle Event)
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.923 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.927 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:07 compute-0 nova_compute[243452]: 2026-02-28 10:15:07.953 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 10:15:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:08 compute-0 nova_compute[243452]: 2026-02-28 10:15:08.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:09 compute-0 ceph-mon[76304]: pgmap v1477: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.266 243456 DEBUG nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG oslo_concurrency.lockutils [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.267 243456 DEBUG nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.268 243456 WARNING nova.compute.manager [req-b5d12f8d-209f-4f9f-b5dd-e9ac2bb5b9a8 req-c5225c30-b68a-4957-b48e-2a5e9cdb9a11 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state active and task_state None.
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.489 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.489 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.511 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.579 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273694.5728126, ba33446e-fcd5-454c-bc8c-79a367002d57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.579 243456 INFO nova.compute.manager [-] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] VM Stopped (Lifecycle Event)
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.602 243456 DEBUG nova.compute.manager [None req-6aa6c024-43ae-4fb8-97a5-2d81873f54b7 - - - - - -] [instance: ba33446e-fcd5-454c-bc8c-79a367002d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.604 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.604 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.612 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.613 243456 INFO nova.compute.claims [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:09 compute-0 nova_compute[243452]: 2026-02-28 10:15:09.781 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.026 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.027 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.028 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.034 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.035 243456 DEBUG nova.objects.instance [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'flavor' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.072 243456 DEBUG nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:15:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Feb 28 10:15:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139855722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.360 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.367 243456 DEBUG nova.compute.provider_tree [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.380 243456 DEBUG nova.scheduler.client.report [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.403 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.405 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.458 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.459 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.480 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.503 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.593 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.594 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.595 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating image(s)
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.621 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.654 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.692 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.702 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:10.732 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.807 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.808 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.809 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.834 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:10 compute-0 nova_compute[243452]: 2026-02-28 10:15:10.840 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.187 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.281 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] resizing rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:11 compute-0 ceph-mon[76304]: pgmap v1478: 305 pgs: 305 active+clean; 405 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Feb 28 10:15:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1139855722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.331 243456 DEBUG nova.policy [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ef51521ffc947cbbce8323ec2b71753', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.398 243456 DEBUG nova.objects.instance [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'migration_context' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.418 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.419 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Ensure instance console log exists: /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.419 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.421 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.423 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.448 243456 DEBUG nova.compute.manager [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.448 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.449 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.449 243456 DEBUG oslo_concurrency.lockutils [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.450 243456 DEBUG nova.compute.manager [req-ee6a59d9-12fc-4561-9f94-3da599168bca req-6e5c8fd7-32b2-47d2-96d2-8fcca41e3853 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Processing event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.451 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.455 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.460 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273711.4603863, a28c23bd-34cb-4189-9cca-778178eb41b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.461 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Resumed (Lifecycle Event)
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.468 243456 INFO nova.virt.libvirt.driver [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance spawned successfully.
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.469 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.490 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.498 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.502 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.503 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.503 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.504 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.505 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.505 243456 DEBUG nova.virt.libvirt.driver [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.538 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.566 243456 INFO nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 12.13 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.567 243456 DEBUG nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.621 243456 INFO nova.compute.manager [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 13.70 seconds to build instance.
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.636 243456 DEBUG oslo_concurrency.lockutils [None req-0ad4b842-1c1c-4543-a151-f8dda3144a6a c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:11 compute-0 nova_compute[243452]: 2026-02-28 10:15:11.848 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Successfully created port: 52f49649-6181-4c24-95b7-fc7227858c70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 405 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.234 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Successfully updated port: 52f49649-6181-4c24-95b7-fc7227858c70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.265 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.266 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.266 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:13 compute-0 ceph-mon[76304]: pgmap v1479: 305 pgs: 305 active+clean; 405 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.419 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.511 243456 DEBUG nova.compute.manager [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-changed-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.512 243456 DEBUG nova.compute.manager [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing instance network info cache due to event network-changed-52f49649-6181-4c24-95b7-fc7227858c70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.512 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.573 243456 DEBUG nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.573 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG oslo_concurrency.lockutils [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 DEBUG nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.574 243456 WARNING nova.compute.manager [req-3b313b82-e7aa-4d70-b938-8e416ff2e478 req-5ed7e7df-9239-40e4-b1ef-561e347790ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received unexpected event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with vm_state active and task_state None.
Feb 28 10:15:13 compute-0 nova_compute[243452]: 2026-02-28 10:15:13.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 411 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Feb 28 10:15:14 compute-0 nova_compute[243452]: 2026-02-28 10:15:14.443 243456 INFO nova.compute.manager [None req-eef97643-ce1c-4ec0-8954-c239e2561d18 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Get console output
Feb 28 10:15:14 compute-0 nova_compute[243452]: 2026-02-28 10:15:14.452 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:15 compute-0 ceph-mon[76304]: pgmap v1480: 305 pgs: 305 active+clean; 411 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.422 243456 DEBUG nova.network.neutron [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.453 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.455 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance network_info: |[{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.456 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.457 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.459 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start _get_guest_xml network_info=[{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.464 243456 WARNING nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.470 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.470 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.477 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.478 243456 DEBUG nova.virt.libvirt.host [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.478 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.479 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.479 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.480 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.481 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.482 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.482 243456 DEBUG nova.virt.hardware [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.485 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.877 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.879 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.879 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.880 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.881 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.885 243456 INFO nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Terminating instance
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.888 243456 DEBUG nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:15:15 compute-0 kernel: tap6b5acb8c-5d (unregistering): left promiscuous mode
Feb 28 10:15:15 compute-0 NetworkManager[49805]: <info>  [1772273715.9378] device (tap6b5acb8c-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.935 243456 DEBUG nova.compute.manager [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.941 243456 DEBUG nova.compute.manager [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing instance network info cache due to event network-changed-6b5acb8c-5d09-42b0-9c1d-b51be18712fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.942 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.942 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.943 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Refreshing network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:15 compute-0 ovn_controller[146846]: 2026-02-28T10:15:15Z|00713|binding|INFO|Releasing lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe from this chassis (sb_readonly=0)
Feb 28 10:15:15 compute-0 ovn_controller[146846]: 2026-02-28T10:15:15Z|00714|binding|INFO|Setting lport 6b5acb8c-5d09-42b0-9c1d-b51be18712fe down in Southbound
Feb 28 10:15:15 compute-0 ovn_controller[146846]: 2026-02-28T10:15:15Z|00715|binding|INFO|Removing iface tap6b5acb8c-5d ovn-installed in OVS
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.954 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.961 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:90:03 10.100.0.11'], port_security=['fa:16:3e:88:90:03 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32fe69ba-ea8d-411e-8917-de872b62b8b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-269fae56-42c3-478e-88d5-36164c0a6ae4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '61546fe4-ca04-44ca-b6ae-d1b7a21ab6e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8585beb-af2c-4eb8-805b-2614cf37e3d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6b5acb8c-5d09-42b0-9c1d-b51be18712fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe in datapath 269fae56-42c3-478e-88d5-36164c0a6ae4 unbound from our chassis
Feb 28 10:15:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 269fae56-42c3-478e-88d5-36164c0a6ae4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:15:15 compute-0 nova_compute[243452]: 2026-02-28 10:15:15.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[495f758c-ce98-4f25-84e7-afd2ea5e68ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:15.966 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 namespace which is not needed anymore
Feb 28 10:15:16 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Feb 28 10:15:16 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004d.scope: Consumed 12.106s CPU time.
Feb 28 10:15:16 compute-0 systemd-machined[209480]: Machine qemu-90-instance-0000004d terminated.
Feb 28 10:15:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973807842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.080 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.115 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.124 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : haproxy version is 2.8.14-c23fe91
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [NOTICE]   (310133) : path to executable is /usr/sbin/haproxy
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : Exiting Master process...
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : Exiting Master process...
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [ALERT]    (310133) : Current worker (310135) exited with code 143 (Terminated)
Feb 28 10:15:16 compute-0 neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4[310129]: [WARNING]  (310133) : All workers exited. Exiting... (0)
Feb 28 10:15:16 compute-0 systemd[1]: libpod-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope: Deactivated successfully.
Feb 28 10:15:16 compute-0 podman[311288]: 2026-02-28 10:15:16.1368248 +0000 UTC m=+0.052220459 container died 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350-userdata-shm.mount: Deactivated successfully.
Feb 28 10:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-207a55a22fbbac4fb5440ec404e2752fc098d104a50be06532a7ca6329c7d281-merged.mount: Deactivated successfully.
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.182 243456 INFO nova.virt.libvirt.driver [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance destroyed successfully.
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.183 243456 DEBUG nova.objects.instance [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid 32fe69ba-ea8d-411e-8917-de872b62b8b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:16 compute-0 podman[311288]: 2026-02-28 10:15:16.186468116 +0000 UTC m=+0.101863775 container cleanup 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:15:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 453 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Feb 28 10:15:16 compute-0 systemd[1]: libpod-conmon-35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350.scope: Deactivated successfully.
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.199 243456 DEBUG nova.virt.libvirt.vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1765724638',display_name='tempest-TestNetworkAdvancedServerOps-server-1765724638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1765724638',id=77,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuYc1DeElc1rkkDaA1+Wg1yBEjd9NzIhoaxm5Bt8KGJiKrnwEv07p/i1kPPlomnS4Xw2edPyOwDq78Zz+5s4I0hLVriZjH75jhUt93INSMBcBhlqyJu7ug5cVivkPNiww==',key_name='tempest-TestNetworkAdvancedServerOps-1506787166',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-0cvc6m44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:55Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=32fe69ba-ea8d-411e-8917-de872b62b8b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.200 243456 DEBUG nova.network.os_vif_util [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.200 243456 DEBUG nova.network.os_vif_util [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.201 243456 DEBUG os_vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b5acb8c-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.216 243456 INFO os_vif [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:90:03,bridge_name='br-int',has_traffic_filtering=True,id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe,network=Network(269fae56-42c3-478e-88d5-36164c0a6ae4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b5acb8c-5d')
Feb 28 10:15:16 compute-0 podman[311347]: 2026-02-28 10:15:16.262409761 +0000 UTC m=+0.050971014 container remove 35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8202075b-ebe7-4957-af13-7cfed3b7795d]: (4, ('Sat Feb 28 10:15:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350)\n35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350\nSat Feb 28 10:15:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 (35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350)\n35501cab5215f007896467321c998467304c73eef09f0896ee0cb30ade075350\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.270 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b993289-d48e-48da-a2dd-223775f4d321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.271 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap269fae56-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 kernel: tap269fae56-40: left promiscuous mode
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.286 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53b53594-6b2f-4234-8b05-80cf5a3a2437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.295 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9d8c7c-f1e8-4cd2-92d8-3ad70e6ef022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[645dd627-d912-4831-8847-198168c1eed2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.314 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d6c7d2-a44d-4421-b5c5-de730e167f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520210, 'reachable_time': 28185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311380, 'error': None, 'target': 'ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d269fae56\x2d42c3\x2d478e\x2d88d5\x2d36164c0a6ae4.mount: Deactivated successfully.
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.318 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-269fae56-42c3-478e-88d5-36164c0a6ae4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:15:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:16.318 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[63b881b0-36ac-492d-b22a-c04452d37c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3973807842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.593 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.594 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.594 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG oslo_concurrency.lockutils [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.595 243456 DEBUG nova.compute.manager [req-d3b8b7ff-465c-4c52-8a17-a7ef07eb8149 req-dfb2eab4-e35c-40ec-9306-86d870628c76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-unplugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.664 243456 INFO nova.virt.libvirt.driver [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deleting instance files /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0_del
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.666 243456 INFO nova.virt.libvirt.driver [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deletion of /var/lib/nova/instances/32fe69ba-ea8d-411e-8917-de872b62b8b0_del complete
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.707 243456 INFO nova.compute.manager [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 0.82 seconds to destroy the instance on the hypervisor.
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.707 243456 DEBUG oslo.service.loopingcall [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.708 243456 DEBUG nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.708 243456 DEBUG nova.network.neutron [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:15:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992169226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.749 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.751 243456 DEBUG nova.virt.libvirt.vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJS
ON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:10Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.752 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.753 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.754 243456 DEBUG nova.objects.instance [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.768 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <uuid>4db5bcd7-8b41-4850-8c88-89ad757c8558</uuid>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <name>instance-00000051</name>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersNegativeTestJSON-server-76744621</nova:name>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:15</nova:creationTime>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:user uuid="7ef51521ffc947cbbce8323ec2b71753">tempest-ServersNegativeTestJSON-621636341-project-member</nova:user>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:project uuid="c0c4bc44c37f4a4f83c83b6105be3190">tempest-ServersNegativeTestJSON-621636341</nova:project>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <nova:port uuid="52f49649-6181-4c24-95b7-fc7227858c70">
Feb 28 10:15:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="serial">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="uuid">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk">
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config">
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:22:e7:39"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <target dev="tap52f49649-61"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log" append="off"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.775 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Preparing to wait for external event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.776 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.776 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.777 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.778 243456 DEBUG nova.virt.libvirt.vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:10Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.778 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.779 243456 DEBUG nova.network.os_vif_util [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.780 243456 DEBUG os_vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.781 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.781 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f49649-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f49649-61, col_values=(('external_ids', {'iface-id': '52f49649-6181-4c24-95b7-fc7227858c70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:e7:39', 'vm-uuid': '4db5bcd7-8b41-4850-8c88-89ad757c8558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 NetworkManager[49805]: <info>  [1772273716.7878] manager: (tap52f49649-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.794 243456 INFO os_vif [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.841 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.842 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.843 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No VIF found with MAC fa:16:3e:22:e7:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.844 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Using config drive
Feb 28 10:15:16 compute-0 nova_compute[243452]: 2026-02-28 10:15:16.871 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:17 compute-0 ceph-mon[76304]: pgmap v1481: 305 pgs: 305 active+clean; 453 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Feb 28 10:15:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/992169226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 425 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 265 op/s
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.267 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating config drive at /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.271 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzgkqovsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.309 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updated VIF entry in instance network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.310 243456 DEBUG nova.network.neutron [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.342 243456 DEBUG oslo_concurrency.lockutils [req-0396b9cd-f2e3-4faa-ad00-9be441db718f req-9cd16c41-a88d-4247-bd82-465336f3ffed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.418 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzgkqovsi" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.448 243456 DEBUG nova.storage.rbd_utils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.453 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.504 243456 DEBUG nova.network.neutron [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.521 243456 INFO nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Took 1.81 seconds to deallocate network for instance.
Feb 28 10:15:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.568 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.569 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.583 243456 DEBUG oslo_concurrency.processutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.585 243456 INFO nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deleting local config drive /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config because it was imported into RBD.
Feb 28 10:15:18 compute-0 kernel: tap52f49649-61: entered promiscuous mode
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.6428] manager: (tap52f49649-61): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Feb 28 10:15:18 compute-0 systemd-udevd[311266]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:18 compute-0 ovn_controller[146846]: 2026-02-28T10:15:18Z|00716|binding|INFO|Claiming lport 52f49649-6181-4c24-95b7-fc7227858c70 for this chassis.
Feb 28 10:15:18 compute-0 ovn_controller[146846]: 2026-02-28T10:15:18Z|00717|binding|INFO|52f49649-6181-4c24-95b7-fc7227858c70: Claiming fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.6558] device (tap52f49649-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.6564] device (tap52f49649-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.657 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.658 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.660 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:15:18 compute-0 ovn_controller[146846]: 2026-02-28T10:15:18Z|00718|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 ovn-installed in OVS
Feb 28 10:15:18 compute-0 ovn_controller[146846]: 2026-02-28T10:15:18Z|00719|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 up in Southbound
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2abb287b-5aec-42b8-8c42-d07e7c64ec86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.671 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce4b855a-c1 in ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.672 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce4b855a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e95ea-f9cd-4e59-a07b-a5234422eef3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.673 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[371b6b89-d7a9-4dc0-ad3c-f4f118220084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.684 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e1fea5-b7b9-4569-970e-bcad45c1cb17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.688 243456 DEBUG oslo_concurrency.processutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:18 compute-0 systemd-machined[209480]: New machine qemu-93-instance-00000051.
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.698 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18f0de95-b5b6-48e3-9fb6-4d8de04bbe28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-00000051.
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.723 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72cd8157-7943-46f7-ad0d-3967eb2c8062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.7333] manager: (tapce4b855a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0691ea80-2970-4b4b-8dcd-8a0cd8c3fde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.771 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7946e6d8-9abc-4421-b6f5-477560575e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.775 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5197006d-f9e9-411a-9c1c-4dfc5fca0fa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.7986] device (tapce4b855a-c0): carrier: link connected
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.802 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[998c1d5f-e277-4367-8f8b-64549c485355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1366d2-3ac6-4b2b-a26f-7403514d04a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311518, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.838 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cafad3a7-ccc0-489a-a89c-cc9a7bd35cac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:cf33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522608, 'tstamp': 522608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311528, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.855 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[240075ff-ee06-4afe-9ea7-8f32da1bdf5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311529, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dd13b1-6f04-4a5c-813e-314242be87af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14182573-a69e-41d9-8c38-0a4432a7adfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 NetworkManager[49805]: <info>  [1772273718.9608] manager: (tapce4b855a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Feb 28 10:15:18 compute-0 kernel: tapce4b855a-c0: entered promiscuous mode
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.965 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.967 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:18 compute-0 ovn_controller[146846]: 2026-02-28T10:15:18Z|00720|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.968 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ceddfde-3fb8-4ef1-930e-7980436ee2cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.970 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:15:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:18.971 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'env', 'PROCESS_TAG=haproxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:15:18 compute-0 nova_compute[243452]: 2026-02-28 10:15:18.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:19 compute-0 ovn_controller[146846]: 2026-02-28T10:15:19Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:04:95 10.100.0.9
Feb 28 10:15:19 compute-0 ovn_controller[146846]: 2026-02-28T10:15:19Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:04:95 10.100.0.9
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.100 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273719.0997443, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.102 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Started (Lifecycle Event)
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273719.0999036, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Paused (Lifecycle Event)
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.154 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.158 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.182 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.217 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updated VIF entry in instance network info cache for port 6b5acb8c-5d09-42b0-9c1d-b51be18712fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.218 243456 DEBUG nova.network.neutron [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Updating instance_info_cache with network_info: [{"id": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "address": "fa:16:3e:88:90:03", "network": {"id": "269fae56-42c3-478e-88d5-36164c0a6ae4", "bridge": "br-int", "label": "tempest-network-smoke--2058090371", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b5acb8c-5d", "ovs_interfaceid": "6b5acb8c-5d09-42b0-9c1d-b51be18712fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257134379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.240 243456 DEBUG oslo_concurrency.lockutils [req-03c0d5a9-13bf-4abe-aa57-1ea2e564c43c req-3b1279c8-a15d-4b8c-8c97-e18d1a66c02a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-32fe69ba-ea8d-411e-8917-de872b62b8b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.255 243456 DEBUG oslo_concurrency.processutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.262 243456 DEBUG nova.compute.provider_tree [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.280 243456 DEBUG nova.scheduler.client.report [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.302 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.335 243456 INFO nova.scheduler.client.report [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance 32fe69ba-ea8d-411e-8917-de872b62b8b0
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.393 243456 DEBUG oslo_concurrency.lockutils [None req-7df36d86-cda7-41f7-aaf1-d0f1ad13d32b 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:19 compute-0 podman[311606]: 2026-02-28 10:15:19.313595154 +0000 UTC m=+0.021721592 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:15:19 compute-0 ceph-mon[76304]: pgmap v1482: 305 pgs: 305 active+clean; 425 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 265 op/s
Feb 28 10:15:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3257134379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:19 compute-0 podman[311606]: 2026-02-28 10:15:19.489958483 +0000 UTC m=+0.198084901 container create f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.531 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.531 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG oslo_concurrency.lockutils [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "32fe69ba-ea8d-411e-8917-de872b62b8b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.532 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] No waiting events found dispatching network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 WARNING nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received unexpected event network-vif-plugged-6b5acb8c-5d09-42b0-9c1d-b51be18712fe for instance with vm_state deleted and task_state None.
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Received event network-vif-deleted-6b5acb8c-5d09-42b0-9c1d-b51be18712fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 INFO nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Neutron deleted interface 6b5acb8c-5d09-42b0-9c1d-b51be18712fe; detaching it from the instance and deleting it from the info cache
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.533 243456 DEBUG nova.network.neutron [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:15:19 compute-0 nova_compute[243452]: 2026-02-28 10:15:19.536 243456 DEBUG nova.compute.manager [req-91496c26-d62e-48c6-a6c5-27bd2217df40 req-f1cc0d54-c483-4a39-908e-866d3493582a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Detach interface failed, port_id=6b5acb8c-5d09-42b0-9c1d-b51be18712fe, reason: Instance 32fe69ba-ea8d-411e-8917-de872b62b8b0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:15:19 compute-0 systemd[1]: Started libpod-conmon-f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e.scope.
Feb 28 10:15:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2721b40a4fc85af083307e4f102b5144e734f21a07b53ee098a9eebd71c6ce49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:19 compute-0 podman[311606]: 2026-02-28 10:15:19.709692 +0000 UTC m=+0.417818448 container init f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:15:19 compute-0 podman[311606]: 2026-02-28 10:15:19.719173337 +0000 UTC m=+0.427299755 container start f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:15:19 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : New worker (311628) forked
Feb 28 10:15:19 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : Loading success.
Feb 28 10:15:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 403 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.200 243456 DEBUG nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.340 243456 DEBUG nova.compute.manager [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.341 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.341 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.342 243456 DEBUG oslo_concurrency.lockutils [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.342 243456 DEBUG nova.compute.manager [req-696fd095-d3ed-4854-ada8-e3bd7f5c7d67 req-a8045f26-82a3-4494-bed2-8ac3056204aa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Processing event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.344 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.348 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273720.347408, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.350 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Resumed (Lifecycle Event)
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.353 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.358 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance spawned successfully.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.359 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.390 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.396 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.396 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.397 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.398 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.399 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.399 243456 DEBUG nova.virt.libvirt.driver [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.437 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:20 compute-0 ceph-mon[76304]: pgmap v1483: 305 pgs: 305 active+clean; 403 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.494 243456 INFO nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 9.90 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.495 243456 DEBUG nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.604 243456 INFO nova.compute.manager [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 11.03 seconds to build instance.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.628 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.629 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.630 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.630 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.631 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.634 243456 INFO nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Terminating instance
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.636 243456 DEBUG nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.639 243456 DEBUG oslo_concurrency.lockutils [None req-06622d4c-d7f0-4873-bffd-1ead5b13b97b 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:20 compute-0 kernel: tapf6d838dc-12 (unregistering): left promiscuous mode
Feb 28 10:15:20 compute-0 NetworkManager[49805]: <info>  [1772273720.6847] device (tapf6d838dc-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:15:20 compute-0 ovn_controller[146846]: 2026-02-28T10:15:20Z|00721|binding|INFO|Releasing lport f6d838dc-126d-40e0-bd84-54c611b21b22 from this chassis (sb_readonly=0)
Feb 28 10:15:20 compute-0 ovn_controller[146846]: 2026-02-28T10:15:20Z|00722|binding|INFO|Setting lport f6d838dc-126d-40e0-bd84-54c611b21b22 down in Southbound
Feb 28 10:15:20 compute-0 ovn_controller[146846]: 2026-02-28T10:15:20Z|00723|binding|INFO|Removing iface tapf6d838dc-12 ovn-installed in OVS
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.708 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:80:96 10.100.0.5'], port_security=['fa:16:3e:25:80:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a28c23bd-34cb-4189-9cca-778178eb41b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '92b324a375ad4f198dc44d31a0e0a6eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73844c74-9151-4190-8457-a421e77e32a1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1433cfb0-2a88-4f7f-838a-e6496a4d73b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6d838dc-126d-40e0-bd84-54c611b21b22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.710 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6d838dc-126d-40e0-bd84-54c611b21b22 in datapath 77a5b13a-ec2d-4bde-b8f1-201557ef8008 unbound from our chassis
Feb 28 10:15:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.711 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77a5b13a-ec2d-4bde-b8f1-201557ef8008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:15:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.712 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[455af9a3-217f-4c1f-8035-35b936cc5b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:20.713 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 namespace which is not needed anymore
Feb 28 10:15:20 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000050.scope: Deactivated successfully.
Feb 28 10:15:20 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d00000050.scope: Consumed 10.215s CPU time.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 systemd-machined[209480]: Machine qemu-92-instance-00000050 terminated.
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : haproxy version is 2.8.14-c23fe91
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [NOTICE]   (310996) : path to executable is /usr/sbin/haproxy
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : Exiting Master process...
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : Exiting Master process...
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [ALERT]    (310996) : Current worker (311003) exited with code 143 (Terminated)
Feb 28 10:15:20 compute-0 neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008[310974]: [WARNING]  (310996) : All workers exited. Exiting... (0)
Feb 28 10:15:20 compute-0 systemd[1]: libpod-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope: Deactivated successfully.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 podman[311658]: 2026-02-28 10:15:20.879264587 +0000 UTC m=+0.051252052 container died 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.884 243456 INFO nova.virt.libvirt.driver [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Instance destroyed successfully.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.885 243456 DEBUG nova.objects.instance [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lazy-loading 'resources' on Instance uuid a28c23bd-34cb-4189-9cca-778178eb41b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.899 243456 DEBUG nova.virt.libvirt.vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-228209281',display_name='tempest-ServerDiskConfigTestJSON-server-228209281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-228209281',id=80,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='92b324a375ad4f198dc44d31a0e0a6eb',ramdisk_id='',reservation_id='r-s88x0ti1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778232696',owner_user_name='tempest-ServerDiskConfigTestJSON-1778232696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:16Z,user_data=None,user_id='c6b5724da2e648fd85fd8cb293525967',uuid=a28c23bd-34cb-4189-9cca-778178eb41b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.899 243456 DEBUG nova.network.os_vif_util [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converting VIF {"id": "f6d838dc-126d-40e0-bd84-54c611b21b22", "address": "fa:16:3e:25:80:96", "network": {"id": "77a5b13a-ec2d-4bde-b8f1-201557ef8008", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1422042369-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "92b324a375ad4f198dc44d31a0e0a6eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6d838dc-12", "ovs_interfaceid": "f6d838dc-126d-40e0-bd84-54c611b21b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.900 243456 DEBUG nova.network.os_vif_util [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.901 243456 DEBUG os_vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.902 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d838dc-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047-userdata-shm.mount: Deactivated successfully.
Feb 28 10:15:20 compute-0 nova_compute[243452]: 2026-02-28 10:15:20.916 243456 INFO os_vif [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:80:96,bridge_name='br-int',has_traffic_filtering=True,id=f6d838dc-126d-40e0-bd84-54c611b21b22,network=Network(77a5b13a-ec2d-4bde-b8f1-201557ef8008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6d838dc-12')
Feb 28 10:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-08db0d9b3cbf48242c2a9835ca420d7ec1169862bc5ffbff25f96cc7a4c202d2-merged.mount: Deactivated successfully.
Feb 28 10:15:20 compute-0 podman[311658]: 2026-02-28 10:15:20.935138868 +0000 UTC m=+0.107126303 container cleanup 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:15:20 compute-0 systemd[1]: libpod-conmon-804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047.scope: Deactivated successfully.
Feb 28 10:15:21 compute-0 podman[311706]: 2026-02-28 10:15:21.014277363 +0000 UTC m=+0.061886531 container remove 804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.024 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[937c142f-fadc-4817-a20c-f529b788f47c]: (4, ('Sat Feb 28 10:15:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047)\n804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047\nSat Feb 28 10:15:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 (804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047)\n804fbb02e6fab1affbb3280fdc28791962de2d21aefc29865266f387c4743047\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c45ef643-9b25-4003-8765-534773be9888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.029 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a5b13a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:21 compute-0 kernel: tap77a5b13a-e0: left promiscuous mode
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68fd9856-2fc2-4df3-8857-140664c587bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f294453-4f18-47b1-86fb-7c618ff775cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.072 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[327e5778-b4ce-4181-bc8e-a1808a50523d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.091 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1978ffe7-60f0-4c9e-824e-db4bca082f8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521405, 'reachable_time': 36571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311724, 'error': None, 'target': 'ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.096 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77a5b13a-ec2d-4bde-b8f1-201557ef8008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:15:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d77a5b13a\x2dec2d\x2d4bde\x2db8f1\x2d201557ef8008.mount: Deactivated successfully.
Feb 28 10:15:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:21.096 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[92251bc2-060f-42cf-bd64-8073c66d3598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.254 243456 INFO nova.virt.libvirt.driver [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deleting instance files /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1_del
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.256 243456 INFO nova.virt.libvirt.driver [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deletion of /var/lib/nova/instances/a28c23bd-34cb-4189-9cca-778178eb41b1_del complete
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.355 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.355 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.356 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.394 243456 INFO nova.compute.manager [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG oslo.service.loopingcall [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG nova.compute.manager [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.395 243456 DEBUG nova.network.neutron [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:15:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429912051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:21 compute-0 nova_compute[243452]: 2026-02-28 10:15:21.917 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1429912051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.024 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.024 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.029 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.030 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.037 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.037 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:15:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 387 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.251 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3344MB free_disk=59.85517138708383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.362 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b883c1a1-cf01-434d-8258-24ca193a2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a28c23bd-34cb-4189-9cca-778178eb41b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4db5bcd7-8b41-4850-8c88-89ad757c8558 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.364 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:15:22 compute-0 kernel: tapd7f6883b-88 (unregistering): left promiscuous mode
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.429 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.429 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.430 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 WARNING nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state None.
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.431 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-unplugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.432 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG oslo_concurrency.lockutils [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.433 243456 DEBUG nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] No waiting events found dispatching network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.434 243456 WARNING nova.compute.manager [req-fe5a6ae0-f806-4dd9-8638-1ab830fd4742 req-08904247-288b-4d73-ba8a-f31c2feb4ee2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received unexpected event network-vif-plugged-f6d838dc-126d-40e0-bd84-54c611b21b22 for instance with vm_state active and task_state deleting.
Feb 28 10:15:22 compute-0 NetworkManager[49805]: <info>  [1772273722.4463] device (tapd7f6883b-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:22 compute-0 ovn_controller[146846]: 2026-02-28T10:15:22Z|00724|binding|INFO|Releasing lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 from this chassis (sb_readonly=0)
Feb 28 10:15:22 compute-0 ovn_controller[146846]: 2026-02-28T10:15:22Z|00725|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 down in Southbound
Feb 28 10:15:22 compute-0 ovn_controller[146846]: 2026-02-28T10:15:22Z|00726|binding|INFO|Removing iface tapd7f6883b-88 ovn-installed in OVS
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.459 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.461 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.462 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.471 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.485 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9da69ecf-188f-412d-9770-7bd72a38596c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 28 10:15:22 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Consumed 12.900s CPU time.
Feb 28 10:15:22 compute-0 systemd-machined[209480]: Machine qemu-91-instance-0000004f terminated.
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.519 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8e46a11e-3908-462c-8ea9-0f484296f822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.524 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c558c585-503e-451f-bb45-339617666e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.552 243456 DEBUG nova.network.neutron [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.552 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[69f0612f-1237-4a1f-a5dc-072200568fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.570 243456 INFO nova.compute.manager [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Took 1.17 seconds to deallocate network for instance.
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.568 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74ea6c56-53ac-4cc3-8800-4c53dafdb1cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311761, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.593 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfe8fc7-4041-4748-9673-94654251a8fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311762, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311762, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.595 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.601 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:22.603 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:22 compute-0 nova_compute[243452]: 2026-02-28 10:15:22.639 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:22 compute-0 ceph-mon[76304]: pgmap v1484: 305 pgs: 305 active+clean; 387 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Feb 28 10:15:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349934823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.054 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.060 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.077 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.102 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.215 243456 INFO nova.virt.libvirt.driver [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance shutdown successfully after 13 seconds.
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.222 243456 DEBUG oslo_concurrency.processutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.268 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.271 243456 DEBUG nova.objects.instance [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'numa_topology' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.288 243456 DEBUG nova.compute.manager [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.319 243456 DEBUG nova.compute.manager [req-196243a5-6032-412f-b463-d43b7392cd61 req-a901539d-6de5-4be9-a6b3-0e77057f9431 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Received event network-vif-deleted-f6d838dc-126d-40e0-bd84-54c611b21b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.345 243456 DEBUG oslo_concurrency.lockutils [None req-891f012c-ec4e-4e5d-9695-05a2cf3043ca 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234714658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.805 243456 DEBUG oslo_concurrency.processutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.810 243456 DEBUG nova.compute.provider_tree [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.823 243456 DEBUG nova.scheduler.client.report [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.844 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.877 243456 INFO nova.scheduler.client.report [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Deleted allocations for instance a28c23bd-34cb-4189-9cca-778178eb41b1
Feb 28 10:15:23 compute-0 nova_compute[243452]: 2026-02-28 10:15:23.934 243456 DEBUG oslo_concurrency.lockutils [None req-f5d7fa86-2f57-4c94-b7c4-8a1f0e9493e4 c6b5724da2e648fd85fd8cb293525967 92b324a375ad4f198dc44d31a0e0a6eb - - default default] Lock "a28c23bd-34cb-4189-9cca-778178eb41b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/349934823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3234714658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.099 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.133 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.134 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.155 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.155 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 378 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 318 op/s
Feb 28 10:15:24 compute-0 ovn_controller[146846]: 2026-02-28T10:15:24Z|00727|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 10:15:24 compute-0 ovn_controller[146846]: 2026-02-28T10:15:24Z|00728|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:15:24 compute-0 nova_compute[243452]: 2026-02-28 10:15:24.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:24 compute-0 ceph-mon[76304]: pgmap v1485: 305 pgs: 305 active+clean; 378 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 318 op/s
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.494 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.494 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 WARNING nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.495 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG oslo_concurrency.lockutils [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.496 243456 DEBUG nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.497 243456 WARNING nova.compute.manager [req-b5743242-0c68-4be1-9822-d3844048019a req-c8556161-7f67-4d04-a0cc-48ae13d71a64 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.583 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.583 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.609 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.701 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.701 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.707 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.708 243456 INFO nova.compute.claims [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.861 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:25 compute-0 nova_compute[243452]: 2026-02-28 10:15:25.966 243456 INFO nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Rebuilding instance
Feb 28 10:15:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 313 op/s
Feb 28 10:15:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704125383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.386 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.391 243456 DEBUG nova.compute.provider_tree [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.436 243456 DEBUG nova.scheduler.client.report [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.459 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.460 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.512 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.513 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.521 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'trusted_certs' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.534 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.541 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.573 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.634 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_requests' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.646 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.665 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.686 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.689 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.689 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating image(s)
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.721 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.757 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.785 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.788 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.815 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.821 243456 DEBUG nova.policy [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ef51521ffc947cbbce8323ec2b71753', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.841 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.845 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance already shutdown.
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.850 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.856 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.857 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:25Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.858 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.859 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.859 243456 DEBUG os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.862 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7f6883b-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.870 243456 INFO os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.891 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.893 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.893 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.894 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.914 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:26 compute-0 nova_compute[243452]: 2026-02-28 10:15:26.917 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c2763dc4-f643-48bd-964a-d4ab75938d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.243 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c2763dc4-f643-48bd-964a-d4ab75938d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:27 compute-0 ceph-mon[76304]: pgmap v1486: 305 pgs: 305 active+clean; 358 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 313 op/s
Feb 28 10:15:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/704125383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.316 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] resizing rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.420 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting instance files /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.422 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deletion of /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del complete
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.433 243456 DEBUG nova.objects.instance [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'migration_context' on Instance uuid c2763dc4-f643-48bd-964a-d4ab75938d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.464 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.464 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Ensure instance console log exists: /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.465 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.693 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.694 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating image(s)
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.718 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.749 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.782 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.787 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.855 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.857 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.858 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.883 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:27 compute-0 nova_compute[243452]: 2026-02-28 10:15:27.887 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 342 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 268 op/s
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.212 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.256 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Successfully created port: 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.310 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.415 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.415 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Ensure instance console log exists: /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.416 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.416 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.417 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.420 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start _get_guest_xml network_info=[{"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.425 243456 WARNING nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.432 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.432 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.436 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.436 243456 DEBUG nova.virt.libvirt.host [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.437 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.437 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.438 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.438 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.439 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.440 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.441 243456 DEBUG nova.virt.hardware [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.441 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'vcpu_model' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.458 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:28 compute-0 nova_compute[243452]: 2026-02-28 10:15:28.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594062940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.010 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.036 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.041 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:15:29
Feb 28 10:15:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:15:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:15:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Feb 28 10:15:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:15:29 compute-0 ceph-mon[76304]: pgmap v1487: 305 pgs: 305 active+clean; 342 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 268 op/s
Feb 28 10:15:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2594062940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.449 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Successfully updated port: 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.467 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.467 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.468 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231944948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.560 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.562 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-Ser的verActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:27Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.562 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.563 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.566 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <uuid>ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</uuid>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <name>instance-0000004f</name>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-542187541</nova:name>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:28</nova:creationTime>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <nova:port uuid="d7f6883b-88ea-45f6-a85b-7fe7dd5cf814">
Feb 28 10:15:29 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="serial">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="uuid">ca4fec3f-7355-47c7-baa5-8d9af25c6eb4</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk">
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config">
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:87:04:95"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <target dev="tapd7f6883b-88"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/console.log" append="off"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:29 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:29 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:29 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:29 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:29 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Preparing to wait for external event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.567 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.568 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.568 243456 DEBUG nova.virt.libvirt.vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:27Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.569 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.570 243456 DEBUG nova.network.os_vif_util [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.570 243456 DEBUG os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.571 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.572 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG nova.compute.manager [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-changed-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG nova.compute.manager [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Refreshing instance network info cache due to event network-changed-590fac49-f2a2-48ec-ad9f-7bd17a63fe37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.575 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.577 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7f6883b-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.577 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7f6883b-88, col_values=(('external_ids', {'iface-id': 'd7f6883b-88ea-45f6-a85b-7fe7dd5cf814', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:04:95', 'vm-uuid': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:29 compute-0 NetworkManager[49805]: <info>  [1772273729.5798] manager: (tapd7f6883b-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.585 243456 INFO os_vif [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.643 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:87:04:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.644 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Using config drive
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.667 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.678 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.686 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'ec2_ids' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:29 compute-0 nova_compute[243452]: 2026-02-28 10:15:29.723 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'keypairs' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 344 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Feb 28 10:15:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1231944948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.320 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Creating config drive at /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.328 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa6zhv94q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.470 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa6zhv94q" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00729|binding|INFO|Releasing lport 4070e10c-8283-47ad-b9bf-0e19e9198bce from this chassis (sb_readonly=0)
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00730|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.523 243456 DEBUG nova.storage.rbd_utils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.526 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:15:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.674 243456 DEBUG oslo_concurrency.processutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.674 243456 INFO nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting local config drive /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4/disk.config because it was imported into RBD.
Feb 28 10:15:30 compute-0 kernel: tapd7f6883b-88: entered promiscuous mode
Feb 28 10:15:30 compute-0 NetworkManager[49805]: <info>  [1772273730.7269] manager: (tapd7f6883b-88): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00731|binding|INFO|Claiming lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for this chassis.
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00732|binding|INFO|d7f6883b-88ea-45f6-a85b-7fe7dd5cf814: Claiming fa:16:3e:87:04:95 10.100.0.9
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.735 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00733|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 ovn-installed in OVS
Feb 28 10:15:30 compute-0 ovn_controller[146846]: 2026-02-28T10:15:30Z|00734|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 up in Southbound
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.739 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.741 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:15:30 compute-0 systemd-udevd[312328]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb604fea-685f-41c2-aec6-0c9f37f1adf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 NetworkManager[49805]: <info>  [1772273730.7613] device (tapd7f6883b-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:30 compute-0 NetworkManager[49805]: <info>  [1772273730.7623] device (tapd7f6883b-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:30 compute-0 systemd-machined[209480]: New machine qemu-94-instance-0000004f.
Feb 28 10:15:30 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-0000004f.
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.790 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1e699e-23b6-46be-92e0-31680415c7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.793 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd418621-a36a-40d7-a786-6d1e60201bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.819 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cddb7f1a-be1a-4ea3-93d9-3f1d35d73169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.832 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ad77b4-2632-4b8b-8792-225b1ba5e8b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312341, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1faebe70-0890-44cc-866c-52d903d53171]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312342, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312342, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.850 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:30 compute-0 nova_compute[243452]: 2026-02-28 10:15:30.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:30.854 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.127 243456 DEBUG nova.compute.manager [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.127 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG oslo_concurrency.lockutils [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.128 243456 DEBUG nova.compute.manager [req-e9cdf859-a629-45f3-a77e-8a0c66148e82 req-42ef8376-8160-4975-9bdd-fe1e9500cb7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Processing event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.139 243456 DEBUG nova.network.neutron [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updating instance_info_cache with network_info: [{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance network_info: |[{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.164 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.165 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Refreshing network info cache for port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.168 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start _get_guest_xml network_info=[{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.171 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273716.1233923, 32fe69ba-ea8d-411e-8917-de872b62b8b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.171 243456 INFO nova.compute.manager [-] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] VM Stopped (Lifecycle Event)
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.175 243456 WARNING nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.181 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.181 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.184 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.184 243456 DEBUG nova.virt.libvirt.host [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.185 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.186 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.187 243456 DEBUG nova.virt.hardware [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.189 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.239 243456 DEBUG nova.compute.manager [None req-cb2f8817-b3a6-4fdd-bbb8-b0dc5d2400f6 - - - - - -] [instance: 32fe69ba-ea8d-411e-8917-de872b62b8b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 ceph-mon[76304]: pgmap v1488: 305 pgs: 305 active+clean; 344 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.491 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.492 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.4905484, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.493 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Started (Lifecycle Event)
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.498 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.504 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.509 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance spawned successfully.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.510 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.520 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.526 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.543 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.544 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.545 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.546 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.547 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.548 243456 DEBUG nova.virt.libvirt.driver [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.554 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.555 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.4907484, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.556 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Paused (Lifecycle Event)
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.590 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.596 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273731.5035603, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.596 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Resumed (Lifecycle Event)
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.619 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.621 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.666 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.689 243456 INFO nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] bringing vm to original state: 'stopped'
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.766 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.767 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.768 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.779 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:15:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257579247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.820 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:31 compute-0 kernel: tapd7f6883b-88 (unregistering): left promiscuous mode
Feb 28 10:15:31 compute-0 NetworkManager[49805]: <info>  [1772273731.8245] device (tapd7f6883b-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:15:31 compute-0 ovn_controller[146846]: 2026-02-28T10:15:31Z|00735|binding|INFO|Releasing lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 from this chassis (sb_readonly=0)
Feb 28 10:15:31 compute-0 ovn_controller[146846]: 2026-02-28T10:15:31Z|00736|binding|INFO|Setting lport d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 down in Southbound
Feb 28 10:15:31 compute-0 ovn_controller[146846]: 2026-02-28T10:15:31Z|00737|binding|INFO|Removing iface tapd7f6883b-88 ovn-installed in OVS
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.840 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:04:95 10.100.0.9'], port_security=['fa:16:3e:87:04:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ca4fec3f-7355-47c7-baa5-8d9af25c6eb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.842 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[89f220c5-5f44-44d9-9941-64f59ddbedac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.866 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:31 compute-0 systemd-machined[209480]: Machine qemu-94-instance-0000004f terminated.
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.875 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.883 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d3785daf-8de8-4fd0-8a11-edadad16141b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d415187d-5181-478c-839f-e33b6aafd917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.918 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f977b8f-53bf-44ad-b2f7-fa842a201d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.935 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b88f8c-05ef-4358-b1c7-c12cd5ef974d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312436, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.946 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[35620329-d81d-4a28-ae3c-080604e71af0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312437, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312437, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.948 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:31.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:31 compute-0 nova_compute[243452]: 2026-02-28 10:15:31.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.023 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.024 243456 DEBUG nova.compute.manager [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.079 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.120 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.120 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.121 243456 DEBUG nova.objects.instance [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.188 243456 DEBUG oslo_concurrency.lockutils [None req-602dcae1-8b25-47b1-8e5d-e4288427fe79 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 353 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 249 op/s
Feb 28 10:15:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4257579247' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336649039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.459 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.461 243456 DEBUG nova.virt.libvirt.vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-35499043',display_name='tempest-ServersNegativeTestJSON-server-35499043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-35499043',id=82,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-vudkfx5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJS
ON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:26Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=c2763dc4-f643-48bd-964a-d4ab75938d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.462 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.463 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.465 243456 DEBUG nova.objects.instance [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2763dc4-f643-48bd-964a-d4ab75938d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.495 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <uuid>c2763dc4-f643-48bd-964a-d4ab75938d0a</uuid>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <name>instance-00000052</name>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersNegativeTestJSON-server-35499043</nova:name>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:31</nova:creationTime>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:user uuid="7ef51521ffc947cbbce8323ec2b71753">tempest-ServersNegativeTestJSON-621636341-project-member</nova:user>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:project uuid="c0c4bc44c37f4a4f83c83b6105be3190">tempest-ServersNegativeTestJSON-621636341</nova:project>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <nova:port uuid="590fac49-f2a2-48ec-ad9f-7bd17a63fe37">
Feb 28 10:15:32 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="serial">c2763dc4-f643-48bd-964a-d4ab75938d0a</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="uuid">c2763dc4-f643-48bd-964a-d4ab75938d0a</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c2763dc4-f643-48bd-964a-d4ab75938d0a_disk">
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config">
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:34:9c:8f"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <target dev="tap590fac49-f2"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/console.log" append="off"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:32 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:32 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:32 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:32 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:32 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.497 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Preparing to wait for external event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.498 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.499 243456 DEBUG nova.virt.libvirt.vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-35499043',display_name='tempest-ServersNegativeTestJSON-server-35499043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-35499043',id=82,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-vudkfx5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:26Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=c2763dc4-f643-48bd-964a-d4ab75938d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.499 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.500 243456 DEBUG nova.network.os_vif_util [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.500 243456 DEBUG os_vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.501 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.507 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap590fac49-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.509 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap590fac49-f2, col_values=(('external_ids', {'iface-id': '590fac49-f2a2-48ec-ad9f-7bd17a63fe37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:9c:8f', 'vm-uuid': 'c2763dc4-f643-48bd-964a-d4ab75938d0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.511 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:32 compute-0 NetworkManager[49805]: <info>  [1772273732.5126] manager: (tap590fac49-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.524 243456 INFO os_vif [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2')
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.595 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.596 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.596 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No VIF found with MAC fa:16:3e:34:9c:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.597 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Using config drive
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.632 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.980 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updated VIF entry in instance network info cache for port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.981 243456 DEBUG nova.network.neutron [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updating instance_info_cache with network_info: [{"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:32 compute-0 nova_compute[243452]: 2026-02-28 10:15:32.996 243456 DEBUG oslo_concurrency.lockutils [req-9c145437-3318-4404-a72e-6e798a08c4d7 req-2566d28e-dc5b-424f-b354-92bdc6ce2180 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c2763dc4-f643-48bd-964a-d4ab75938d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.284 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 DEBUG oslo_concurrency.lockutils [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 DEBUG nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.285 243456 WARNING nova.compute.manager [req-4f4e6e45-8211-449a-924a-a4c77e907ea0 req-2340049f-f059-4a7f-a438-63bae82686d3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state None.
Feb 28 10:15:33 compute-0 ceph-mon[76304]: pgmap v1489: 305 pgs: 305 active+clean; 353 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 249 op/s
Feb 28 10:15:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1336649039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.415 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Creating config drive at /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.422 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprpft0hl7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.560 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprpft0hl7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.597 243456 DEBUG nova.storage.rbd_utils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.602 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.759 243456 DEBUG oslo_concurrency.processutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config c2763dc4-f643-48bd-964a-d4ab75938d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.760 243456 INFO nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Deleting local config drive /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a/disk.config because it was imported into RBD.
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:33 compute-0 kernel: tap590fac49-f2: entered promiscuous mode
Feb 28 10:15:33 compute-0 NetworkManager[49805]: <info>  [1772273733.8084] manager: (tap590fac49-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00738|binding|INFO|Claiming lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 for this chassis.
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00739|binding|INFO|590fac49-f2a2-48ec-ad9f-7bd17a63fe37: Claiming fa:16:3e:34:9c:8f 10.100.0.11
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.823 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:9c:8f 10.100.0.11'], port_security=['fa:16:3e:34:9c:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2763dc4-f643-48bd-964a-d4ab75938d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=590fac49-f2a2-48ec-ad9f-7bd17a63fe37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.825 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00740|binding|INFO|Setting lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 ovn-installed in OVS
Feb 28 10:15:33 compute-0 ovn_controller[146846]: 2026-02-28T10:15:33Z|00741|binding|INFO|Setting lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 up in Southbound
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3821c6-622d-4377-bb19-937b3f39268f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 systemd-machined[209480]: New machine qemu-95-instance-00000052.
Feb 28 10:15:33 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-00000052.
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.863 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c86d8ac3-6c8b-4ff8-8fa9-9a07abf7eff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[db7bb5b7-f42a-4bf2-8a70-72b183d1f4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 systemd-udevd[312550]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:33 compute-0 NetworkManager[49805]: <info>  [1772273733.8829] device (tap590fac49-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:33 compute-0 NetworkManager[49805]: <info>  [1772273733.8834] device (tap590fac49-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.897 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53f177f6-f4e6-405d-b540-33d526a14e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.914 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[488f4d62-bfed-435f-b494-6adb85661e1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312557, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c03b5f-6ccf-4299-8904-f66d197faab7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522620, 'tstamp': 522620}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312559, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522623, 'tstamp': 522623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312559, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.932 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:33 compute-0 nova_compute[243452]: 2026-02-28 10:15:33.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.939 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.939 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.940 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:33.941 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 376 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.375 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273734.374168, c2763dc4-f643-48bd-964a-d4ab75938d0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.377 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Started (Lifecycle Event)
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.398 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.405 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273734.3743386, c2763dc4-f643-48bd-964a-d4ab75938d0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.405 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Paused (Lifecycle Event)
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.422 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.426 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:34 compute-0 nova_compute[243452]: 2026-02-28 10:15:34.445 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:35 compute-0 sshd-session[312561]: Invalid user sol from 45.148.10.240 port 53200
Feb 28 10:15:35 compute-0 sshd-session[312561]: Connection closed by invalid user sol 45.148.10.240 port 53200 [preauth]
Feb 28 10:15:35 compute-0 ceph-mon[76304]: pgmap v1490: 305 pgs: 305 active+clean; 376 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.455 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.456 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.457 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.457 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.458 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.460 243456 INFO nova.compute.manager [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Terminating instance
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.462 243456 DEBUG nova.compute.manager [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.472 243456 INFO nova.virt.libvirt.driver [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Instance destroyed successfully.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.473 243456 DEBUG nova.objects.instance [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.492 243456 DEBUG nova.virt.libvirt.vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:14:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-542187541',display_name='tempest-tempest.common.compute-instance-542187541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-542187541',id=79,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-tzwfcb59',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:32Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=ca4fec3f-7355-47c7-baa5-8d9af25c6eb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.493 243456 DEBUG nova.network.os_vif_util [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "address": "fa:16:3e:87:04:95", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7f6883b-88", "ovs_interfaceid": "d7f6883b-88ea-45f6-a85b-7fe7dd5cf814", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.494 243456 DEBUG nova.network.os_vif_util [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.495 243456 DEBUG os_vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.498 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7f6883b-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.506 243456 INFO os_vif [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:04:95,bridge_name='br-int',has_traffic_filtering=True,id=d7f6883b-88ea-45f6-a85b-7fe7dd5cf814,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7f6883b-88')
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.540 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.541 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.542 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.543 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.544 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.544 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-unplugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.545 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.545 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.546 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] No waiting events found dispatching network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 WARNING nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received unexpected event network-vif-plugged-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 for instance with vm_state stopped and task_state deleting.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.547 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.548 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.548 243456 DEBUG oslo_concurrency.lockutils [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.549 243456 DEBUG nova.compute.manager [req-427267bf-0a78-480d-8e83-8380af09e043 req-fa407628-d5b3-4236-9e1a-9190505d0dae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Processing event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.549 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273735.554152, c2763dc4-f643-48bd-964a-d4ab75938d0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.555 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Resumed (Lifecycle Event)
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.560 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.564 243456 INFO nova.virt.libvirt.driver [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance spawned successfully.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.564 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.585 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.585 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.586 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.586 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.587 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.588 243456 DEBUG nova.virt.libvirt.driver [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.593 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.599 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.621 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.656 243456 INFO nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Took 8.97 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.657 243456 DEBUG nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.723 243456 INFO nova.compute.manager [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Took 10.04 seconds to build instance.
Feb 28 10:15:35 compute-0 rsyslogd[1017]: imjournal from <np0005634017:nova_compute>: begin to drop messages due to rate-limiting
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.738 243456 DEBUG oslo_concurrency.lockutils [None req-8f75856d-b3f9-4597-a0bc-80c5fae8f6de 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.838 243456 INFO nova.virt.libvirt.driver [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deleting instance files /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.839 243456 INFO nova.virt.libvirt.driver [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deletion of /var/lib/nova/instances/ca4fec3f-7355-47c7-baa5-8d9af25c6eb4_del complete
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.881 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273720.880746, a28c23bd-34cb-4189-9cca-778178eb41b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.882 243456 INFO nova.compute.manager [-] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] VM Stopped (Lifecycle Event)
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.890 243456 INFO nova.compute.manager [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 0.43 seconds to destroy the instance on the hypervisor.
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.890 243456 DEBUG oslo.service.loopingcall [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.891 243456 DEBUG nova.compute.manager [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.892 243456 DEBUG nova.network.neutron [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.901 243456 DEBUG nova.compute.manager [None req-802a6a4b-25d4-4501-b57f-34372e1fd23d - - - - - -] [instance: a28c23bd-34cb-4189-9cca-778178eb41b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:35 compute-0 nova_compute[243452]: 2026-02-28 10:15:35.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 404 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 MiB/s wr, 220 op/s
Feb 28 10:15:36 compute-0 nova_compute[243452]: 2026-02-28 10:15:36.543 243456 DEBUG nova.network.neutron [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:36 compute-0 nova_compute[243452]: 2026-02-28 10:15:36.561 243456 INFO nova.compute.manager [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Took 0.67 seconds to deallocate network for instance.
Feb 28 10:15:36 compute-0 nova_compute[243452]: 2026-02-28 10:15:36.603 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:36 compute-0 nova_compute[243452]: 2026-02-28 10:15:36.605 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:36 compute-0 nova_compute[243452]: 2026-02-28 10:15:36.733 243456 DEBUG oslo_concurrency.processutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824416123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.284 243456 DEBUG oslo_concurrency.processutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.291 243456 DEBUG nova.compute.provider_tree [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.312 243456 DEBUG nova.scheduler.client.report [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:37 compute-0 ceph-mon[76304]: pgmap v1491: 305 pgs: 305 active+clean; 404 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 MiB/s wr, 220 op/s
Feb 28 10:15:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3824416123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.342 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.365 243456 INFO nova.scheduler.client.report [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Deleted allocations for instance ca4fec3f-7355-47c7-baa5-8d9af25c6eb4
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.393 243456 DEBUG nova.compute.manager [req-af1e3bab-7569-45f3-a422-7cb94bcfe0c5 req-6e68437d-b72e-4827-8cfc-93d572145591 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Received event network-vif-deleted-d7f6883b-88ea-45f6-a85b-7fe7dd5cf814 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.444 243456 DEBUG oslo_concurrency.lockutils [None req-61167e30-98c4-4c8a-a5d4-50d4dd3dccf2 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "ca4fec3f-7355-47c7-baa5-8d9af25c6eb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.604 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.605 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.605 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.606 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.606 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.607 243456 INFO nova.compute.manager [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Terminating instance
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.608 243456 DEBUG nova.compute.manager [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:15:37 compute-0 kernel: tap590fac49-f2 (unregistering): left promiscuous mode
Feb 28 10:15:37 compute-0 NetworkManager[49805]: <info>  [1772273737.6476] device (tap590fac49-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 ovn_controller[146846]: 2026-02-28T10:15:37Z|00742|binding|INFO|Releasing lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 from this chassis (sb_readonly=0)
Feb 28 10:15:37 compute-0 ovn_controller[146846]: 2026-02-28T10:15:37Z|00743|binding|INFO|Setting lport 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 down in Southbound
Feb 28 10:15:37 compute-0 ovn_controller[146846]: 2026-02-28T10:15:37Z|00744|binding|INFO|Removing iface tap590fac49-f2 ovn-installed in OVS
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.656 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:9c:8f 10.100.0.11'], port_security=['fa:16:3e:34:9c:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2763dc4-f643-48bd-964a-d4ab75938d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=590fac49-f2a2-48ec-ad9f-7bd17a63fe37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.657 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 590fac49-f2a2-48ec-ad9f-7bd17a63fe37 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 unbound from our chassis
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.658 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e68361-260e-4d44-99dd-3cdd278e70d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000052.scope: Deactivated successfully.
Feb 28 10:15:37 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000052.scope: Consumed 2.683s CPU time.
Feb 28 10:15:37 compute-0 systemd-machined[209480]: Machine qemu-95-instance-00000052 terminated.
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.698 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[79cdbc4a-efb5-4520-841e-51645c2f5800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.701 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[85937f25-2a3f-480e-b73a-1bf6d1bc2196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.726 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6279b7e0-c6c9-49d0-9f93-57921a65a06f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 podman[312653]: 2026-02-28 10:15:37.733180196 +0000 UTC m=+0.055080390 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.741 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4189fe2-07c2-4b77-8ae4-96ed49815030]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522608, 'reachable_time': 35465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312699, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.754 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[12423b15-3e87-43da-ba97-9257910e0f62]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522620, 'tstamp': 522620}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312703, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce4b855a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522623, 'tstamp': 522623}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312703, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.756 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 podman[312650]: 2026-02-28 10:15:37.76106043 +0000 UTC m=+0.082334436 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.761 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:37.763 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.846 243456 INFO nova.virt.libvirt.driver [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Instance destroyed successfully.
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.847 243456 DEBUG nova.objects.instance [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'resources' on Instance uuid c2763dc4-f643-48bd-964a-d4ab75938d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.861 243456 DEBUG nova.virt.libvirt.vif [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-35499043',display_name='tempest-ServersNegativeTestJSON-server-35499043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-35499043',id=82,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-vudkfx5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:35Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=c2763dc4-f643-48bd-964a-d4ab75938d0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.862 243456 DEBUG nova.network.os_vif_util [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "address": "fa:16:3e:34:9c:8f", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap590fac49-f2", "ovs_interfaceid": "590fac49-f2a2-48ec-ad9f-7bd17a63fe37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.862 243456 DEBUG nova.network.os_vif_util [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.863 243456 DEBUG os_vif [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.864 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap590fac49-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:37 compute-0 nova_compute[243452]: 2026-02-28 10:15:37.869 243456 INFO os_vif [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:9c:8f,bridge_name='br-int',has_traffic_filtering=True,id=590fac49-f2a2-48ec-ad9f-7bd17a63fe37,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap590fac49-f2')
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.162 243456 INFO nova.virt.libvirt.driver [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Deleting instance files /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a_del
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.163 243456 INFO nova.virt.libvirt.driver [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Deletion of /var/lib/nova/instances/c2763dc4-f643-48bd-964a-d4ab75938d0a_del complete
Feb 28 10:15:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 209 op/s
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.244 243456 INFO nova.compute.manager [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.245 243456 DEBUG oslo.service.loopingcall [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.245 243456 DEBUG nova.compute.manager [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.245 243456 DEBUG nova.network.neutron [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:15:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.672 243456 DEBUG nova.compute.manager [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.673 243456 DEBUG oslo_concurrency.lockutils [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.673 243456 DEBUG oslo_concurrency.lockutils [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.674 243456 DEBUG oslo_concurrency.lockutils [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.674 243456 DEBUG nova.compute.manager [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] No waiting events found dispatching network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.674 243456 WARNING nova.compute.manager [req-b88fa1f5-1194-415f-942c-8c46e6999422 req-41c767a7-5fbf-4960-9d48-d2c62bc2d880 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received unexpected event network-vif-plugged-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 for instance with vm_state active and task_state deleting.
Feb 28 10:15:38 compute-0 nova_compute[243452]: 2026-02-28 10:15:38.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.220 243456 DEBUG nova.network.neutron [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.252 243456 INFO nova.compute.manager [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Took 1.01 seconds to deallocate network for instance.
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.307 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.308 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:39 compute-0 ceph-mon[76304]: pgmap v1492: 305 pgs: 305 active+clean; 386 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.7 MiB/s wr, 209 op/s
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.386 243456 DEBUG oslo_concurrency.processutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.486 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.487 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.507 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.581 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.782 243456 DEBUG nova.compute.manager [req-ada47503-d633-4b84-965f-b57e9ba38276 req-4fc4e572-ca87-46bf-9910-68fd2e02b511 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Received event network-vif-deleted-590fac49-f2a2-48ec-ad9f-7bd17a63fe37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1418598986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.924 243456 DEBUG oslo_concurrency.processutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.932 243456 DEBUG nova.compute.provider_tree [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.949 243456 DEBUG nova.scheduler.client.report [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.970 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.973 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.984 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:39 compute-0 nova_compute[243452]: 2026-02-28 10:15:39.985 243456 INFO nova.compute.claims [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.008 243456 INFO nova.scheduler.client.report [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Deleted allocations for instance c2763dc4-f643-48bd-964a-d4ab75938d0a
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.127 243456 DEBUG oslo_concurrency.lockutils [None req-affc97a8-f2df-4b48-8e91-adb733e1d4d4 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "c2763dc4-f643-48bd-964a-d4ab75938d0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 355 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.249 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1418598986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229914972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.820 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.829 243456 DEBUG nova.compute.provider_tree [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.848 243456 DEBUG nova.scheduler.client.report [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.861 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.861 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001895681647511675 of space, bias 1.0, pg target 0.5687044942535024 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493313515940052 of space, bias 1.0, pg target 0.7479940547820156 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.802437018888603e-07 of space, bias 4.0, pg target 0.0009362924422666323 quantized to 16 (current 16)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:15:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.884 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.886 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.892 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.955 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.956 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.985 243456 INFO nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.992 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:40 compute-0 nova_compute[243452]: 2026-02-28 10:15:40.993 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.010 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.011 243456 INFO nova.compute.claims [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.015 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.123 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.125 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.126 243456 INFO nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating image(s)
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.153 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.182 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.212 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.219 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.260 243456 DEBUG nova.policy [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.290 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.291 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.291 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.292 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.321 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.327 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:41 compute-0 ceph-mon[76304]: pgmap v1493: 305 pgs: 305 active+clean; 355 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Feb 28 10:15:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2229914972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.405 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.645 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.750 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.849 243456 DEBUG nova.objects.instance [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.867 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.869 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Ensure instance console log exists: /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.870 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.870 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:41 compute-0 nova_compute[243452]: 2026-02-28 10:15:41.871 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456595474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.003 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.011 243456 DEBUG nova.compute.provider_tree [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.031 243456 DEBUG nova.scheduler.client.report [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.060 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.062 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.112 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.113 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.171 243456 INFO nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.192 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 331 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 234 op/s
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.310 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.313 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.314 243456 INFO nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Creating image(s)
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.351 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1456595474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.396 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.434 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.440 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.475 243456 DEBUG nova.policy [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cc52c9235e704591a857b1b746c257ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '692561f0659d4af58ab14beffb24eb70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.478 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Successfully created port: cace90b2-5d6b-49ae-a68a-251838fec4ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.506 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.507 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.508 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.508 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.536 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.541 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30194398-5601-43ac-aae7-290d9d311d6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.646 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.647 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.672 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.744 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.745 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.753 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.753 243456 INFO nova.compute.claims [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.859 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 30194398-5601-43ac-aae7-290d9d311d6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:42 compute-0 nova_compute[243452]: 2026-02-28 10:15:42.939 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] resizing rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.035 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.123 243456 DEBUG nova.objects.instance [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'migration_context' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.138 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.139 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Ensure instance console log exists: /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.139 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.140 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.140 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.342 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Successfully created port: b7b37fba-503f-4a0c-98ec-29224477d25f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:43 compute-0 ceph-mon[76304]: pgmap v1494: 305 pgs: 305 active+clean; 331 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 234 op/s
Feb 28 10:15:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.571343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743571429, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1219, "num_deletes": 250, "total_data_size": 1658980, "memory_usage": 1692168, "flush_reason": "Manual Compaction"}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb 28 10:15:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344151162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743581520, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1002255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30669, "largest_seqno": 31887, "table_properties": {"data_size": 997811, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12405, "raw_average_key_size": 20, "raw_value_size": 987954, "raw_average_value_size": 1666, "num_data_blocks": 83, "num_entries": 593, "num_filter_entries": 593, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273636, "oldest_key_time": 1772273636, "file_creation_time": 1772273743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 10265 microseconds, and 6706 cpu microseconds.
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.581603) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1002255 bytes OK
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.581643) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.583765) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.583797) EVENT_LOG_v1 {"time_micros": 1772273743583785, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.583829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1653389, prev total WAL file size 1653389, number of live WAL files 2.
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.584821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(978KB)], [65(9965KB)]
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743584897, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11207225, "oldest_snapshot_seqno": -1}
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.600 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.608 243456 DEBUG nova.compute.provider_tree [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.622 243456 DEBUG nova.scheduler.client.report [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.645 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.646 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.654 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Successfully updated port: cace90b2-5d6b-49ae-a68a-251838fec4ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5807 keys, 8502451 bytes, temperature: kUnknown
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743659423, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8502451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8463297, "index_size": 23556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 145399, "raw_average_key_size": 25, "raw_value_size": 8358857, "raw_average_value_size": 1439, "num_data_blocks": 961, "num_entries": 5807, "num_filter_entries": 5807, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.659786) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8502451 bytes
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.661422) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.0 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(19.7) write-amplify(8.5) OK, records in: 6271, records dropped: 464 output_compression: NoCompression
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.661438) EVENT_LOG_v1 {"time_micros": 1772273743661430, "job": 36, "event": "compaction_finished", "compaction_time_micros": 74735, "compaction_time_cpu_micros": 32047, "output_level": 6, "num_output_files": 1, "total_output_size": 8502451, "num_input_records": 6271, "num_output_records": 5807, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743661883, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273743662879, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.584704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.663040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.663052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.663055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.663058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:15:43.663062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.687 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.688 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.688 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.714 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.715 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.734 243456 INFO nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.750 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.863 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.866 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.867 243456 INFO nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Creating image(s)
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.892 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.920 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.948 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.952 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:43 compute-0 nova_compute[243452]: 2026-02-28 10:15:43.990 243456 DEBUG nova.policy [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cc52c9235e704591a857b1b746c257ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '692561f0659d4af58ab14beffb24eb70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.021 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.022 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.022 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.023 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.049 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.053 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.141 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 328 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.367 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.444 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] resizing rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.532 243456 DEBUG nova.compute.manager [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.533 243456 DEBUG nova.compute.manager [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing instance network info cache due to event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.533 243456 DEBUG oslo_concurrency.lockutils [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.538 243456 DEBUG nova.objects.instance [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'migration_context' on Instance uuid cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.556 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.557 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Ensure instance console log exists: /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.557 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.557 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.558 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/344151162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:44 compute-0 ceph-mon[76304]: pgmap v1495: 305 pgs: 305 active+clean; 328 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.593 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Successfully updated port: b7b37fba-503f-4a0c-98ec-29224477d25f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.610 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.611 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquired lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.611 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.713 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.714 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.717 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Successfully created port: c2860307-4800-4c5e-adb7-ab75c130f158 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.732 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.803 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.810 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.811 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.820 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:44 compute-0 nova_compute[243452]: 2026-02-28 10:15:44.821 243456 INFO nova.compute.claims [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.030 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.157 243456 DEBUG nova.network.neutron [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updating instance_info_cache with network_info: [{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.188 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.188 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance network_info: |[{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.189 243456 DEBUG oslo_concurrency.lockutils [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.190 243456 DEBUG nova.network.neutron [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.195 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start _get_guest_xml network_info=[{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.203 243456 WARNING nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.209 243456 DEBUG nova.virt.libvirt.host [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.210 243456 DEBUG nova.virt.libvirt.host [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.222 243456 DEBUG nova.virt.libvirt.host [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.223 243456 DEBUG nova.virt.libvirt.host [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.223 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.224 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.224 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.225 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.225 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.226 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.226 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.227 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.227 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.227 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.228 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.228 243456 DEBUG nova.virt.hardware [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.234 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.292 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.293 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.315 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.384 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:15:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1742389226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:15:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:15:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1742389226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.567 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Successfully updated port: c2860307-4800-4c5e-adb7-ab75c130f158 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.584 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.584 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquired lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.584 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1742389226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:15:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1742389226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:15:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490306831' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.652 243456 DEBUG nova.compute.manager [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-changed-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.653 243456 DEBUG nova.compute.manager [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Refreshing instance network info cache due to event network-changed-c2860307-4800-4c5e-adb7-ab75c130f158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.653 243456 DEBUG oslo_concurrency.lockutils [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.671 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.681 243456 DEBUG nova.compute.provider_tree [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.702 243456 DEBUG nova.scheduler.client.report [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.722 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.732 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.733 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.737 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.746 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.747 243456 INFO nova.compute.claims [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:15:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275741341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.821 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.822 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.835 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.870 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.876 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.930 243456 DEBUG nova.network.neutron [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Updating instance_info_cache with network_info: [{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.937 243456 INFO nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.952 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Releasing lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.952 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance network_info: |[{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.955 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start _get_guest_xml network_info=[{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.959 243456 WARNING nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.962 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.965 243456 DEBUG nova.virt.libvirt.host [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.966 243456 DEBUG nova.virt.libvirt.host [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.985 243456 DEBUG nova.virt.libvirt.host [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.985 243456 DEBUG nova.virt.libvirt.host [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.986 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.986 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.986 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.986 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.987 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.988 243456 DEBUG nova.virt.hardware [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:45 compute-0 nova_compute[243452]: 2026-02-28 10:15:45.991 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.086 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.088 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.089 243456 INFO nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Creating image(s)
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.133 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.181 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 424 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.6 MiB/s wr, 234 op/s
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.214 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.218 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.257 243456 DEBUG nova.policy [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cc52c9235e704591a857b1b746c257ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '692561f0659d4af58ab14beffb24eb70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.302 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.303 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.304 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.304 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.328 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.331 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9119fb04-24fa-460c-a772-4ca398874b4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.386 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269318903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.474 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.477 243456 DEBUG nova.virt.libvirt.vif [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:41Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.478 243456 DEBUG nova.network.os_vif_util [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.479 243456 DEBUG nova.network.os_vif_util [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.481 243456 DEBUG nova.objects.instance [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:46 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.497 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <uuid>e0b403b3-2f95-4f8c-a00c-53dab3c643b9</uuid>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <name>instance-00000053</name>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1502065968</nova:name>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:45</nova:creationTime>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <nova:port uuid="cace90b2-5d6b-49ae-a68a-251838fec4ee">
Feb 28 10:15:46 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="serial">e0b403b3-2f95-4f8c-a00c-53dab3c643b9</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="uuid">e0b403b3-2f95-4f8c-a00c-53dab3c643b9</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk">
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config">
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:46:a7:4e"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <target dev="tapcace90b2-5d"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/console.log" append="off"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:46 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:46 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:46 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:46 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:46 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.503 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Preparing to wait for external event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.504 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.504 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.505 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.506 243456 DEBUG nova.virt.libvirt.vif [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:41Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.506 243456 DEBUG nova.network.os_vif_util [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.507 243456 DEBUG nova.network.os_vif_util [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.508 243456 DEBUG os_vif [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.509 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.510 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.512 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.515 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcace90b2-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.515 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcace90b2-5d, col_values=(('external_ids', {'iface-id': 'cace90b2-5d6b-49ae-a68a-251838fec4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:a7:4e', 'vm-uuid': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:46 compute-0 NetworkManager[49805]: <info>  [1772273746.5189] manager: (tapcace90b2-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.522 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.526 243456 INFO os_vif [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d')
Feb 28 10:15:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1490306831' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1275741341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 ceph-mon[76304]: pgmap v1496: 305 pgs: 305 active+clean; 424 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.6 MiB/s wr, 234 op/s
Feb 28 10:15:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1269318903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194025754' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.611 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.612 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.613 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:46:a7:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.614 243456 INFO nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Using config drive
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.643 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.666 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9119fb04-24fa-460c-a772-4ca398874b4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.667 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.717 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.722 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.806 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] resizing rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.893 243456 DEBUG nova.objects.instance [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'migration_context' on Instance uuid 9119fb04-24fa-460c-a772-4ca398874b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.909 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.910 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Ensure instance console log exists: /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.910 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.911 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.911 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:15:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260150674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:46 compute-0 nova_compute[243452]: 2026-02-28 10:15:46.994 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.001 243456 DEBUG nova.compute.provider_tree [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.017 243456 DEBUG nova.scheduler.client.report [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.024 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273732.0188048, ca4fec3f-7355-47c7-baa5-8d9af25c6eb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.025 243456 INFO nova.compute.manager [-] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] VM Stopped (Lifecycle Event)
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.049 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.051 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.056 243456 DEBUG nova.compute.manager [None req-64f59d80-1463-40be-b540-9b93c1c4d747 - - - - - -] [instance: ca4fec3f-7355-47c7-baa5-8d9af25c6eb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.101 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.102 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.120 243456 INFO nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.138 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.241 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.243 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.243 243456 INFO nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Creating image(s)
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.263 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413714056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.287 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.314 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.319 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.360 243456 DEBUG nova.compute.manager [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-changed-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.361 243456 DEBUG nova.compute.manager [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Refreshing instance network info cache due to event network-changed-b7b37fba-503f-4a0c-98ec-29224477d25f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.362 243456 DEBUG oslo_concurrency.lockutils [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.362 243456 DEBUG oslo_concurrency.lockutils [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.362 243456 DEBUG nova.network.neutron [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Refreshing network info cache for port b7b37fba-503f-4a0c-98ec-29224477d25f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.364 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.366 243456 DEBUG nova.virt.libvirt.vif [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-List
ServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:42Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.367 243456 DEBUG nova.network.os_vif_util [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.367 243456 DEBUG nova.network.os_vif_util [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.368 243456 DEBUG nova.objects.instance [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.392 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <uuid>30194398-5601-43ac-aae7-290d9d311d6c</uuid>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <name>instance-00000054</name>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-885707745</nova:name>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:45</nova:creationTime>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:user uuid="cc52c9235e704591a857b1b746c257ea">tempest-ListServerFiltersTestJSON-866927856-project-member</nova:user>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:project uuid="692561f0659d4af58ab14beffb24eb70">tempest-ListServerFiltersTestJSON-866927856</nova:project>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <nova:port uuid="b7b37fba-503f-4a0c-98ec-29224477d25f">
Feb 28 10:15:47 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="serial">30194398-5601-43ac-aae7-290d9d311d6c</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="uuid">30194398-5601-43ac-aae7-290d9d311d6c</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30194398-5601-43ac-aae7-290d9d311d6c_disk">
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30194398-5601-43ac-aae7-290d9d311d6c_disk.config">
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:62:3b:12"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <target dev="tapb7b37fba-50"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/console.log" append="off"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:47 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:47 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:47 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:47 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:47 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.394 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Preparing to wait for external event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.394 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.395 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.395 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.396 243456 DEBUG nova.virt.libvirt.vif [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:42Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.396 243456 DEBUG nova.network.os_vif_util [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.398 243456 DEBUG nova.network.os_vif_util [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.398 243456 DEBUG os_vif [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.401 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.402 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.404 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.405 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.405 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.405 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.427 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.432 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.467 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b37fba-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.467 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7b37fba-50, col_values=(('external_ids', {'iface-id': 'b7b37fba-503f-4a0c-98ec-29224477d25f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:3b:12', 'vm-uuid': '30194398-5601-43ac-aae7-290d9d311d6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:47 compute-0 NetworkManager[49805]: <info>  [1772273747.4706] manager: (tapb7b37fba-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.476 243456 INFO os_vif [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50')
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.523 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.523 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.524 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No VIF found with MAC fa:16:3e:62:3b:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.524 243456 INFO nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Using config drive
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.551 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3194025754' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4260150674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:15:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3413714056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.781 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.862 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] resizing rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.956 243456 DEBUG nova.objects.instance [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'migration_context' on Instance uuid 7a3c0169-3430-4dbe-b080-9ae7b56a101b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.978 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.979 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Ensure instance console log exists: /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.979 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.980 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:47 compute-0 nova_compute[243452]: 2026-02-28 10:15:47.980 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 486 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 236 op/s
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.307 243456 INFO nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating config drive at /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.311 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp07fsdyaz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.340 243456 DEBUG nova.policy [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14b2d28379164786ad68563acb83a50a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd70835696bf4e12a062516e9de5527d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.445 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp07fsdyaz" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.473 243456 DEBUG nova.storage.rbd_utils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.478 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.503 243456 INFO nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Creating config drive at /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.507 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9x7ttd2m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:48 compute-0 ceph-mon[76304]: pgmap v1497: 305 pgs: 305 active+clean; 486 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 236 op/s
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.617 243456 DEBUG oslo_concurrency.processutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.618 243456 INFO nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deleting local config drive /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config because it was imported into RBD.
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.643 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9x7ttd2m" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.6725] manager: (tapcace90b2-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Feb 28 10:15:48 compute-0 kernel: tapcace90b2-5d: entered promiscuous mode
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00745|binding|INFO|Claiming lport cace90b2-5d6b-49ae-a68a-251838fec4ee for this chassis.
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00746|binding|INFO|cace90b2-5d6b-49ae-a68a-251838fec4ee: Claiming fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.683 243456 DEBUG nova.storage.rbd_utils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 30194398-5601-43ac-aae7-290d9d311d6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.689 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config 30194398-5601-43ac-aae7-290d9d311d6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.690 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a7:4e 10.100.0.4'], port_security=['fa:16:3e:46:a7:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': '59190708-228d-45d4-972b-cf1e677cee18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3678363f-59af-4198-9c0f-ea20e21245ac, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cace90b2-5d6b-49ae-a68a-251838fec4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.692 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cace90b2-5d6b-49ae-a68a-251838fec4ee in datapath 61b03a6e-b883-4f32-b23d-d6fea7058b29 bound to our chassis
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.694 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00747|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee ovn-installed in OVS
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00748|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee up in Southbound
Feb 28 10:15:48 compute-0 systemd-udevd[313942]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04f62220-7411-42e5-a0f3-dd6b624268c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.710 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61b03a6e-b1 in ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.713 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61b03a6e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80f39a-280d-422b-81cf-b31f096d4261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 systemd-machined[209480]: New machine qemu-96-instance-00000053.
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6413044c-6e89-4b1e-afd4-40c32ea30b31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.7179] device (tapcace90b2-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.7184] device (tapcace90b2-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:48 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000053.
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.732 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a71666-8ab4-4bdc-b037-ae1dbe48fb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.743 243456 DEBUG nova.network.neutron [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updated VIF entry in instance network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.743 243456 DEBUG nova.network.neutron [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updating instance_info_cache with network_info: [{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.746 243456 DEBUG nova.network.neutron [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Updating instance_info_cache with network_info: [{"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0094e350-8cd5-4625-b976-345ea1759dd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.771 243456 DEBUG oslo_concurrency.lockutils [req-b313a572-efbd-45ce-8fc2-ea557e62371e req-a11637d6-076c-4914-bec3-9f3651b197c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.774 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Releasing lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.775 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Instance network_info: |[{"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.777 243456 DEBUG oslo_concurrency.lockutils [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.777 243456 DEBUG nova.network.neutron [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Refreshing network info cache for port c2860307-4800-4c5e-adb7-ab75c130f158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.778 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ecd1268-1633-434a-a5d5-8e54ad403e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.784 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0b311a-7bb3-477d-b55d-3a8d9a1529b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.7851] manager: (tap61b03a6e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.785 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Start _get_guest_xml network_info=[{"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '88971623-4808-4102-a4a7-34a287d8b7fe'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.804 243456 WARNING nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.823 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[be5da28a-1a09-4146-8fc9-1a0f9c454e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.827 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b706ac47-5164-4d9e-8bc8-0b8950eb4ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.832 243456 DEBUG nova.virt.libvirt.host [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.833 243456 DEBUG nova.virt.libvirt.host [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.839 243456 DEBUG nova.virt.libvirt.host [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.840 243456 DEBUG nova.virt.libvirt.host [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.841 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.841 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.842 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.842 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.843 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.844 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.844 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.845 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.845 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.846 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.846 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.846 243456 DEBUG nova.virt.hardware [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.852 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.8556] device (tap61b03a6e-b0): carrier: link connected
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.859 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f094253f-8363-4832-84e3-a7aba85cedae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.876 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af20b73c-7d0a-4442-a036-17ddf911e369]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61b03a6e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:ea:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525614, 'reachable_time': 34311, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313998, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.893 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f8ac8a-a213-4c08-b7b8-e17e4368dac4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:ea04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525614, 'tstamp': 525614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314000, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.911 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdebc1f-3bd4-4747-8988-05e60128da6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61b03a6e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:ea:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525614, 'reachable_time': 34311, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314001, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.923 243456 DEBUG oslo_concurrency.processutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config 30194398-5601-43ac-aae7-290d9d311d6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.924 243456 INFO nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Deleting local config drive /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/disk.config because it was imported into RBD.
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa62684f-968e-4628-a7b0-2c0f4a96f300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.9696] manager: (tapb7b37fba-50): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Feb 28 10:15:48 compute-0 kernel: tapb7b37fba-50: entered promiscuous mode
Feb 28 10:15:48 compute-0 systemd-udevd[313980]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.971 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00749|binding|INFO|Claiming lport b7b37fba-503f-4a0c-98ec-29224477d25f for this chassis.
Feb 28 10:15:48 compute-0 ovn_controller[146846]: 2026-02-28T10:15:48Z|00750|binding|INFO|b7b37fba-503f-4a0c-98ec-29224477d25f: Claiming fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.9806] device (tapb7b37fba-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:48 compute-0 NetworkManager[49805]: <info>  [1772273748.9810] device (tapb7b37fba-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:48.986 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:48 compute-0 nova_compute[243452]: 2026-02-28 10:15:48.998 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:49 compute-0 ovn_controller[146846]: 2026-02-28T10:15:49Z|00751|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f ovn-installed in OVS
Feb 28 10:15:49 compute-0 ovn_controller[146846]: 2026-02-28T10:15:49Z|00752|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f up in Southbound
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:49 compute-0 systemd-machined[209480]: New machine qemu-97-instance-00000054.
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.017 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[002733a6-0772-43c0-bf82-9255e92a6b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.018 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61b03a6e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.018 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.018 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61b03a6e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:49 compute-0 NetworkManager[49805]: <info>  [1772273749.0206] manager: (tap61b03a6e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Feb 28 10:15:49 compute-0 kernel: tap61b03a6e-b0: entered promiscuous mode
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:49 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000054.
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.024 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61b03a6e-b0, col_values=(('external_ids', {'iface-id': '69515a36-1a3b-4baa-87a6-97137a6ee885'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:49 compute-0 ovn_controller[146846]: 2026-02-28T10:15:49Z|00753|binding|INFO|Releasing lport 69515a36-1a3b-4baa-87a6-97137a6ee885 from this chassis (sb_readonly=0)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.033 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.033 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06580b24-a821-485a-85d2-e6b1a1520943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.035 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.036 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'env', 'PROCESS_TAG=haproxy-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61b03a6e-b883-4f32-b23d-d6fea7058b29.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.073 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Successfully created port: e6d4a5ad-f493-413a-a412-747c3a07943b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.314 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273749.3138394, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.315 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Started (Lifecycle Event)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.336 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.341 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273749.3142455, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.341 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Paused (Lifecycle Event)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.358 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.364 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.387 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.424 243456 DEBUG nova.network.neutron [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Updated VIF entry in instance network info cache for port b7b37fba-503f-4a0c-98ec-29224477d25f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.424 243456 DEBUG nova.network.neutron [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Updating instance_info_cache with network_info: [{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.432 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273749.431988, 30194398-5601-43ac-aae7-290d9d311d6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.432 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Started (Lifecycle Event)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.442 243456 DEBUG oslo_concurrency.lockutils [req-1754c703-2ad7-437c-ba06-7bd4a854dcc5 req-fffa5b89-afbe-42fd-a659-a7567cf043e1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705482483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:49 compute-0 podman[314146]: 2026-02-28 10:15:49.349769762 +0000 UTC m=+0.025152808 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.449 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.451 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273749.4321783, 30194398-5601-43ac-aae7-290d9d311d6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.452 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Paused (Lifecycle Event)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.462 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.482 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.487 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:49 compute-0 podman[314146]: 2026-02-28 10:15:49.493713149 +0000 UTC m=+0.169096195 container create a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.511 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.515 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.539 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:49 compute-0 systemd[1]: Started libpod-conmon-a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf.scope.
Feb 28 10:15:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88a09b3e5a203c55b77f888c9ee0d5e395e7e939941ed510be9ce729b0bfa45b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:49 compute-0 podman[314146]: 2026-02-28 10:15:49.656433955 +0000 UTC m=+0.331817001 container init a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:15:49 compute-0 podman[314146]: 2026-02-28 10:15:49.661088896 +0000 UTC m=+0.336471922 container start a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:15:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3705482483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:49 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [NOTICE]   (314214) : New worker (314216) forked
Feb 28 10:15:49 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [NOTICE]   (314214) : Loading success.
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.702 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Successfully created port: e5c02b5f-a54e-4612-b236-0f03ef62a3c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.750 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.752 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.757 243456 DEBUG nova.compute.manager [req-703f8ff0-8667-4142-96de-b686e014e57c req-e052361c-5d4e-4b07-a1ad-cc11e798cc5d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.758 243456 DEBUG oslo_concurrency.lockutils [req-703f8ff0-8667-4142-96de-b686e014e57c req-e052361c-5d4e-4b07-a1ad-cc11e798cc5d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.758 243456 DEBUG oslo_concurrency.lockutils [req-703f8ff0-8667-4142-96de-b686e014e57c req-e052361c-5d4e-4b07-a1ad-cc11e798cc5d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.759 243456 DEBUG oslo_concurrency.lockutils [req-703f8ff0-8667-4142-96de-b686e014e57c req-e052361c-5d4e-4b07-a1ad-cc11e798cc5d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.759 243456 DEBUG nova.compute.manager [req-703f8ff0-8667-4142-96de-b686e014e57c req-e052361c-5d4e-4b07-a1ad-cc11e798cc5d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Processing event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.760 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.762 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0deb8601-8462-41a1-9b14-115f8230f2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.763 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc26d2eda-01 in ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.771 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273749.7707856, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.771 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Resumed (Lifecycle Event)
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.773 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.777 243456 INFO nova.virt.libvirt.driver [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance spawned successfully.
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.778 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.781 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc26d2eda-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.781 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[814a52fc-bec6-44ae-9193-8f6297408684]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[321851c3-626c-4a05-b83b-630eb4368676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.793 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.802 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[50981bc9-4bd2-4491-9d17-8600dce2cf11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.807 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.810 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.810 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.811 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.811 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.811 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.811 243456 DEBUG nova.virt.libvirt.driver [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.828 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1df0c216-3778-4c92-a5e1-6adb9b389eb4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.846 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17485136-e353-451a-90c3-59febe28322c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 NetworkManager[49805]: <info>  [1772273749.8570] manager: (tapc26d2eda-00): new Veth device (/org/freedesktop/NetworkManager/Devices/332)
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.858 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[192d5d69-7396-4f22-8363-9b12b3050f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.871 243456 INFO nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Took 8.75 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.871 243456 DEBUG nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.889 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d01f2962-50af-4ae2-b046-487aff847421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.894 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b422b7a3-32e3-46e5-ad72-c12ee232b046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 NetworkManager[49805]: <info>  [1772273749.9223] device (tapc26d2eda-00): carrier: link connected
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.929 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[52b19e09-1285-4b49-97dd-a81afd3cdd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.935 243456 INFO nova.compute.manager [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Took 10.38 seconds to build instance.
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.950 243456 DEBUG nova.network.neutron [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Updated VIF entry in instance network info cache for port c2860307-4800-4c5e-adb7-ab75c130f158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.950 243456 DEBUG nova.network.neutron [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Updating instance_info_cache with network_info: [{"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd54f989-5850-4598-97b5-6625a7cb0089]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314235, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.953 243456 DEBUG oslo_concurrency.lockutils [None req-573f8ee4-6f6c-4479-91a6-22feedd5e61e 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:49 compute-0 nova_compute[243452]: 2026-02-28 10:15:49.965 243456 DEBUG oslo_concurrency.lockutils [req-dd34c8e8-0f60-455e-8f0d-a0acd7928b45 req-02013f64-1e2c-436c-90dc-53c15788bae3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6063a9-a74d-4fb1-a4de-7e63f72957e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:c38a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525720, 'tstamp': 525720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314236, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:49.994 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[588835cf-b70b-466e-8184-017d39eac003]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314237, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff54e4-748d-44a0-9a14-350ac69d208f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275855061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.058 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.060 243456 DEBUG nova.virt.libvirt.vif [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1581061762',display_name='tempest-ListServerFiltersTestJSON-instance-1581061762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1581061762',id=85,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-8a0rn971',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:43Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.061 243456 DEBUG nova.network.os_vif_util [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.062 243456 DEBUG nova.network.os_vif_util [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.065 243456 DEBUG nova.objects.instance [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.083 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <uuid>cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a</uuid>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <name>instance-00000055</name>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1581061762</nova:name>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:48</nova:creationTime>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:user uuid="cc52c9235e704591a857b1b746c257ea">tempest-ListServerFiltersTestJSON-866927856-project-member</nova:user>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:project uuid="692561f0659d4af58ab14beffb24eb70">tempest-ListServerFiltersTestJSON-866927856</nova:project>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <nova:port uuid="c2860307-4800-4c5e-adb7-ab75c130f158">
Feb 28 10:15:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="serial">cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="uuid">cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk">
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config">
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b4:f3:ce"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <target dev="tapc2860307-48"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/console.log" append="off"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:50 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:50 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:50 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:50 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:50 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.084 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Preparing to wait for external event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.084 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.085 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.085 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.086 243456 DEBUG nova.virt.libvirt.vif [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1581061762',display_name='tempest-ListServerFiltersTestJSON-instance-1581061762',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1581061762',id=85,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-8a0rn971',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name=
'tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:43Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.087 243456 DEBUG nova.network.os_vif_util [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[25d9b08a-12c9-47a4-98c9-fbd8e33b6237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.090 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.090 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.091 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.088 243456 DEBUG nova.network.os_vif_util [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:50 compute-0 NetworkManager[49805]: <info>  [1772273750.0946] manager: (tapc26d2eda-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Feb 28 10:15:50 compute-0 kernel: tapc26d2eda-00: entered promiscuous mode
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.093 243456 DEBUG os_vif [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.099 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.099 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.100 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 ovn_controller[146846]: 2026-02-28T10:15:50Z|00754|binding|INFO|Releasing lport eb88c6a8-1cff-4adc-aed5-eb769bcec23d from this chassis (sb_readonly=0)
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.104 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c26d2eda-0121-4d10-a6e6-2d194139720b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c26d2eda-0121-4d10-a6e6-2d194139720b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.105 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2860307-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.105 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2860307-48, col_values=(('external_ids', {'iface-id': 'c2860307-4800-4c5e-adb7-ab75c130f158', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:f3:ce', 'vm-uuid': 'cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:50 compute-0 NetworkManager[49805]: <info>  [1772273750.1078] manager: (tapc2860307-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.109 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73229d45-06f9-44c8-b4a2-548bb6d871a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.111 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c26d2eda-0121-4d10-a6e6-2d194139720b.pid.haproxy
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:15:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:50.113 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'env', 'PROCESS_TAG=haproxy-c26d2eda-0121-4d10-a6e6-2d194139720b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c26d2eda-0121-4d10-a6e6-2d194139720b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.120 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.122 243456 INFO os_vif [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48')
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.204 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.205 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.205 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No VIF found with MAC fa:16:3e:b4:f3:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.206 243456 INFO nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Using config drive
Feb 28 10:15:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 514 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.5 MiB/s wr, 195 op/s
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.236 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:50 compute-0 podman[314292]: 2026-02-28 10:15:50.442861808 +0000 UTC m=+0.044520773 container create 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:15:50 compute-0 systemd[1]: Started libpod-conmon-41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74.scope.
Feb 28 10:15:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:50 compute-0 podman[314292]: 2026-02-28 10:15:50.417208886 +0000 UTC m=+0.018867881 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9c8fa6800f8878181e70321d8347959c19d95496dffeec0d4d8b240d4163b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:50 compute-0 podman[314292]: 2026-02-28 10:15:50.566661709 +0000 UTC m=+0.168320714 container init 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:15:50 compute-0 podman[314292]: 2026-02-28 10:15:50.571967528 +0000 UTC m=+0.173626503 container start 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:15:50 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [NOTICE]   (314311) : New worker (314313) forked
Feb 28 10:15:50 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [NOTICE]   (314311) : Loading success.
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.764 243456 INFO nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Creating config drive at /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.768 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvtdabd4g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2275855061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:50 compute-0 ceph-mon[76304]: pgmap v1498: 305 pgs: 305 active+clean; 514 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 7.5 MiB/s wr, 195 op/s
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.796 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Successfully updated port: e6d4a5ad-f493-413a-a412-747c3a07943b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.811 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.812 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquired lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.812 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.901 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvtdabd4g" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.973 243456 DEBUG nova.storage.rbd_utils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:50 compute-0 nova_compute[243452]: 2026-02-28 10:15:50.977 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.007 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.014 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Successfully updated port: e5c02b5f-a54e-4612-b236-0f03ef62a3c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.037 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.037 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquired lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.038 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.101 243456 DEBUG oslo_concurrency.processutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.102 243456 INFO nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Deleting local config drive /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a/disk.config because it was imported into RBD.
Feb 28 10:15:51 compute-0 NetworkManager[49805]: <info>  [1772273751.1510] manager: (tapc2860307-48): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Feb 28 10:15:51 compute-0 kernel: tapc2860307-48: entered promiscuous mode
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:51 compute-0 ovn_controller[146846]: 2026-02-28T10:15:51Z|00755|binding|INFO|Claiming lport c2860307-4800-4c5e-adb7-ab75c130f158 for this chassis.
Feb 28 10:15:51 compute-0 ovn_controller[146846]: 2026-02-28T10:15:51Z|00756|binding|INFO|c2860307-4800-4c5e-adb7-ab75c130f158: Claiming fa:16:3e:b4:f3:ce 10.100.0.8
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.164 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:f3:ce 10.100.0.8'], port_security=['fa:16:3e:b4:f3:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c2860307-4800-4c5e-adb7-ab75c130f158) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.165 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c2860307-4800-4c5e-adb7-ab75c130f158 in datapath c26d2eda-0121-4d10-a6e6-2d194139720b bound to our chassis
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.166 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:15:51 compute-0 ovn_controller[146846]: 2026-02-28T10:15:51Z|00757|binding|INFO|Setting lport c2860307-4800-4c5e-adb7-ab75c130f158 ovn-installed in OVS
Feb 28 10:15:51 compute-0 ovn_controller[146846]: 2026-02-28T10:15:51Z|00758|binding|INFO|Setting lport c2860307-4800-4c5e-adb7-ab75c130f158 up in Southbound
Feb 28 10:15:51 compute-0 NetworkManager[49805]: <info>  [1772273751.1724] device (tapc2860307-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:51 compute-0 NetworkManager[49805]: <info>  [1772273751.1746] device (tapc2860307-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.181 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.186 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d72b5b09-7ad8-48e7-946b-77e24dc8efb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 systemd-machined[209480]: New machine qemu-98-instance-00000055.
Feb 28 10:15:51 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000055.
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.221 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ba29f1-2b98-45e6-8474-0b121e28139c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.225 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b75ed9-e1b7-4c60-94ac-875f60aeafcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.246 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[139f6948-c030-4505-a2c0-ebe93e94e69e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a25583b6-2f5c-4b5f-aeda-029261723ff9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314386, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e77930fc-785a-4864-9296-50b8efc39d9c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314388, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314388, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.286 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.289 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.289 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.289 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:51.289 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.675 243456 DEBUG nova.compute.manager [req-1d07a2bd-90a4-452c-8231-41ceca559ddf req-b016f152-b69d-4309-972b-b23a5a0b3370 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.676 243456 DEBUG oslo_concurrency.lockutils [req-1d07a2bd-90a4-452c-8231-41ceca559ddf req-b016f152-b69d-4309-972b-b23a5a0b3370 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.676 243456 DEBUG oslo_concurrency.lockutils [req-1d07a2bd-90a4-452c-8231-41ceca559ddf req-b016f152-b69d-4309-972b-b23a5a0b3370 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.676 243456 DEBUG oslo_concurrency.lockutils [req-1d07a2bd-90a4-452c-8231-41ceca559ddf req-b016f152-b69d-4309-972b-b23a5a0b3370 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.677 243456 DEBUG nova.compute.manager [req-1d07a2bd-90a4-452c-8231-41ceca559ddf req-b016f152-b69d-4309-972b-b23a5a0b3370 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Processing event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.818 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273751.8176916, cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.818 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] VM Started (Lifecycle Event)
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.820 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.824 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.828 243456 INFO nova.virt.libvirt.driver [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Instance spawned successfully.
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.828 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.855 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.860 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.865 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.866 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.866 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.866 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.867 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.867 243456 DEBUG nova.virt.libvirt.driver [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.891 243456 DEBUG nova.network.neutron [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Updating instance_info_cache with network_info: [{"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.909 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.910 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273751.8178873, cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.910 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] VM Paused (Lifecycle Event)
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.933 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Releasing lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.934 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Instance network_info: |[{"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.936 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Start _get_guest_xml network_info=[{"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.938 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.941 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273751.8236177, cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.941 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] VM Resumed (Lifecycle Event)
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.943 243456 WARNING nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.948 243456 DEBUG nova.virt.libvirt.host [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.948 243456 DEBUG nova.virt.libvirt.host [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.950 243456 DEBUG nova.virt.libvirt.host [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.951 243456 DEBUG nova.virt.libvirt.host [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.951 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.952 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cef5f00e-a384-43a4-928c-9acfa38d87af',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.952 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.952 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.952 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.953 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.953 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.953 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.953 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.953 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.954 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.954 243456 DEBUG nova.virt.hardware [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:51 compute-0 nova_compute[243452]: 2026-02-28 10:15:51.958 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.008 243456 INFO nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Took 8.14 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.009 243456 DEBUG nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.010 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.019 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.045 243456 DEBUG nova.network.neutron [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updating instance_info_cache with network_info: [{"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.049 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:52 compute-0 sudo[314433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:15:52 compute-0 sudo[314433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:52 compute-0 sudo[314433]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.078 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Releasing lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.078 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Instance network_info: |[{"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.080 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Start _get_guest_xml network_info=[{"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.091 243456 INFO nova.compute.manager [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Took 9.37 seconds to build instance.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.095 243456 WARNING nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.101 243456 DEBUG nova.virt.libvirt.host [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.101 243456 DEBUG nova.virt.libvirt.host [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.105 243456 DEBUG nova.virt.libvirt.host [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.106 243456 DEBUG nova.virt.libvirt.host [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.106 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.106 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.107 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.107 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.107 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.107 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.108 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.108 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.108 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.108 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.108 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.109 243456 DEBUG nova.virt.hardware [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.112 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:52 compute-0 sudo[314458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:15:52 compute-0 sudo[314458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.153 243456 DEBUG oslo_concurrency.lockutils [None req-7a119bf6-7e14-4c1e-8f14-4ed90c425400 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.159 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.160 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.160 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.160 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.160 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] No waiting events found dispatching network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.160 243456 WARNING nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received unexpected event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee for instance with vm_state active and task_state None.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Processing event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.161 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 WARNING nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state building and task_state spawning.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Received event network-changed-e6d4a5ad-f493-413a-a412-747c3a07943b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.162 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Refreshing instance network info cache due to event network-changed-e6d4a5ad-f493-413a-a412-747c3a07943b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.163 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.163 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.163 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Refreshing network info cache for port e6d4a5ad-f493-413a-a412-747c3a07943b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.164 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.189 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273752.1877253, 30194398-5601-43ac-aae7-290d9d311d6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.189 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Resumed (Lifecycle Event)
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.195 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.199 243456 INFO nova.virt.libvirt.driver [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance spawned successfully.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.199 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 540 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 8.9 MiB/s wr, 188 op/s
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.220 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.229 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.232 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.233 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.233 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.234 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.234 243456 DEBUG nova.virt.libvirt.driver [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.245 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.287 243456 INFO nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Took 9.98 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.287 243456 DEBUG nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.357 243456 INFO nova.compute.manager [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Took 11.40 seconds to build instance.
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.381 243456 DEBUG oslo_concurrency.lockutils [None req-56d88a3c-ff43-4d82-803d-f786ff64048c cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971283506' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.615 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.646 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.666 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:52 compute-0 sudo[314458]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1719877964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.781 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:15:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:15:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.833 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.849 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:52 compute-0 sudo[314581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:15:52 compute-0 sudo[314581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:52 compute-0 sudo[314581]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.887 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273737.8444078, c2763dc4-f643-48bd-964a-d4ab75938d0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:52 compute-0 nova_compute[243452]: 2026-02-28 10:15:52.888 243456 INFO nova.compute.manager [-] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] VM Stopped (Lifecycle Event)
Feb 28 10:15:52 compute-0 sudo[314636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:15:52 compute-0 sudo[314636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.068 243456 DEBUG nova.compute.manager [None req-b6a687f6-c04f-43cb-8a07-179112dca519 - - - - - -] [instance: c2763dc4-f643-48bd-964a-d4ab75938d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:53 compute-0 podman[314692]: 2026-02-28 10:15:53.210078076 +0000 UTC m=+0.062749725 container create d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:15:53 compute-0 systemd[1]: Started libpod-conmon-d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878.scope.
Feb 28 10:15:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555241482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:53 compute-0 podman[314692]: 2026-02-28 10:15:53.188344285 +0000 UTC m=+0.041015954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:53 compute-0 podman[314692]: 2026-02-28 10:15:53.290193969 +0000 UTC m=+0.142865658 container init d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:15:53 compute-0 ceph-mon[76304]: pgmap v1499: 305 pgs: 305 active+clean; 540 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 8.9 MiB/s wr, 188 op/s
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2971283506' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1719877964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:15:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2555241482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 podman[314692]: 2026-02-28 10:15:53.301161137 +0000 UTC m=+0.153832796 container start d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:15:53 compute-0 podman[314692]: 2026-02-28 10:15:53.305388676 +0000 UTC m=+0.158060335 container attach d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:15:53 compute-0 hungry_elgamal[314707]: 167 167
Feb 28 10:15:53 compute-0 systemd[1]: libpod-d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878.scope: Deactivated successfully.
Feb 28 10:15:53 compute-0 conmon[314707]: conmon d9c4c6e97377d3b1a5bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878.scope/container/memory.events
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.307 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.309 243456 DEBUG nova.virt.libvirt.vif [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1068423179',display_name='tempest-ListServerFiltersTestJSON-instance-1068423179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1068423179',id=86,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-zu8ssj30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:46Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=9119fb04-24fa-460c-a772-4ca398874b4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.310 243456 DEBUG nova.network.os_vif_util [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.311 243456 DEBUG nova.network.os_vif_util [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.312 243456 DEBUG nova.objects.instance [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9119fb04-24fa-460c-a772-4ca398874b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:53 compute-0 podman[314714]: 2026-02-28 10:15:53.366291669 +0000 UTC m=+0.038226736 container died d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3074b5c4f88527ee60032fc8bb5b06ee83d3ab37a50aee8771705199f6ff3b-merged.mount: Deactivated successfully.
Feb 28 10:15:53 compute-0 podman[314714]: 2026-02-28 10:15:53.414963117 +0000 UTC m=+0.086898144 container remove d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:15:53 compute-0 systemd[1]: libpod-conmon-d9c4c6e97377d3b1a5bf7014acca58ba686e8033339bf7703e1e4b2afd3f4878.scope: Deactivated successfully.
Feb 28 10:15:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:15:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106944190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.520 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.523 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <uuid>9119fb04-24fa-460c-a772-4ca398874b4e</uuid>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <name>instance-00000056</name>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <memory>196608</memory>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1068423179</nova:name>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:51</nova:creationTime>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.micro">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:memory>192</nova:memory>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:user uuid="cc52c9235e704591a857b1b746c257ea">tempest-ListServerFiltersTestJSON-866927856-project-member</nova:user>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:project uuid="692561f0659d4af58ab14beffb24eb70">tempest-ListServerFiltersTestJSON-866927856</nova:project>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:port uuid="e6d4a5ad-f493-413a-a412-747c3a07943b">
Feb 28 10:15:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="serial">9119fb04-24fa-460c-a772-4ca398874b4e</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="uuid">9119fb04-24fa-460c-a772-4ca398874b4e</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9119fb04-24fa-460c-a772-4ca398874b4e_disk">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9119fb04-24fa-460c-a772-4ca398874b4e_disk.config">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b8:3c:8a"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="tape6d4a5ad-f4"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/console.log" append="off"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.523 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Preparing to wait for external event network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.524 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.524 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.524 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.525 243456 DEBUG nova.virt.libvirt.vif [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1068423179',display_name='tempest-ListServerFiltersTestJSON-instance-1068423179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1068423179',id=86,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-zu8ssj30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:46Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=9119fb04-24fa-460c-a772-4ca398874b4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.525 243456 DEBUG nova.network.os_vif_util [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.527 243456 DEBUG nova.network.os_vif_util [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.528 243456 DEBUG os_vif [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.530 243456 DEBUG nova.virt.libvirt.vif [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-121294672',display_name='tempest-ServerActionsTestOtherA-server-121294672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-121294672',id=87,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-0lq9qnn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:47Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=7a3c0169-3430-4dbe-b080-9ae7b56a101b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.530 243456 DEBUG nova.network.os_vif_util [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.531 243456 DEBUG nova.network.os_vif_util [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.532 243456 DEBUG nova.objects.instance [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a3c0169-3430-4dbe-b080-9ae7b56a101b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.534 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.534 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.539 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6d4a5ad-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.540 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6d4a5ad-f4, col_values=(('external_ids', {'iface-id': 'e6d4a5ad-f493-413a-a412-747c3a07943b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:3c:8a', 'vm-uuid': '9119fb04-24fa-460c-a772-4ca398874b4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 NetworkManager[49805]: <info>  [1772273753.5424] manager: (tape6d4a5ad-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.548 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <uuid>7a3c0169-3430-4dbe-b080-9ae7b56a101b</uuid>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <name>instance-00000057</name>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestOtherA-server-121294672</nova:name>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:15:52</nova:creationTime>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:user uuid="14b2d28379164786ad68563acb83a50a">tempest-ServerActionsTestOtherA-1764257371-project-member</nova:user>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:project uuid="fd70835696bf4e12a062516e9de5527d">tempest-ServerActionsTestOtherA-1764257371</nova:project>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <nova:port uuid="e5c02b5f-a54e-4612-b236-0f03ef62a3c7">
Feb 28 10:15:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="serial">7a3c0169-3430-4dbe-b080-9ae7b56a101b</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="uuid">7a3c0169-3430-4dbe-b080-9ae7b56a101b</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:15:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:77:14:24"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <target dev="tape5c02b5f-a5"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/console.log" append="off"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:15:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:15:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:15:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:15:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:15:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.549 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Preparing to wait for external event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.549 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.549 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.550 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.550 243456 DEBUG nova.virt.libvirt.vif [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:15:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-121294672',display_name='tempest-ServerActionsTestOtherA-server-121294672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-121294672',id=87,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-0lq9qnn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerA
ctionsTestOtherA-1764257371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:15:47Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=7a3c0169-3430-4dbe-b080-9ae7b56a101b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.550 243456 DEBUG nova.network.os_vif_util [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.551 243456 DEBUG nova.network.os_vif_util [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.551 243456 DEBUG os_vif [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.555 243456 INFO os_vif [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4')
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.555 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.556 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.556 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.558 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5c02b5f-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.559 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape5c02b5f-a5, col_values=(('external_ids', {'iface-id': 'e5c02b5f-a54e-4612-b236-0f03ef62a3c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:14:24', 'vm-uuid': '7a3c0169-3430-4dbe-b080-9ae7b56a101b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.560 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:15:53 compute-0 NetworkManager[49805]: <info>  [1772273753.5638] manager: (tape5c02b5f-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.571 243456 INFO os_vif [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5')
Feb 28 10:15:53 compute-0 podman[314736]: 2026-02-28 10:15:53.600275198 +0000 UTC m=+0.046647223 container create 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.612 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.613 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.613 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] No VIF found with MAC fa:16:3e:b8:3c:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.614 243456 INFO nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Using config drive
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.645 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:53 compute-0 systemd[1]: Started libpod-conmon-8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b.scope.
Feb 28 10:15:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:53 compute-0 podman[314736]: 2026-02-28 10:15:53.579497794 +0000 UTC m=+0.025869849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:53 compute-0 podman[314736]: 2026-02-28 10:15:53.698676115 +0000 UTC m=+0.145048150 container init 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:15:53 compute-0 podman[314736]: 2026-02-28 10:15:53.705335382 +0000 UTC m=+0.151707407 container start 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:15:53 compute-0 podman[314736]: 2026-02-28 10:15:53.708134891 +0000 UTC m=+0.154506916 container attach 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.925 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.926 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.927 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] No VIF found with MAC fa:16:3e:77:14:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.928 243456 INFO nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Using config drive
Feb 28 10:15:53 compute-0 nova_compute[243452]: 2026-02-28 10:15:53.955 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:54 compute-0 intelligent_lewin[314776]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:15:54 compute-0 intelligent_lewin[314776]: --> All data devices are unavailable
Feb 28 10:15:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 1010 KiB/s rd, 8.9 MiB/s wr, 203 op/s
Feb 28 10:15:54 compute-0 systemd[1]: libpod-8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b.scope: Deactivated successfully.
Feb 28 10:15:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2106944190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.312 243456 DEBUG nova.compute.manager [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.313 243456 DEBUG oslo_concurrency.lockutils [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.313 243456 DEBUG oslo_concurrency.lockutils [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.313 243456 DEBUG oslo_concurrency.lockutils [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.314 243456 DEBUG nova.compute.manager [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] No waiting events found dispatching network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.314 243456 WARNING nova.compute.manager [req-0b85207e-e91c-4d3c-b1b0-4c799008daf0 req-b802c2c8-d738-4ae4-8873-615310f2f366 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received unexpected event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 for instance with vm_state active and task_state None.
Feb 28 10:15:54 compute-0 podman[314814]: 2026-02-28 10:15:54.3237399 +0000 UTC m=+0.075669738 container died 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8036111b3ed41dfec686ac7a5d849406a8846e6f225a80560ad9098baab1cb0-merged.mount: Deactivated successfully.
Feb 28 10:15:54 compute-0 podman[314814]: 2026-02-28 10:15:54.366003599 +0000 UTC m=+0.117933437 container remove 8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lewin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:15:54 compute-0 systemd[1]: libpod-conmon-8b1330e10f69609a3bfc8e891e8600798cd6b9564909bc24af7b1d795c35b83b.scope: Deactivated successfully.
Feb 28 10:15:54 compute-0 sudo[314636]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:54 compute-0 sudo[314829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.472 243456 INFO nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Creating config drive at /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config
Feb 28 10:15:54 compute-0 sudo[314829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:54 compute-0 sudo[314829]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.492 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hd4l1rp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:54 compute-0 sudo[314854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:15:54 compute-0 sudo[314854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.540 243456 INFO nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Creating config drive at /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.551 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeui7va8d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.636 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hd4l1rp" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.694 243456 DEBUG nova.storage.rbd_utils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] rbd image 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.702 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.745 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeui7va8d" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.792 243456 DEBUG nova.storage.rbd_utils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] rbd image 9119fb04-24fa-460c-a772-4ca398874b4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.802 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config 9119fb04-24fa-460c-a772-4ca398874b4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:15:54 compute-0 podman[314949]: 2026-02-28 10:15:54.887991675 +0000 UTC m=+0.060234284 container create 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.891 243456 DEBUG oslo_concurrency.processutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config 7a3c0169-3430-4dbe-b080-9ae7b56a101b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.900 243456 INFO nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Deleting local config drive /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b/disk.config because it was imported into RBD.
Feb 28 10:15:54 compute-0 systemd[1]: Started libpod-conmon-1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563.scope.
Feb 28 10:15:54 compute-0 podman[314949]: 2026-02-28 10:15:54.864374242 +0000 UTC m=+0.036616871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:54 compute-0 NetworkManager[49805]: <info>  [1772273754.9709] manager: (tape5c02b5f-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Feb 28 10:15:54 compute-0 kernel: tape5c02b5f-a5: entered promiscuous mode
Feb 28 10:15:54 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:54 compute-0 ovn_controller[146846]: 2026-02-28T10:15:54Z|00759|binding|INFO|Claiming lport e5c02b5f-a54e-4612-b236-0f03ef62a3c7 for this chassis.
Feb 28 10:15:54 compute-0 ovn_controller[146846]: 2026-02-28T10:15:54Z|00760|binding|INFO|e5c02b5f-a54e-4612-b236-0f03ef62a3c7: Claiming fa:16:3e:77:14:24 10.100.0.4
Feb 28 10:15:54 compute-0 podman[314949]: 2026-02-28 10:15:54.984729796 +0000 UTC m=+0.156972405 container init 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:15:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:54.986 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:14:24 10.100.0.4'], port_security=['fa:16:3e:77:14:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7a3c0169-3430-4dbe-b080-9ae7b56a101b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8eaa742-8504-4c21-8533-267de16b101e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e5c02b5f-a54e-4612-b236-0f03ef62a3c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:54.987 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e5c02b5f-a54e-4612-b236-0f03ef62a3c7 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 bound to our chassis
Feb 28 10:15:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:54.988 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:15:54 compute-0 ovn_controller[146846]: 2026-02-28T10:15:54Z|00761|binding|INFO|Setting lport e5c02b5f-a54e-4612-b236-0f03ef62a3c7 ovn-installed in OVS
Feb 28 10:15:54 compute-0 ovn_controller[146846]: 2026-02-28T10:15:54Z|00762|binding|INFO|Setting lport e5c02b5f-a54e-4612-b236-0f03ef62a3c7 up in Southbound
Feb 28 10:15:54 compute-0 podman[314949]: 2026-02-28 10:15:54.993631266 +0000 UTC m=+0.165873875 container start 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:15:54 compute-0 podman[314949]: 2026-02-28 10:15:54.998362889 +0000 UTC m=+0.170605498 container attach 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:54.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 systemd[1]: libpod-1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563.scope: Deactivated successfully.
Feb 28 10:15:55 compute-0 thirsty_boyd[314982]: 167 167
Feb 28 10:15:55 compute-0 podman[314949]: 2026-02-28 10:15:55.010193451 +0000 UTC m=+0.182436050 container died 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:15:55 compute-0 conmon[314982]: conmon 1bc327a93f6a33cd55fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563.scope/container/memory.events
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.010 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccf2bd9-ddcc-4e9c-bbf8-e79f725d4f0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-214fae667fdbb0eae20439dda0b7cc8a303bdff39bf3f4694c8a11124fabd911-merged.mount: Deactivated successfully.
Feb 28 10:15:55 compute-0 systemd-udevd[315016]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:55 compute-0 systemd-machined[209480]: New machine qemu-99-instance-00000057.
Feb 28 10:15:55 compute-0 podman[314949]: 2026-02-28 10:15:55.044498296 +0000 UTC m=+0.216740895 container remove 1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:15:55 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000057.
Feb 28 10:15:55 compute-0 NetworkManager[49805]: <info>  [1772273755.0570] device (tape5c02b5f-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:55 compute-0 NetworkManager[49805]: <info>  [1772273755.0577] device (tape5c02b5f-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.063 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2e427e04-c007-4ae9-a9a4-c16bd0932bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.068 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6041552-ee3b-4f91-a40a-82f1d3bacc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 systemd[1]: libpod-conmon-1bc327a93f6a33cd55feeec004690d4f243217708de3240906254cd42bf04563.scope: Deactivated successfully.
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.095 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9362cad8-c33b-4c0f-b478-28bb305902a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.113 243456 DEBUG oslo_concurrency.processutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config 9119fb04-24fa-460c-a772-4ca398874b4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.114 243456 INFO nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Deleting local config drive /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e/disk.config because it was imported into RBD.
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.118 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c509b0b1-1495-4732-ae8b-481de6223c37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315029, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.130 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e186809a-f36b-4f13-b1a4-b79ca0a1db39]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315032, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315032, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.131 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.133 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.143 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.144 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.144 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.144 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:55 compute-0 kernel: tape6d4a5ad-f4: entered promiscuous mode
Feb 28 10:15:55 compute-0 NetworkManager[49805]: <info>  [1772273755.1831] manager: (tape6d4a5ad-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Feb 28 10:15:55 compute-0 ovn_controller[146846]: 2026-02-28T10:15:55Z|00763|binding|INFO|Claiming lport e6d4a5ad-f493-413a-a412-747c3a07943b for this chassis.
Feb 28 10:15:55 compute-0 ovn_controller[146846]: 2026-02-28T10:15:55Z|00764|binding|INFO|e6d4a5ad-f493-413a-a412-747c3a07943b: Claiming fa:16:3e:b8:3c:8a 10.100.0.5
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 systemd-udevd[315022]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.192 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3c:8a 10.100.0.5'], port_security=['fa:16:3e:b8:3c:8a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9119fb04-24fa-460c-a772-4ca398874b4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e6d4a5ad-f493-413a-a412-747c3a07943b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.193 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e6d4a5ad-f493-413a-a412-747c3a07943b in datapath c26d2eda-0121-4d10-a6e6-2d194139720b bound to our chassis
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.195 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:15:55 compute-0 ovn_controller[146846]: 2026-02-28T10:15:55Z|00765|binding|INFO|Setting lport e6d4a5ad-f493-413a-a412-747c3a07943b ovn-installed in OVS
Feb 28 10:15:55 compute-0 ovn_controller[146846]: 2026-02-28T10:15:55Z|00766|binding|INFO|Setting lport e6d4a5ad-f493-413a-a412-747c3a07943b up in Southbound
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 NetworkManager[49805]: <info>  [1772273755.2089] device (tape6d4a5ad-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:15:55 compute-0 NetworkManager[49805]: <info>  [1772273755.2099] device (tape6d4a5ad-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.220 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f10973-4a4b-451a-a08c-bcbb3b278757]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 systemd-machined[209480]: New machine qemu-100-instance-00000056.
Feb 28 10:15:55 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000056.
Feb 28 10:15:55 compute-0 podman[315044]: 2026-02-28 10:15:55.247108083 +0000 UTC m=+0.070712529 container create f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.259 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d72bf264-762d-449a-9426-5cc301a9b516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.268 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33e2158e-8535-40a4-aba9-d3ff1281848c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9389f5-d7a3-4b59-ad5a-dd7836ebecfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 systemd[1]: Started libpod-conmon-f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9.scope.
Feb 28 10:15:55 compute-0 ceph-mon[76304]: pgmap v1500: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 1010 KiB/s rd, 8.9 MiB/s wr, 203 op/s
Feb 28 10:15:55 compute-0 podman[315044]: 2026-02-28 10:15:55.225490565 +0000 UTC m=+0.049095031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded6445a36cd7a2f5171f069af38287364f92a04f81cfbe7c089913c098aaf32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded6445a36cd7a2f5171f069af38287364f92a04f81cfbe7c089913c098aaf32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded6445a36cd7a2f5171f069af38287364f92a04f81cfbe7c089913c098aaf32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ded6445a36cd7a2f5171f069af38287364f92a04f81cfbe7c089913c098aaf32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54fdf129-c399-4d46-bf28-2b964ac0f410]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315099, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 podman[315044]: 2026-02-28 10:15:55.350760177 +0000 UTC m=+0.174364623 container init f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:15:55 compute-0 podman[315044]: 2026-02-28 10:15:55.358698941 +0000 UTC m=+0.182303387 container start f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.360 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10377abf-fdbd-4a02-9c46-d6f4c9e0b627]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315116, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315116, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:15:55 compute-0 podman[315044]: 2026-02-28 10:15:55.363726942 +0000 UTC m=+0.187331418 container attach f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.365 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.369 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.369 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:15:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:55.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.442 243456 DEBUG nova.compute.manager [req-8b3be542-cb64-4653-b274-8b9018c7ad3a req-18986175-4253-41d7-8642-b6c6b5dd88a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.445 243456 DEBUG oslo_concurrency.lockutils [req-8b3be542-cb64-4653-b274-8b9018c7ad3a req-18986175-4253-41d7-8642-b6c6b5dd88a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.446 243456 DEBUG oslo_concurrency.lockutils [req-8b3be542-cb64-4653-b274-8b9018c7ad3a req-18986175-4253-41d7-8642-b6c6b5dd88a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.446 243456 DEBUG oslo_concurrency.lockutils [req-8b3be542-cb64-4653-b274-8b9018c7ad3a req-18986175-4253-41d7-8642-b6c6b5dd88a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.447 243456 DEBUG nova.compute.manager [req-8b3be542-cb64-4653-b274-8b9018c7ad3a req-18986175-4253-41d7-8642-b6c6b5dd88a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Processing event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.496 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273755.4953623, 7a3c0169-3430-4dbe-b080-9ae7b56a101b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.497 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] VM Started (Lifecycle Event)
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.499 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.504 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.508 243456 INFO nova.virt.libvirt.driver [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Instance spawned successfully.
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.508 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.535 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.543 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.549 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.551 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.551 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.552 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.552 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.553 243456 DEBUG nova.virt.libvirt.driver [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.587 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.590 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273755.496954, 7a3c0169-3430-4dbe-b080-9ae7b56a101b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.591 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] VM Paused (Lifecycle Event)
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.619 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.623 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273755.5026586, 7a3c0169-3430-4dbe-b080-9ae7b56a101b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.624 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] VM Resumed (Lifecycle Event)
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.656 243456 INFO nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Took 8.41 seconds to spawn the instance on the hypervisor.
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.658 243456 DEBUG nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:55 compute-0 gifted_black[315097]: {
Feb 28 10:15:55 compute-0 gifted_black[315097]:     "0": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:         {
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "devices": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "/dev/loop3"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             ],
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_name": "ceph_lv0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_size": "21470642176",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "name": "ceph_lv0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "tags": {
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_name": "ceph",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.crush_device_class": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.encrypted": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.objectstore": "bluestore",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_id": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.vdo": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.with_tpm": "0"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             },
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "vg_name": "ceph_vg0"
Feb 28 10:15:55 compute-0 gifted_black[315097]:         }
Feb 28 10:15:55 compute-0 gifted_black[315097]:     ],
Feb 28 10:15:55 compute-0 gifted_black[315097]:     "1": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:         {
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "devices": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "/dev/loop4"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             ],
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_name": "ceph_lv1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_size": "21470642176",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "name": "ceph_lv1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "tags": {
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_name": "ceph",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.crush_device_class": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.encrypted": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.objectstore": "bluestore",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_id": "1",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.vdo": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.with_tpm": "0"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             },
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "vg_name": "ceph_vg1"
Feb 28 10:15:55 compute-0 gifted_black[315097]:         }
Feb 28 10:15:55 compute-0 gifted_black[315097]:     ],
Feb 28 10:15:55 compute-0 gifted_black[315097]:     "2": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:         {
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "devices": [
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "/dev/loop5"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             ],
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_name": "ceph_lv2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_size": "21470642176",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "name": "ceph_lv2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "tags": {
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.cluster_name": "ceph",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.crush_device_class": "",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.encrypted": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.objectstore": "bluestore",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osd_id": "2",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.vdo": "0",
Feb 28 10:15:55 compute-0 gifted_black[315097]:                 "ceph.with_tpm": "0"
Feb 28 10:15:55 compute-0 gifted_black[315097]:             },
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "type": "block",
Feb 28 10:15:55 compute-0 gifted_black[315097]:             "vg_name": "ceph_vg2"
Feb 28 10:15:55 compute-0 gifted_black[315097]:         }
Feb 28 10:15:55 compute-0 gifted_black[315097]:     ]
Feb 28 10:15:55 compute-0 gifted_black[315097]: }
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.666 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:55 compute-0 nova_compute[243452]: 2026-02-28 10:15:55.674 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:55 compute-0 systemd[1]: libpod-f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9.scope: Deactivated successfully.
Feb 28 10:15:55 compute-0 conmon[315097]: conmon f533cfc8d6d90dd33b04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9.scope/container/memory.events
Feb 28 10:15:55 compute-0 podman[315175]: 2026-02-28 10:15:55.766725503 +0000 UTC m=+0.036446336 container died f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ded6445a36cd7a2f5171f069af38287364f92a04f81cfbe7c089913c098aaf32-merged.mount: Deactivated successfully.
Feb 28 10:15:55 compute-0 podman[315175]: 2026-02-28 10:15:55.98392095 +0000 UTC m=+0.253641753 container remove f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:15:55 compute-0 systemd[1]: libpod-conmon-f533cfc8d6d90dd33b04423d0da7665b0b9fafdaaf2aabe6eb8516b6c6b81cf9.scope: Deactivated successfully.
Feb 28 10:15:56 compute-0 sudo[314854]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:56 compute-0 sudo[315189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:15:56 compute-0 sudo[315189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:56 compute-0 sudo[315189]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:56 compute-0 sudo[315214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:15:56 compute-0 sudo[315214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.4 MiB/s wr, 328 op/s
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.533903174 +0000 UTC m=+0.113325757 container create 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.453317109 +0000 UTC m=+0.032739752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:56 compute-0 systemd[1]: Started libpod-conmon-2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4.scope.
Feb 28 10:15:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.636029786 +0000 UTC m=+0.215452369 container init 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.644241017 +0000 UTC m=+0.223663620 container start 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:15:56 compute-0 systemd[1]: libpod-2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4.scope: Deactivated successfully.
Feb 28 10:15:56 compute-0 cool_banzai[315263]: 167 167
Feb 28 10:15:56 compute-0 conmon[315263]: conmon 2014469c3fa0b1934d8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4.scope/container/memory.events
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.736858091 +0000 UTC m=+0.316280704 container attach 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:15:56 compute-0 podman[315248]: 2026-02-28 10:15:56.738269981 +0000 UTC m=+0.317692574 container died 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6e7933ffc8a466e301c4b7f8b42d7d549e764e97233fc7025c8ea78fc8fe46-merged.mount: Deactivated successfully.
Feb 28 10:15:57 compute-0 podman[315248]: 2026-02-28 10:15:57.047110575 +0000 UTC m=+0.626533158 container remove 2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.090 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.091 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273755.7195213, 9119fb04-24fa-460c-a772-4ca398874b4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.091 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] VM Started (Lifecycle Event)
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.116 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.119 243456 INFO nova.compute.manager [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Took 11.76 seconds to build instance.
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.129 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273755.7196217, 9119fb04-24fa-460c-a772-4ca398874b4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.130 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] VM Paused (Lifecycle Event)
Feb 28 10:15:57 compute-0 systemd[1]: libpod-conmon-2014469c3fa0b1934d8f3d2fe5c03a4ae253538489db974a4011bbe90b531dc4.scope: Deactivated successfully.
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.136 243456 DEBUG oslo_concurrency.lockutils [None req-a36e0175-55b4-4de2-bd24-08126d49d91a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.150 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.157 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:15:57 compute-0 nova_compute[243452]: 2026-02-28 10:15:57.182 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:15:57 compute-0 podman[315289]: 2026-02-28 10:15:57.229332409 +0000 UTC m=+0.042135166 container create 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:15:57 compute-0 systemd[1]: Started libpod-conmon-7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6.scope.
Feb 28 10:15:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713e0949ad07ad05075903c797be65a830a77684ebca73f0c0c8af7ed2c9bdeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713e0949ad07ad05075903c797be65a830a77684ebca73f0c0c8af7ed2c9bdeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:57 compute-0 podman[315289]: 2026-02-28 10:15:57.213593036 +0000 UTC m=+0.026395823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713e0949ad07ad05075903c797be65a830a77684ebca73f0c0c8af7ed2c9bdeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713e0949ad07ad05075903c797be65a830a77684ebca73f0c0c8af7ed2c9bdeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:15:57 compute-0 podman[315289]: 2026-02-28 10:15:57.32751294 +0000 UTC m=+0.140315697 container init 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:15:57 compute-0 podman[315289]: 2026-02-28 10:15:57.335375341 +0000 UTC m=+0.148178098 container start 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:15:57 compute-0 podman[315289]: 2026-02-28 10:15:57.338621662 +0000 UTC m=+0.151424449 container attach 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:15:57 compute-0 ceph-mon[76304]: pgmap v1501: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.4 MiB/s wr, 328 op/s
Feb 28 10:15:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:57.853 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:15:57.855 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:58 compute-0 lvm[315385]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:15:58 compute-0 lvm[315385]: VG ceph_vg1 finished
Feb 28 10:15:58 compute-0 lvm[315384]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:15:58 compute-0 lvm[315384]: VG ceph_vg0 finished
Feb 28 10:15:58 compute-0 lvm[315387]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:15:58 compute-0 lvm[315387]: VG ceph_vg2 finished
Feb 28 10:15:58 compute-0 lvm[315388]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:15:58 compute-0 lvm[315388]: VG ceph_vg0 finished
Feb 28 10:15:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Feb 28 10:15:58 compute-0 brave_stonebraker[315304]: {}
Feb 28 10:15:58 compute-0 systemd[1]: libpod-7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6.scope: Deactivated successfully.
Feb 28 10:15:58 compute-0 systemd[1]: libpod-7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6.scope: Consumed 1.305s CPU time.
Feb 28 10:15:58 compute-0 podman[315289]: 2026-02-28 10:15:58.27318605 +0000 UTC m=+1.085988817 container died 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-713e0949ad07ad05075903c797be65a830a77684ebca73f0c0c8af7ed2c9bdeb-merged.mount: Deactivated successfully.
Feb 28 10:15:58 compute-0 podman[315289]: 2026-02-28 10:15:58.321837358 +0000 UTC m=+1.134640125 container remove 7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:15:58 compute-0 systemd[1]: libpod-conmon-7455f03762e2e4a4f96a47a9d53ea385cdc33cff11a227b68cbae29c24b1f6a6.scope: Deactivated successfully.
Feb 28 10:15:58 compute-0 sudo[315214]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:15:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:15:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:15:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:15:58 compute-0 sudo[315401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:15:58 compute-0 sudo[315401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.447 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Updated VIF entry in instance network info cache for port e6d4a5ad-f493-413a-a412-747c3a07943b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.448 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Updating instance_info_cache with network_info: [{"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:15:58 compute-0 sudo[315401]: pam_unix(sudo:session): session closed for user root
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.468 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9119fb04-24fa-460c-a772-4ca398874b4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.469 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-changed-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.469 243456 DEBUG nova.compute.manager [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Refreshing instance network info cache due to event network-changed-e5c02b5f-a54e-4612-b236-0f03ef62a3c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.469 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.469 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.469 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Refreshing network info cache for port e5c02b5f-a54e-4612-b236-0f03ef62a3c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.561 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.861 243456 DEBUG nova.compute.manager [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.861 243456 DEBUG oslo_concurrency.lockutils [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.861 243456 DEBUG oslo_concurrency.lockutils [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.861 243456 DEBUG oslo_concurrency.lockutils [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.862 243456 DEBUG nova.compute.manager [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] No waiting events found dispatching network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:15:58 compute-0 nova_compute[243452]: 2026-02-28 10:15:58.862 243456 WARNING nova.compute.manager [req-9c7f4e7f-633e-4f5f-b331-61dc9b0d6f66 req-74641021-bf6a-4ae2-aa83-8a7b57d3f3d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received unexpected event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 for instance with vm_state active and task_state None.
Feb 28 10:15:59 compute-0 ceph-mon[76304]: pgmap v1502: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Feb 28 10:15:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:15:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 2.4 MiB/s wr, 324 op/s
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:00 compute-0 ceph-mon[76304]: pgmap v1503: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 2.4 MiB/s wr, 324 op/s
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.184 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updated VIF entry in instance network info cache for port e5c02b5f-a54e-4612-b236-0f03ef62a3c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.185 243456 DEBUG nova.network.neutron [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updating instance_info_cache with network_info: [{"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.217 243456 DEBUG oslo_concurrency.lockutils [req-d55102b7-be84-47de-9429-f39ae5f0ed2a req-f05c6dfb-5cb7-4879-a80e-cb723f310694 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.858 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Received event network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.858 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.859 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.860 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.860 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Processing event network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.861 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.862 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing instance network info cache due to event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.862 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.862 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.863 243456 DEBUG nova.network.neutron [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.870 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.882 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273761.8811324, 9119fb04-24fa-460c-a772-4ca398874b4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.884 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] VM Resumed (Lifecycle Event)
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.901 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.917 243456 INFO nova.virt.libvirt.driver [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Instance spawned successfully.
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.918 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.930 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.938 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.955 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.958 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.959 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.960 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.961 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.962 243456 DEBUG nova.virt.libvirt.driver [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:01 compute-0 nova_compute[243452]: 2026-02-28 10:16:01.974 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:16:02 compute-0 nova_compute[243452]: 2026-02-28 10:16:02.037 243456 INFO nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Took 15.95 seconds to spawn the instance on the hypervisor.
Feb 28 10:16:02 compute-0 nova_compute[243452]: 2026-02-28 10:16:02.038 243456 DEBUG nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:02 compute-0 nova_compute[243452]: 2026-02-28 10:16:02.119 243456 INFO nova.compute.manager [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Took 17.34 seconds to build instance.
Feb 28 10:16:02 compute-0 nova_compute[243452]: 2026-02-28 10:16:02.149 243456 DEBUG oslo_concurrency.lockutils [None req-b890e2fe-d689-4c70-917c-8b3611efe53b cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 318 op/s
Feb 28 10:16:03 compute-0 ceph-mon[76304]: pgmap v1504: 305 pgs: 305 active+clean; 544 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 318 op/s
Feb 28 10:16:03 compute-0 ovn_controller[146846]: 2026-02-28T10:16:03Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:16:03 compute-0 ovn_controller[146846]: 2026-02-28T10:16:03Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:16:03 compute-0 nova_compute[243452]: 2026-02-28 10:16:03.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:03 compute-0 nova_compute[243452]: 2026-02-28 10:16:03.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 555 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 778 KiB/s wr, 313 op/s
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.199 243456 DEBUG nova.network.neutron [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updated VIF entry in instance network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.200 243456 DEBUG nova.network.neutron [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updating instance_info_cache with network_info: [{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.263 243456 DEBUG nova.compute.manager [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-changed-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.264 243456 DEBUG nova.compute.manager [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Refreshing instance network info cache due to event network-changed-e5c02b5f-a54e-4612-b236-0f03ef62a3c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.264 243456 DEBUG oslo_concurrency.lockutils [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.265 243456 DEBUG oslo_concurrency.lockutils [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.265 243456 DEBUG nova.network.neutron [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Refreshing network info cache for port e5c02b5f-a54e-4612-b236-0f03ef62a3c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.270 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.271 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Received event network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.271 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.272 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.272 243456 DEBUG oslo_concurrency.lockutils [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.273 243456 DEBUG nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] No waiting events found dispatching network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:05 compute-0 nova_compute[243452]: 2026-02-28 10:16:05.273 243456 WARNING nova.compute.manager [req-585d9333-7fdf-4e22-b74c-e2555f870cc8 req-ef8edadc-6624-4496-9e8b-06ff55331d29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Received unexpected event network-vif-plugged-e6d4a5ad-f493-413a-a412-747c3a07943b for instance with vm_state building and task_state spawning.
Feb 28 10:16:05 compute-0 ceph-mon[76304]: pgmap v1505: 305 pgs: 305 active+clean; 555 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 778 KiB/s wr, 313 op/s
Feb 28 10:16:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 575 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 2.1 MiB/s wr, 371 op/s
Feb 28 10:16:07 compute-0 ceph-mon[76304]: pgmap v1506: 305 pgs: 305 active+clean; 575 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 2.1 MiB/s wr, 371 op/s
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.455 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.457 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.458 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.458 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.459 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.462 243456 INFO nova.compute.manager [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Terminating instance
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.464 243456 DEBUG nova.compute.manager [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:07 compute-0 kernel: tape5c02b5f-a5 (unregistering): left promiscuous mode
Feb 28 10:16:07 compute-0 NetworkManager[49805]: <info>  [1772273767.5189] device (tape5c02b5f-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:07 compute-0 ovn_controller[146846]: 2026-02-28T10:16:07Z|00767|binding|INFO|Releasing lport e5c02b5f-a54e-4612-b236-0f03ef62a3c7 from this chassis (sb_readonly=0)
Feb 28 10:16:07 compute-0 ovn_controller[146846]: 2026-02-28T10:16:07Z|00768|binding|INFO|Setting lport e5c02b5f-a54e-4612-b236-0f03ef62a3c7 down in Southbound
Feb 28 10:16:07 compute-0 ovn_controller[146846]: 2026-02-28T10:16:07Z|00769|binding|INFO|Removing iface tape5c02b5f-a5 ovn-installed in OVS
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.542 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:14:24 10.100.0.4'], port_security=['fa:16:3e:77:14:24 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7a3c0169-3430-4dbe-b080-9ae7b56a101b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e5c02b5f-a54e-4612-b236-0f03ef62a3c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.543 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e5c02b5f-a54e-4612-b236-0f03ef62a3c7 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.544 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.551 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93ce3c3f-9430-45f8-9cfb-d4a5adf1cfcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Deactivated successfully.
Feb 28 10:16:07 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000057.scope: Consumed 12.242s CPU time.
Feb 28 10:16:07 compute-0 systemd-machined[209480]: Machine qemu-99-instance-00000057 terminated.
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.599 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae20d4f-f08b-4b71-89e3-bb264a7d8815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.603 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72d31fd1-3193-4869-bc85-713e265fab4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.627 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[55fc5c57-50a2-419f-8891-94163c543492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.647 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3872e10e-89cd-4ab8-9c5e-163bdfc17a1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e5dcf5b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:a8:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517673, 'reachable_time': 18521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315439, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.667 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79233d26-dd89-4609-98f7-b7724bb4136e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517685, 'tstamp': 517685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315440, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2e5dcf5b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517689, 'tstamp': 517689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315440, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.669 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.675 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e5dcf5b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e5dcf5b-20, col_values=(('external_ids', {'iface-id': '4070e10c-8283-47ad-b9bf-0e19e9198bce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:07.676 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.702 243456 INFO nova.virt.libvirt.driver [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Instance destroyed successfully.
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.703 243456 DEBUG nova.objects.instance [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid 7a3c0169-3430-4dbe-b080-9ae7b56a101b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.717 243456 DEBUG nova.virt.libvirt.vif [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-121294672',display_name='tempest-ServerActionsTestOtherA-server-121294672',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-121294672',id=87,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-0lq9qnn1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:57Z,user_data=None,user_id='14b2d28379164786ad68563acb83a50a',uuid=7a3c0169-3430-4dbe-b080-9ae7b56a101b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.719 243456 DEBUG nova.network.os_vif_util [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.720 243456 DEBUG nova.network.os_vif_util [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.720 243456 DEBUG os_vif [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.722 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5c02b5f-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:07 compute-0 nova_compute[243452]: 2026-02-28 10:16:07.729 243456 INFO os_vif [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:14:24,bridge_name='br-int',has_traffic_filtering=True,id=e5c02b5f-a54e-4612-b236-0f03ef62a3c7,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape5c02b5f-a5')
Feb 28 10:16:08 compute-0 ovn_controller[146846]: 2026-02-28T10:16:08Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:16:08 compute-0 ovn_controller[146846]: 2026-02-28T10:16:08Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.154 243456 DEBUG nova.network.neutron [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updated VIF entry in instance network info cache for port e5c02b5f-a54e-4612-b236-0f03ef62a3c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.155 243456 DEBUG nova.network.neutron [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updating instance_info_cache with network_info: [{"id": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "address": "fa:16:3e:77:14:24", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape5c02b5f-a5", "ovs_interfaceid": "e5c02b5f-a54e-4612-b236-0f03ef62a3c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:08 compute-0 podman[315472]: 2026-02-28 10:16:08.161310184 +0000 UTC m=+0.084461746 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.182 243456 DEBUG oslo_concurrency.lockutils [req-5e29e349-5b8d-4c8f-addd-ae56244d762d req-3f2c2458-6661-4cab-9371-163527c19609 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-7a3c0169-3430-4dbe-b080-9ae7b56a101b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:08 compute-0 podman[315471]: 2026-02-28 10:16:08.1892546 +0000 UTC m=+0.112574876 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:16:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 589 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 253 op/s
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.454 243456 DEBUG nova.compute.manager [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-unplugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.455 243456 DEBUG oslo_concurrency.lockutils [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.456 243456 DEBUG oslo_concurrency.lockutils [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.456 243456 DEBUG oslo_concurrency.lockutils [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.456 243456 DEBUG nova.compute.manager [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] No waiting events found dispatching network-vif-unplugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.456 243456 DEBUG nova.compute.manager [req-5eae658f-90c6-4e25-a307-167dd7f9733a req-3e51bb65-a054-41b2-b4ec-7d6d334d9b22 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-unplugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:16:08 compute-0 ceph-mon[76304]: pgmap v1507: 305 pgs: 305 active+clean; 589 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.3 MiB/s wr, 253 op/s
Feb 28 10:16:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:08 compute-0 nova_compute[243452]: 2026-02-28 10:16:08.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:09 compute-0 ovn_controller[146846]: 2026-02-28T10:16:09Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:f3:ce 10.100.0.8
Feb 28 10:16:09 compute-0 ovn_controller[146846]: 2026-02-28T10:16:09Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:f3:ce 10.100.0.8
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.520 243456 INFO nova.virt.libvirt.driver [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Deleting instance files /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b_del
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.523 243456 INFO nova.virt.libvirt.driver [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Deletion of /var/lib/nova/instances/7a3c0169-3430-4dbe-b080-9ae7b56a101b_del complete
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.584 243456 INFO nova.compute.manager [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Took 2.12 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.584 243456 DEBUG oslo.service.loopingcall [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.585 243456 DEBUG nova.compute.manager [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:09 compute-0 nova_compute[243452]: 2026-02-28 10:16:09.585 243456 DEBUG nova.network.neutron [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 611 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 284 op/s
Feb 28 10:16:10 compute-0 nova_compute[243452]: 2026-02-28 10:16:10.577 243456 DEBUG nova.network.neutron [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:10 compute-0 nova_compute[243452]: 2026-02-28 10:16:10.594 243456 INFO nova.compute.manager [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Took 1.01 seconds to deallocate network for instance.
Feb 28 10:16:10 compute-0 nova_compute[243452]: 2026-02-28 10:16:10.639 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:10 compute-0 nova_compute[243452]: 2026-02-28 10:16:10.640 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.280 243456 DEBUG oslo_concurrency.processutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:11 compute-0 ceph-mon[76304]: pgmap v1508: 305 pgs: 305 active+clean; 611 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 284 op/s
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.379 243456 DEBUG nova.compute.manager [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.380 243456 DEBUG oslo_concurrency.lockutils [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.380 243456 DEBUG oslo_concurrency.lockutils [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.380 243456 DEBUG oslo_concurrency.lockutils [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.381 243456 DEBUG nova.compute.manager [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] No waiting events found dispatching network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.381 243456 WARNING nova.compute.manager [req-27b9fef3-372b-42a0-bdb8-6bef3b832577 req-a146da7e-ac93-4df0-b395-5687840d2f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received unexpected event network-vif-plugged-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 for instance with vm_state deleted and task_state None.
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.459 243456 INFO nova.compute.manager [None req-8633b77e-afbc-4381-b708-2c5fa63507d6 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Get console output
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.467 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:16:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282883938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.849 243456 DEBUG oslo_concurrency.processutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.856 243456 DEBUG nova.compute.provider_tree [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.873 243456 DEBUG nova.scheduler.client.report [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.896 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:11 compute-0 nova_compute[243452]: 2026-02-28 10:16:11.930 243456 INFO nova.scheduler.client.report [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Deleted allocations for instance 7a3c0169-3430-4dbe-b080-9ae7b56a101b
Feb 28 10:16:12 compute-0 nova_compute[243452]: 2026-02-28 10:16:12.019 243456 DEBUG oslo_concurrency.lockutils [None req-4d875381-ea21-483f-a9ed-20f60af155d4 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "7a3c0169-3430-4dbe-b080-9ae7b56a101b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 615 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 263 op/s
Feb 28 10:16:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3282883938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:12 compute-0 nova_compute[243452]: 2026-02-28 10:16:12.724 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:12 compute-0 nova_compute[243452]: 2026-02-28 10:16:12.836 243456 INFO nova.compute.manager [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Rebuilding instance
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.061 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.077 243456 DEBUG nova.compute.manager [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.135 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_requests' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.151 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.167 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.180 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.201 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.205 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:16:13 compute-0 ceph-mon[76304]: pgmap v1509: 305 pgs: 305 active+clean; 615 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 263 op/s
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.544 243456 DEBUG nova.compute.manager [req-ec2ba43e-ff02-4a61-b31c-16eff62a6259 req-e5d857ef-c0b0-4f76-b9ec-fdbc6da765e9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Received event network-vif-deleted-e5c02b5f-a54e-4612-b236-0f03ef62a3c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.555 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.556 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.556 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.557 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.557 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.558 243456 INFO nova.compute.manager [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Terminating instance
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.559 243456 DEBUG nova.compute.manager [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:13 compute-0 kernel: tapaa9724a7-fa (unregistering): left promiscuous mode
Feb 28 10:16:13 compute-0 NetworkManager[49805]: <info>  [1772273773.6184] device (tapaa9724a7-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:13 compute-0 ovn_controller[146846]: 2026-02-28T10:16:13Z|00770|binding|INFO|Releasing lport aa9724a7-fad1-4968-a1b0-0d8182007723 from this chassis (sb_readonly=0)
Feb 28 10:16:13 compute-0 ovn_controller[146846]: 2026-02-28T10:16:13Z|00771|binding|INFO|Setting lport aa9724a7-fad1-4968-a1b0-0d8182007723 down in Southbound
Feb 28 10:16:13 compute-0 ovn_controller[146846]: 2026-02-28T10:16:13Z|00772|binding|INFO|Removing iface tapaa9724a7-fa ovn-installed in OVS
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.629 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.638 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:71:d6 10.100.0.6'], port_security=['fa:16:3e:c4:71:d6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b883c1a1-cf01-434d-8258-24ca193a2683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd70835696bf4e12a062516e9de5527d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3d01661-9794-4315-81d4-c2d74d609338', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90546891-a028-4a5f-a7b5-01dac44edc93, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=aa9724a7-fad1-4968-a1b0-0d8182007723) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.640 156681 INFO neutron.agent.ovn.metadata.agent [-] Port aa9724a7-fad1-4968-a1b0-0d8182007723 in datapath 2e5dcf5b-2f4a-41dc-9c28-b500e2889923 unbound from our chassis
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.642 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e5dcf5b-2f4a-41dc-9c28-b500e2889923, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.645 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4901e4-31b4-49ed-95af-4665108731af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.646 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 namespace which is not needed anymore
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Feb 28 10:16:13 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004e.scope: Consumed 17.723s CPU time.
Feb 28 10:16:13 compute-0 systemd-machined[209480]: Machine qemu-88-instance-0000004e terminated.
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:13 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [NOTICE]   (308441) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:13 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [WARNING]  (308441) : Exiting Master process...
Feb 28 10:16:13 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [ALERT]    (308441) : Current worker (308443) exited with code 143 (Terminated)
Feb 28 10:16:13 compute-0 neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923[308437]: [WARNING]  (308441) : All workers exited. Exiting... (0)
Feb 28 10:16:13 compute-0 systemd[1]: libpod-2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b.scope: Deactivated successfully.
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.807 243456 INFO nova.virt.libvirt.driver [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Instance destroyed successfully.
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.807 243456 DEBUG nova.objects.instance [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lazy-loading 'resources' on Instance uuid b883c1a1-cf01-434d-8258-24ca193a2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:13 compute-0 podman[315563]: 2026-02-28 10:16:13.809345525 +0000 UTC m=+0.057978561 container died 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.824 243456 DEBUG nova.virt.libvirt.vif [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:14:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1601523722',display_name='tempest-ServerActionsTestOtherA-server-1601523722',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1601523722',id=78,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBeB4M8j3RPMEGsTEupU809MpDMu1lONxa3GM96jOaKy7lCnQVg4MzBbpF5eLhYMsfAQf+axdx0pdKDPLAAkphsN2WtFcI9X16V02fEsKKASEotygshJqgIA8eut813xpw==',key_name='tempest-keypair-127070709',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:14:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd70835696bf4e12a062516e9de5527d',ramdisk_id='',reservation_id='r-by9e4o0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1764257371',owner_user_name='tempest-ServerActionsTestOtherA-1764257371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:14:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14b2d28379164786ad68563acb83a50a',uuid=b883c1a1-cf01-434d-8258-24ca193a2683,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.825 243456 DEBUG nova.network.os_vif_util [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converting VIF {"id": "aa9724a7-fad1-4968-a1b0-0d8182007723", "address": "fa:16:3e:c4:71:d6", "network": {"id": "2e5dcf5b-2f4a-41dc-9c28-b500e2889923", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1575119899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd70835696bf4e12a062516e9de5527d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa9724a7-fa", "ovs_interfaceid": "aa9724a7-fad1-4968-a1b0-0d8182007723", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.826 243456 DEBUG nova.network.os_vif_util [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.826 243456 DEBUG os_vif [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.829 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa9724a7-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.833 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.835 243456 INFO os_vif [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:71:d6,bridge_name='br-int',has_traffic_filtering=True,id=aa9724a7-fad1-4968-a1b0-0d8182007723,network=Network(2e5dcf5b-2f4a-41dc-9c28-b500e2889923),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa9724a7-fa')
Feb 28 10:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a70ff2f5773caafa9c58f78253f13ae91fe67282a6374207ba8de2071a659ac6-merged.mount: Deactivated successfully.
Feb 28 10:16:13 compute-0 podman[315563]: 2026-02-28 10:16:13.866796661 +0000 UTC m=+0.115429697 container cleanup 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:16:13 compute-0 systemd[1]: libpod-conmon-2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b.scope: Deactivated successfully.
Feb 28 10:16:13 compute-0 podman[315619]: 2026-02-28 10:16:13.944115545 +0000 UTC m=+0.059588217 container remove 2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.961 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aba7002-c01b-4ece-8842-67714b114628]: (4, ('Sat Feb 28 10:16:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 (2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b)\n2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b\nSat Feb 28 10:16:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 (2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b)\n2cc631f7684fc8cc1a9d6516a01e4c5df19646460928bd860ccd63048096cf4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.964 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0783cc18-d924-40d0-babf-22d0588cc1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.966 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e5dcf5b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 kernel: tap2e5dcf5b-20: left promiscuous mode
Feb 28 10:16:13 compute-0 nova_compute[243452]: 2026-02-28 10:16:13.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:13.982 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[05315461-a974-4148-a984-b95b5f2b7ffd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d0946ddf-3871-49d8-9f94-d9df8bc06385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc8f10b-fbd5-4351-ba77-6e4ba4edb9f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d78f63de-1b7e-4cbc-ab6b-0ecadde25a68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517666, 'reachable_time': 31967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315638, 'error': None, 'target': 'ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d2e5dcf5b\x2d2f4a\x2d41dc\x2d9c28\x2db500e2889923.mount: Deactivated successfully.
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.029 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e5dcf5b-2f4a-41dc-9c28-b500e2889923 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.030 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b9eb1-a75a-4414-81d6-1158bd1d0fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.139 243456 INFO nova.virt.libvirt.driver [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Deleting instance files /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683_del
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.140 243456 INFO nova.virt.libvirt.driver [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Deletion of /var/lib/nova/instances/b883c1a1-cf01-434d-8258-24ca193a2683_del complete
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.189 243456 INFO nova.compute.manager [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.190 243456 DEBUG oslo.service.loopingcall [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.190 243456 DEBUG nova.compute.manager [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.191 243456 DEBUG nova.network.neutron [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 596 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 283 op/s
Feb 28 10:16:14 compute-0 nova_compute[243452]: 2026-02-28 10:16:14.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.368 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:14.370 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:16:14 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.172 243456 DEBUG nova.network.neutron [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.187 243456 INFO nova.compute.manager [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Took 1.00 seconds to deallocate network for instance.
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.242 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.242 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:15 compute-0 ceph-mon[76304]: pgmap v1510: 305 pgs: 305 active+clean; 596 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 283 op/s
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.408 243456 DEBUG oslo_concurrency.processutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:15 compute-0 kernel: tapcace90b2-5d (unregistering): left promiscuous mode
Feb 28 10:16:15 compute-0 NetworkManager[49805]: <info>  [1772273775.4715] device (tapcace90b2-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:15 compute-0 ovn_controller[146846]: 2026-02-28T10:16:15Z|00773|binding|INFO|Releasing lport cace90b2-5d6b-49ae-a68a-251838fec4ee from this chassis (sb_readonly=0)
Feb 28 10:16:15 compute-0 ovn_controller[146846]: 2026-02-28T10:16:15Z|00774|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee down in Southbound
Feb 28 10:16:15 compute-0 ovn_controller[146846]: 2026-02-28T10:16:15Z|00775|binding|INFO|Removing iface tapcace90b2-5d ovn-installed in OVS
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.494 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:15.496 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a7:4e 10.100.0.4'], port_security=['fa:16:3e:46:a7:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59190708-228d-45d4-972b-cf1e677cee18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3678363f-59af-4198-9c0f-ea20e21245ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cace90b2-5d6b-49ae-a68a-251838fec4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:15.498 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cace90b2-5d6b-49ae-a68a-251838fec4ee in datapath 61b03a6e-b883-4f32-b23d-d6fea7058b29 unbound from our chassis
Feb 28 10:16:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:15.500 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61b03a6e-b883-4f32-b23d-d6fea7058b29, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:15.501 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac24257a-a744-46b0-bc08-2615303c98a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:15.501 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 namespace which is not needed anymore
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:15 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000053.scope: Deactivated successfully.
Feb 28 10:16:15 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000053.scope: Consumed 13.726s CPU time.
Feb 28 10:16:15 compute-0 systemd-machined[209480]: Machine qemu-96-instance-00000053 terminated.
Feb 28 10:16:15 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [NOTICE]   (314214) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:15 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [NOTICE]   (314214) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:15 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [WARNING]  (314214) : Exiting Master process...
Feb 28 10:16:15 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [ALERT]    (314214) : Current worker (314216) exited with code 143 (Terminated)
Feb 28 10:16:15 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[314191]: [WARNING]  (314214) : All workers exited. Exiting... (0)
Feb 28 10:16:15 compute-0 systemd[1]: libpod-a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf.scope: Deactivated successfully.
Feb 28 10:16:15 compute-0 podman[315671]: 2026-02-28 10:16:15.621619944 +0000 UTC m=+0.042291571 container died a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.703 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.722 243456 DEBUG oslo_concurrency.lockutils [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.723 243456 DEBUG oslo_concurrency.lockutils [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.723 243456 DEBUG nova.compute.manager [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.730 243456 DEBUG nova.compute.manager [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.731 243456 DEBUG nova.objects.instance [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'flavor' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.749 243456 DEBUG nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-unplugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.749 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.750 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.750 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.750 243456 DEBUG nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] No waiting events found dispatching network-vif-unplugged-aa9724a7-fad1-4968-a1b0-0d8182007723 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.750 243456 WARNING nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received unexpected event network-vif-unplugged-aa9724a7-fad1-4968-a1b0-0d8182007723 for instance with vm_state deleted and task_state None.
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.750 243456 DEBUG nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 DEBUG oslo_concurrency.lockutils [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 DEBUG nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] No waiting events found dispatching network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 WARNING nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received unexpected event network-vif-plugged-aa9724a7-fad1-4968-a1b0-0d8182007723 for instance with vm_state deleted and task_state None.
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.751 243456 DEBUG nova.compute.manager [req-e23347e5-f708-49c9-bb84-dd4884c78fba req-9836be5b-cc85-4ec4-bb57-5f66eb2f6247 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Received event network-vif-deleted-aa9724a7-fad1-4968-a1b0-0d8182007723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:15 compute-0 nova_compute[243452]: 2026-02-28 10:16:15.757 243456 DEBUG nova.virt.libvirt.driver [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-88a09b3e5a203c55b77f888c9ee0d5e395e7e939941ed510be9ce729b0bfa45b-merged.mount: Deactivated successfully.
Feb 28 10:16:15 compute-0 podman[315671]: 2026-02-28 10:16:15.906635958 +0000 UTC m=+0.327307585 container cleanup a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:16:15 compute-0 systemd[1]: libpod-conmon-a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf.scope: Deactivated successfully.
Feb 28 10:16:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099607475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.022 243456 DEBUG oslo_concurrency.processutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.028 243456 DEBUG nova.compute.provider_tree [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:16 compute-0 podman[315717]: 2026-02-28 10:16:16.042401285 +0000 UTC m=+0.113902814 container remove a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.046 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc0661f-ac4f-4895-9045-1479739bfe86]: (4, ('Sat Feb 28 10:16:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 (a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf)\na4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf\nSat Feb 28 10:16:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 (a4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf)\na4ccd79c023880696d899e80cdd84a64839df595660165ca6888ae66de2619cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.048 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b53092e-055f-4a37-ac11-20b91fb42d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.048 243456 DEBUG nova.scheduler.client.report [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61b03a6e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:16 compute-0 kernel: tap61b03a6e-b0: left promiscuous mode
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.071 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6276b72a-99a9-48f0-a11e-5caa7e64feb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.072 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0cc976-06db-4f3a-8abd-ec132edc9c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.089 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af13c684-5dcc-48a8-8d55-42194ba304ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.103 243456 INFO nova.scheduler.client.report [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Deleted allocations for instance b883c1a1-cf01-434d-8258-24ca193a2683
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.106 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38ce4130-c093-4436-b03c-65427ba910df]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525605, 'reachable_time': 22224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315738, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.110 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d61b03a6e\x2db883\x2d4f32\x2db23d\x2dd6fea7058b29.mount: Deactivated successfully.
Feb 28 10:16:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:16.110 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7e8a6d-9598-423e-9b89-bc24bb90e816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.168 243456 DEBUG oslo_concurrency.lockutils [None req-40ec0014-d20f-4856-902e-7f730bd92f6a 14b2d28379164786ad68563acb83a50a fd70835696bf4e12a062516e9de5527d - - default default] Lock "b883c1a1-cf01-434d-8258-24ca193a2683" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.223 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance shutdown successfully after 3 seconds.
Feb 28 10:16:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 551 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.7 MiB/s wr, 304 op/s
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.232 243456 INFO nova.virt.libvirt.driver [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance destroyed successfully.
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.238 243456 INFO nova.virt.libvirt.driver [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance destroyed successfully.
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.239 243456 DEBUG nova.virt.libvirt.vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:16:12Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.239 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.240 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.240 243456 DEBUG os_vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.242 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.243 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcace90b2-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.244 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.251 243456 INFO os_vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d')
Feb 28 10:16:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1099607475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:16 compute-0 ovn_controller[146846]: 2026-02-28T10:16:16Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:3c:8a 10.100.0.5
Feb 28 10:16:16 compute-0 ovn_controller[146846]: 2026-02-28T10:16:16Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:3c:8a 10.100.0.5
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.683 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deleting instance files /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_del
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.684 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deletion of /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_del complete
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.871 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.872 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating image(s)
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.889 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.908 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.929 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:16 compute-0 nova_compute[243452]: 2026-02-28 10:16:16.932 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.012 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.013 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.014 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.014 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.032 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.035 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:17 compute-0 ceph-mon[76304]: pgmap v1511: 305 pgs: 305 active+clean; 551 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.7 MiB/s wr, 304 op/s
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.537 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.616 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.714 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.715 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Ensure instance console log exists: /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.715 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.716 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.716 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.718 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start _get_guest_xml network_info=[{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.723 243456 WARNING nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.729 243456 DEBUG nova.virt.libvirt.host [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.729 243456 DEBUG nova.virt.libvirt.host [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.733 243456 DEBUG nova.virt.libvirt.host [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.733 243456 DEBUG nova.virt.libvirt.host [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.734 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.734 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.734 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.734 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.735 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.735 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.735 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.735 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.735 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.736 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.736 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.737 243456 DEBUG nova.virt.hardware [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.738 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:17 compute-0 nova_compute[243452]: 2026-02-28 10:16:17.756 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 531 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 247 op/s
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.230 243456 DEBUG nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-unplugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.231 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.232 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.233 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.233 243456 DEBUG nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] No waiting events found dispatching network-vif-unplugged-cace90b2-5d6b-49ae-a68a-251838fec4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.233 243456 WARNING nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received unexpected event network-vif-unplugged-cace90b2-5d6b-49ae-a68a-251838fec4ee for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.234 243456 DEBUG nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.234 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.234 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.235 243456 DEBUG oslo_concurrency.lockutils [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.235 243456 DEBUG nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] No waiting events found dispatching network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.236 243456 WARNING nova.compute.manager [req-3550c2e3-ab59-4baf-a611-026ea335ce73 req-a609f27a-4103-4e78-8b2a-7b87d6aa28b5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received unexpected event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419344282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.356 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.380 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:18 compute-0 kernel: tapb7b37fba-50 (unregistering): left promiscuous mode
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.392 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:18 compute-0 NetworkManager[49805]: <info>  [1772273778.3939] device (tapb7b37fba-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:18 compute-0 ovn_controller[146846]: 2026-02-28T10:16:18Z|00776|binding|INFO|Releasing lport b7b37fba-503f-4a0c-98ec-29224477d25f from this chassis (sb_readonly=0)
Feb 28 10:16:18 compute-0 ovn_controller[146846]: 2026-02-28T10:16:18Z|00777|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f down in Southbound
Feb 28 10:16:18 compute-0 ovn_controller[146846]: 2026-02-28T10:16:18Z|00778|binding|INFO|Removing iface tapb7b37fba-50 ovn-installed in OVS
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.412 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.416 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.417 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.431 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73bb60eb-73a9-4e9f-b729-01352eea011d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.455 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8a20b3-f229-4547-a71e-4f8ae0b6f5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.458 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e96f1b80-ed5c-409b-81fa-130667cc8537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Deactivated successfully.
Feb 28 10:16:18 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000054.scope: Consumed 15.889s CPU time.
Feb 28 10:16:18 compute-0 systemd-machined[209480]: Machine qemu-97-instance-00000054 terminated.
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.481 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4b86160f-0770-4f89-89bd-a3a20e026944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 ceph-mon[76304]: pgmap v1512: 305 pgs: 305 active+clean; 531 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 247 op/s
Feb 28 10:16:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1419344282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.501 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[085c2171-3800-4e8d-831a-3e8569628ef2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 39275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315976, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43a76b7d-bd83-4e19-8f6c-226a45776690]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315977, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315977, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.514 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:18.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.791 243456 INFO nova.virt.libvirt.driver [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance shutdown successfully after 3 seconds.
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.800 243456 INFO nova.virt.libvirt.driver [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance destroyed successfully.
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.801 243456 DEBUG nova.objects.instance [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.820 243456 DEBUG nova.compute.manager [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.905 243456 DEBUG oslo_concurrency.lockutils [None req-504a488d-c7df-4261-b8bf-9da2e1cf46b4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356057742' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.944 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.946 243456 DEBUG nova.virt.libvirt.vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:16:16Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.947 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.948 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.953 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <uuid>e0b403b3-2f95-4f8c-a00c-53dab3c643b9</uuid>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <name>instance-00000053</name>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1502065968</nova:name>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:16:17</nova:creationTime>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <nova:port uuid="cace90b2-5d6b-49ae-a68a-251838fec4ee">
Feb 28 10:16:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="serial">e0b403b3-2f95-4f8c-a00c-53dab3c643b9</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="uuid">e0b403b3-2f95-4f8c-a00c-53dab3c643b9</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk">
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config">
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:46:a7:4e"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <target dev="tapcace90b2-5d"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/console.log" append="off"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:16:18 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:16:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:16:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:16:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:16:18 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.954 243456 DEBUG nova.virt.libvirt.vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:16:16Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.955 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.956 243456 DEBUG nova.network.os_vif_util [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.956 243456 DEBUG os_vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.958 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.959 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.962 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcace90b2-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.963 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcace90b2-5d, col_values=(('external_ids', {'iface-id': 'cace90b2-5d6b-49ae-a68a-251838fec4ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:a7:4e', 'vm-uuid': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 NetworkManager[49805]: <info>  [1772273778.9657] manager: (tapcace90b2-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.969 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:18 compute-0 nova_compute[243452]: 2026-02-28 10:16:18.974 243456 INFO os_vif [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d')
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.029 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.029 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.030 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:46:a7:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.031 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Using config drive
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.062 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.080 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.106 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'keypairs' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.435 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Creating config drive at /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.439 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpef9s611e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.573 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpef9s611e" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.608 243456 DEBUG nova.storage.rbd_utils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.613 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1356057742' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.768 243456 DEBUG oslo_concurrency.processutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config e0b403b3-2f95-4f8c-a00c-53dab3c643b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.770 243456 INFO nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deleting local config drive /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9/disk.config because it was imported into RBD.
Feb 28 10:16:19 compute-0 systemd-udevd[315968]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:16:19 compute-0 kernel: tapcace90b2-5d: entered promiscuous mode
Feb 28 10:16:19 compute-0 NetworkManager[49805]: <info>  [1772273779.8288] manager: (tapcace90b2-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Feb 28 10:16:19 compute-0 ovn_controller[146846]: 2026-02-28T10:16:19Z|00779|binding|INFO|Claiming lport cace90b2-5d6b-49ae-a68a-251838fec4ee for this chassis.
Feb 28 10:16:19 compute-0 ovn_controller[146846]: 2026-02-28T10:16:19Z|00780|binding|INFO|cace90b2-5d6b-49ae-a68a-251838fec4ee: Claiming fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:19 compute-0 NetworkManager[49805]: <info>  [1772273779.8408] device (tapcace90b2-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:16:19 compute-0 NetworkManager[49805]: <info>  [1772273779.8413] device (tapcace90b2-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.844 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a7:4e 10.100.0.4'], port_security=['fa:16:3e:46:a7:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '59190708-228d-45d4-972b-cf1e677cee18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3678363f-59af-4198-9c0f-ea20e21245ac, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cace90b2-5d6b-49ae-a68a-251838fec4ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.845 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cace90b2-5d6b-49ae-a68a-251838fec4ee in datapath 61b03a6e-b883-4f32-b23d-d6fea7058b29 bound to our chassis
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.847 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:16:19 compute-0 ovn_controller[146846]: 2026-02-28T10:16:19Z|00781|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee ovn-installed in OVS
Feb 28 10:16:19 compute-0 ovn_controller[146846]: 2026-02-28T10:16:19Z|00782|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee up in Southbound
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:19 compute-0 nova_compute[243452]: 2026-02-28 10:16:19.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.859 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[05d0a32c-15a9-4efb-926f-2afe4e79ed0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.860 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61b03a6e-b1 in ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.863 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61b03a6e-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.863 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb476d05-308a-4fb4-a6f1-21a4816d0fc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[75c4bf29-43fe-4ea5-9ccb-99e9f2f94fa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 systemd-machined[209480]: New machine qemu-101-instance-00000053.
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.873 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[770f8460-6d6e-48b8-8bc7-a011d42c01c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66a71158-8c82-4814-8f2f-2601044eb3ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000053.
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.920 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61228028-7b73-44d5-995a-acee8239e018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.927 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abf11c8b-b7a4-4954-b979-154699c0f73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 NetworkManager[49805]: <info>  [1772273779.9295] manager: (tap61b03a6e-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.961 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf6d47-9f84-4546-9053-6812e1500b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.965 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a31f044c-5cc5-45b9-8fcf-1151ddc6f7e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:19 compute-0 NetworkManager[49805]: <info>  [1772273779.9871] device (tap61b03a6e-b0): carrier: link connected
Feb 28 10:16:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:19.992 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cc8574-147e-452d-9ddf-8d5f80fd4eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.011 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73188557-ace8-4910-ab42-a5fedcf4b87a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61b03a6e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:ea:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528727, 'reachable_time': 24453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316120, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b8abd374-06c4-4b52-9bf1-677a2ff58a53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:ea04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528727, 'tstamp': 528727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316121, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.047 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22d32b30-7d7c-4a26-bccc-0b94cad3a729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61b03a6e-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:ea:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528727, 'reachable_time': 24453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316122, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.078 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9e8659-432b-4730-be0a-ffc4f22dd5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ff894c-6f0d-409d-8880-ceaa6002b3f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61b03a6e-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61b03a6e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:20 compute-0 NetworkManager[49805]: <info>  [1772273780.1430] manager: (tap61b03a6e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:20 compute-0 kernel: tap61b03a6e-b0: entered promiscuous mode
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.149 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61b03a6e-b0, col_values=(('external_ids', {'iface-id': '69515a36-1a3b-4baa-87a6-97137a6ee885'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:20 compute-0 ovn_controller[146846]: 2026-02-28T10:16:20Z|00783|binding|INFO|Releasing lport 69515a36-1a3b-4baa-87a6-97137a6ee885 from this chassis (sb_readonly=0)
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.161 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a351895-dc73-4df8-9437-b692b7cd3537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.163 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/61b03a6e-b883-4f32-b23d-d6fea7058b29.pid.haproxy
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 61b03a6e-b883-4f32-b23d-d6fea7058b29
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:16:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:20.164 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'env', 'PROCESS_TAG=haproxy-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61b03a6e-b883-4f32-b23d-d6fea7058b29.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:16:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 521 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 6.1 MiB/s wr, 249 op/s
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.336 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.337 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.337 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.337 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.337 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.338 243456 WARNING nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state stopped and task_state None.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.338 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.338 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.338 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.338 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.339 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.339 243456 WARNING nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state stopped and task_state None.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.339 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.339 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.339 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.340 243456 DEBUG oslo_concurrency.lockutils [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.340 243456 DEBUG nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] No waiting events found dispatching network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.340 243456 WARNING nova.compute.manager [req-e09ca8eb-50bc-4028-927a-f87c1e8fd256 req-908e3a54-c2cb-4a31-85bc-850aaa2e8622 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received unexpected event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:16:20 compute-0 podman[316158]: 2026-02-28 10:16:20.556563244 +0000 UTC m=+0.059977118 container create 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:16:20 compute-0 systemd[1]: Started libpod-conmon-757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81.scope.
Feb 28 10:16:20 compute-0 podman[316158]: 2026-02-28 10:16:20.525740977 +0000 UTC m=+0.029154941 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:16:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d763255489ece1c87daeaab985953e12e57a3a8174e95e23dc08c85608541f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:16:20 compute-0 podman[316158]: 2026-02-28 10:16:20.638817417 +0000 UTC m=+0.142231311 container init 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:16:20 compute-0 podman[316158]: 2026-02-28 10:16:20.645140124 +0000 UTC m=+0.148553998 container start 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:16:20 compute-0 ceph-mon[76304]: pgmap v1513: 305 pgs: 305 active+clean; 521 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 6.1 MiB/s wr, 249 op/s
Feb 28 10:16:20 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [NOTICE]   (316213) : New worker (316222) forked
Feb 28 10:16:20 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [NOTICE]   (316213) : Loading success.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.750 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for e0b403b3-2f95-4f8c-a00c-53dab3c643b9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.751 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273780.749519, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.751 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Resumed (Lifecycle Event)
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.755 243456 DEBUG nova.compute.manager [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.756 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.760 243456 INFO nova.virt.libvirt.driver [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance spawned successfully.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.761 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.786 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.794 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.801 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.803 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.804 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.805 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.805 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.806 243456 DEBUG nova.virt.libvirt.driver [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.839 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.840 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273780.7543452, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Started (Lifecycle Event)
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.879 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.883 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:20 compute-0 ovn_controller[146846]: 2026-02-28T10:16:20Z|00784|binding|INFO|Releasing lport eb88c6a8-1cff-4adc-aed5-eb769bcec23d from this chassis (sb_readonly=0)
Feb 28 10:16:20 compute-0 ovn_controller[146846]: 2026-02-28T10:16:20Z|00785|binding|INFO|Releasing lport 69515a36-1a3b-4baa-87a6-97137a6ee885 from this chassis (sb_readonly=0)
Feb 28 10:16:20 compute-0 ovn_controller[146846]: 2026-02-28T10:16:20Z|00786|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.913 243456 DEBUG nova.compute.manager [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.914 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.987 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.988 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.988 243456 DEBUG nova.objects.instance [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:16:20 compute-0 nova_compute[243452]: 2026-02-28 10:16:20.992 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'flavor' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.020 243456 DEBUG oslo_concurrency.lockutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.021 243456 DEBUG oslo_concurrency.lockutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquired lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.021 243456 DEBUG nova.network.neutron [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.021 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'info_cache' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.083 243456 DEBUG oslo_concurrency.lockutils [None req-f30cef3a-3480-436c-8430-b58227b71c5c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.558 243456 INFO nova.compute.manager [None req-af8dcc46-a333-45b0-8004-7a6935f72ff8 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Pausing
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.559 243456 DEBUG nova.objects.instance [None req-af8dcc46-a333-45b0-8004-7a6935f72ff8 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'flavor' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.606 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273781.606118, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.606 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Paused (Lifecycle Event)
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.609 243456 DEBUG nova.compute.manager [None req-af8dcc46-a333-45b0-8004-7a6935f72ff8 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.641 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.646 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:21 compute-0 nova_compute[243452]: 2026-02-28 10:16:21.676 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 28 10:16:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 517 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 4.7 MiB/s wr, 185 op/s
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.329 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.460 243456 DEBUG nova.compute.manager [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.461 243456 DEBUG oslo_concurrency.lockutils [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.462 243456 DEBUG oslo_concurrency.lockutils [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.462 243456 DEBUG oslo_concurrency.lockutils [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.463 243456 DEBUG nova.compute.manager [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] No waiting events found dispatching network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.463 243456 WARNING nova.compute.manager [req-423800fd-b5eb-4662-b6de-78b099a70a41 req-09f3a0cd-012a-4394-97c2-dad6cf4608ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received unexpected event network-vif-plugged-cace90b2-5d6b-49ae-a68a-251838fec4ee for instance with vm_state active and task_state None.
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.701 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273767.7007277, 7a3c0169-3430-4dbe-b080-9ae7b56a101b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.702 243456 INFO nova.compute.manager [-] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] VM Stopped (Lifecycle Event)
Feb 28 10:16:22 compute-0 nova_compute[243452]: 2026-02-28 10:16:22.721 243456 DEBUG nova.compute.manager [None req-a75eefa4-d1bd-4e68-822d-8e5596325766 - - - - - -] [instance: 7a3c0169-3430-4dbe-b080-9ae7b56a101b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:23 compute-0 ceph-mon[76304]: pgmap v1514: 305 pgs: 305 active+clean; 517 MiB data, 956 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 4.7 MiB/s wr, 185 op/s
Feb 28 10:16:23 compute-0 nova_compute[243452]: 2026-02-28 10:16:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:23 compute-0 nova_compute[243452]: 2026-02-28 10:16:23.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:16:23 compute-0 nova_compute[243452]: 2026-02-28 10:16:23.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:16:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:23.373 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:23 compute-0 nova_compute[243452]: 2026-02-28 10:16:23.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:23 compute-0 nova_compute[243452]: 2026-02-28 10:16:23.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.130 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.130 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.131 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.131 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.182 243456 DEBUG nova.network.neutron [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Updating instance_info_cache with network_info: [{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.206 243456 DEBUG oslo_concurrency.lockutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Releasing lock "refresh_cache-30194398-5601-43ac-aae7-290d9d311d6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 517 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 633 KiB/s rd, 4.0 MiB/s wr, 192 op/s
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.246 243456 INFO nova.virt.libvirt.driver [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance destroyed successfully.
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.246 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.256 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'resources' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.266 243456 DEBUG nova.virt.libvirt.vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:18Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.267 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.268 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.268 243456 DEBUG os_vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.271 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b37fba-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.427 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.429 243456 INFO os_vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50')
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.435 243456 DEBUG nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start _get_guest_xml network_info=[{"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.438 243456 WARNING nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.445 243456 DEBUG nova.virt.libvirt.host [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.446 243456 DEBUG nova.virt.libvirt.host [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.449 243456 DEBUG nova.virt.libvirt.host [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.449 243456 DEBUG nova.virt.libvirt.host [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.449 243456 DEBUG nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.450 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.450 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.450 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.451 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.451 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.451 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.451 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.452 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.452 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.452 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.453 243456 DEBUG nova.virt.hardware [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.453 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:24 compute-0 nova_compute[243452]: 2026-02-28 10:16:24.471 243456 DEBUG oslo_concurrency.processutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180845540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.052 243456 DEBUG oslo_concurrency.processutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.106 243456 DEBUG oslo_concurrency.processutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:25 compute-0 ceph-mon[76304]: pgmap v1515: 305 pgs: 305 active+clean; 517 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 633 KiB/s rd, 4.0 MiB/s wr, 192 op/s
Feb 28 10:16:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3180845540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724856555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.668 243456 DEBUG oslo_concurrency.processutils [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.670 243456 DEBUG nova.virt.libvirt.vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:18Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.671 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.672 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.674 243456 DEBUG nova.objects.instance [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.704 243456 DEBUG nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <uuid>30194398-5601-43ac-aae7-290d9d311d6c</uuid>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <name>instance-00000054</name>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-885707745</nova:name>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:16:24</nova:creationTime>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:user uuid="cc52c9235e704591a857b1b746c257ea">tempest-ListServerFiltersTestJSON-866927856-project-member</nova:user>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:project uuid="692561f0659d4af58ab14beffb24eb70">tempest-ListServerFiltersTestJSON-866927856</nova:project>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <nova:port uuid="b7b37fba-503f-4a0c-98ec-29224477d25f">
Feb 28 10:16:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="serial">30194398-5601-43ac-aae7-290d9d311d6c</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="uuid">30194398-5601-43ac-aae7-290d9d311d6c</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30194398-5601-43ac-aae7-290d9d311d6c_disk">
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/30194398-5601-43ac-aae7-290d9d311d6c_disk.config">
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:62:3b:12"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <target dev="tapb7b37fba-50"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c/console.log" append="off"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:16:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:16:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:16:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:16:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:16:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.712 243456 DEBUG nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.712 243456 DEBUG nova.virt.libvirt.driver [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.713 243456 DEBUG nova.virt.libvirt.vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:18Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.714 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.715 243456 DEBUG nova.network.os_vif_util [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.715 243456 DEBUG os_vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.718 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.719 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.724 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b37fba-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.725 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7b37fba-50, col_values=(('external_ids', {'iface-id': 'b7b37fba-503f-4a0c-98ec-29224477d25f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:3b:12', 'vm-uuid': '30194398-5601-43ac-aae7-290d9d311d6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 NetworkManager[49805]: <info>  [1772273785.7284] manager: (tapb7b37fba-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.732 243456 INFO os_vif [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50')
Feb 28 10:16:25 compute-0 kernel: tapb7b37fba-50: entered promiscuous mode
Feb 28 10:16:25 compute-0 NetworkManager[49805]: <info>  [1772273785.8033] manager: (tapb7b37fba-50): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Feb 28 10:16:25 compute-0 ovn_controller[146846]: 2026-02-28T10:16:25Z|00787|binding|INFO|Claiming lport b7b37fba-503f-4a0c-98ec-29224477d25f for this chassis.
Feb 28 10:16:25 compute-0 ovn_controller[146846]: 2026-02-28T10:16:25Z|00788|binding|INFO|b7b37fba-503f-4a0c-98ec-29224477d25f: Claiming fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.821 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.822 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b bound to our chassis
Feb 28 10:16:25 compute-0 ovn_controller[146846]: 2026-02-28T10:16:25Z|00789|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f ovn-installed in OVS
Feb 28 10:16:25 compute-0 ovn_controller[146846]: 2026-02-28T10:16:25Z|00790|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f up in Southbound
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 systemd-udevd[316309]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.851 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a0ba10-8d5d-4e29-8cdd-19dca0f7de54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 systemd-machined[209480]: New machine qemu-102-instance-00000054.
Feb 28 10:16:25 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000054.
Feb 28 10:16:25 compute-0 NetworkManager[49805]: <info>  [1772273785.8718] device (tapb7b37fba-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:16:25 compute-0 NetworkManager[49805]: <info>  [1772273785.8727] device (tapb7b37fba-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.890 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[464b1d1e-9666-4a80-a37e-f5082c8844f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.894 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[63e1873d-5188-41fd-9835-492699aa950d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.922 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0a78385c-90f8-4a88-8698-12bd11b43b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.941 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7ba609-cafa-403a-a784-a4fb53ddc65e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 26829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316322, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6de9419-fb9f-47c4-9116-8554ffad52c4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316323, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316323, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.972 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.976 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.976 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.976 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:25 compute-0 nova_compute[243452]: 2026-02-28 10:16:25.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:25.976 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 221 op/s
Feb 28 10:16:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/724856555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.377 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 30194398-5601-43ac-aae7-290d9d311d6c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.378 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273786.3766954, 30194398-5601-43ac-aae7-290d9d311d6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.378 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Resumed (Lifecycle Event)
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.387 243456 DEBUG nova.compute.manager [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.392 243456 INFO nova.virt.libvirt.driver [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance rebooted successfully.
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.393 243456 DEBUG nova.compute.manager [None req-346bfda9-f92d-44a1-8344-e960466a4a89 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.424 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] During sync_power_state the instance has a pending task (powering-on). Skip.
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.462 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273786.3873472, 30194398-5601-43ac-aae7-290d9d311d6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.463 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Started (Lifecycle Event)
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.507 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.513 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.765 243456 DEBUG nova.compute.manager [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.767 243456 DEBUG oslo_concurrency.lockutils [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.767 243456 DEBUG oslo_concurrency.lockutils [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.767 243456 DEBUG oslo_concurrency.lockutils [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.767 243456 DEBUG nova.compute.manager [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:26 compute-0 nova_compute[243452]: 2026-02-28 10:16:26.768 243456 WARNING nova.compute.manager [req-59e54095-bbf9-4d91-ad71-5457a03ddf99 req-b0adae7c-59a9-4731-8a9d-31b069c798ea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state active and task_state None.
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.214 243456 INFO nova.compute.manager [None req-44fd91a6-6640-4ad9-aa0e-16672f0a85c5 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Unpausing
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.215 243456 DEBUG nova.objects.instance [None req-44fd91a6-6640-4ad9-aa0e-16672f0a85c5 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'flavor' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.251 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273787.2514946, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.253 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Resumed (Lifecycle Event)
Feb 28 10:16:27 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.259 243456 DEBUG nova.virt.libvirt.guest [None req-44fd91a6-6640-4ad9-aa0e-16672f0a85c5 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.259 243456 DEBUG nova.compute.manager [None req-44fd91a6-6640-4ad9-aa0e-16672f0a85c5 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.272 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.276 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:27 compute-0 nova_compute[243452]: 2026-02-28 10:16:27.303 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 28 10:16:27 compute-0 ceph-mon[76304]: pgmap v1516: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 221 op/s
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.150 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.169 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.169 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.169 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.169 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.169 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.186 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.186 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.186 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.186 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.186 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Feb 28 10:16:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3608675283' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.746 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.801 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273773.8003082, b883c1a1-cf01-434d-8258-24ca193a2683 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.803 243456 INFO nova.compute.manager [-] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] VM Stopped (Lifecycle Event)
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.844 243456 DEBUG nova.compute.manager [None req-75045e25-f6df-42a6-8249-e9d054b3a1aa - - - - - -] [instance: b883c1a1-cf01-434d-8258-24ca193a2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.865 243456 DEBUG nova.compute.manager [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.866 243456 DEBUG oslo_concurrency.lockutils [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.866 243456 DEBUG oslo_concurrency.lockutils [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.867 243456 DEBUG oslo_concurrency.lockutils [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.867 243456 DEBUG nova.compute.manager [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.867 243456 WARNING nova.compute.manager [req-2c64da6c-9d1d-48dd-ad96-4ff810859fdb req-db7fd2bd-63a6-4576-8442-72465bcbedd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state active and task_state None.
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.881 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.882 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.889 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.889 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.894 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.894 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.898 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.899 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.903 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:28 compute-0 nova_compute[243452]: 2026-02-28 10:16:28.904 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:16:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:16:29
Feb 28 10:16:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:16:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:16:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'backups', 'vms']
Feb 28 10:16:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.170 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.173 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2855MB free_disk=59.78472742624581GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.173 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.173 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.277 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4db5bcd7-8b41-4850-8c88-89ad757c8558 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.279 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance e0b403b3-2f95-4f8c-a00c-53dab3c643b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.280 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 30194398-5601-43ac-aae7-290d9d311d6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.281 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.281 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9119fb04-24fa-460c-a772-4ca398874b4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.281 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.282 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:16:29 compute-0 ceph-mon[76304]: pgmap v1517: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Feb 28 10:16:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3608675283' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:29 compute-0 nova_compute[243452]: 2026-02-28 10:16:29.409 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874763013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.047 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.054 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.080 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.110 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.111 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 170 op/s
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:16:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/874763013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:16:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:16:30 compute-0 nova_compute[243452]: 2026-02-28 10:16:30.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:31 compute-0 ceph-mon[76304]: pgmap v1518: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 170 op/s
Feb 28 10:16:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 175 op/s
Feb 28 10:16:32 compute-0 ceph-mon[76304]: pgmap v1519: 305 pgs: 305 active+clean; 517 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 175 op/s
Feb 28 10:16:33 compute-0 nova_compute[243452]: 2026-02-28 10:16:33.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:33 compute-0 nova_compute[243452]: 2026-02-28 10:16:33.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:33 compute-0 ovn_controller[146846]: 2026-02-28T10:16:33Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:16:33 compute-0 ovn_controller[146846]: 2026-02-28T10:16:33Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:a7:4e 10.100.0.4
Feb 28 10:16:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 535 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 182 op/s
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.207 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.208 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.208 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.208 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.209 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.211 243456 INFO nova.compute.manager [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Terminating instance
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.212 243456 DEBUG nova.compute.manager [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:35 compute-0 kernel: tape6d4a5ad-f4 (unregistering): left promiscuous mode
Feb 28 10:16:35 compute-0 NetworkManager[49805]: <info>  [1772273795.2861] device (tape6d4a5ad-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:35 compute-0 ceph-mon[76304]: pgmap v1520: 305 pgs: 305 active+clean; 535 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 182 op/s
Feb 28 10:16:35 compute-0 ovn_controller[146846]: 2026-02-28T10:16:35Z|00791|binding|INFO|Releasing lport e6d4a5ad-f493-413a-a412-747c3a07943b from this chassis (sb_readonly=0)
Feb 28 10:16:35 compute-0 ovn_controller[146846]: 2026-02-28T10:16:35Z|00792|binding|INFO|Setting lport e6d4a5ad-f493-413a-a412-747c3a07943b down in Southbound
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 ovn_controller[146846]: 2026-02-28T10:16:35Z|00793|binding|INFO|Removing iface tape6d4a5ad-f4 ovn-installed in OVS
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.316 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:3c:8a 10.100.0.5'], port_security=['fa:16:3e:b8:3c:8a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9119fb04-24fa-460c-a772-4ca398874b4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e6d4a5ad-f493-413a-a412-747c3a07943b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.318 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e6d4a5ad-f493-413a-a412-747c3a07943b in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.321 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000056.scope: Deactivated successfully.
Feb 28 10:16:35 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000056.scope: Consumed 14.966s CPU time.
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.339 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2b92e4-0d6c-4dba-bba6-ddba8237ecb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 systemd-machined[209480]: Machine qemu-100-instance-00000056 terminated.
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.343 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.343 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.366 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eca8d5e9-82c0-40c4-9b36-c5599607dc07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.370 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.373 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fe358581-0d0c-4dd2-8f99-3b75acd96691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.399 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f94809df-b007-41a4-854d-dfdf7543fe1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[754f5c2b-adcb-4b97-b5ef-5f894958071c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 26829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316423, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.440 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.446 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1243ec-55ec-480c-af77-7ca992c29560]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316425, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316425, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.448 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.454 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.454 243456 INFO nova.virt.libvirt.driver [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Instance destroyed successfully.
Feb 28 10:16:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:35.454 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.454 243456 DEBUG nova.objects.instance [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'resources' on Instance uuid 9119fb04-24fa-460c-a772-4ca398874b4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.469 243456 DEBUG nova.virt.libvirt.vif [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1068423179',display_name='tempest-ListServerFiltersTestJSON-instance-1068423179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1068423179',id=86,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:16:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-zu8ssj30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:02Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=9119fb04-24fa-460c-a772-4ca398874b4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.470 243456 DEBUG nova.network.os_vif_util [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "e6d4a5ad-f493-413a-a412-747c3a07943b", "address": "fa:16:3e:b8:3c:8a", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6d4a5ad-f4", "ovs_interfaceid": "e6d4a5ad-f493-413a-a412-747c3a07943b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.471 243456 DEBUG nova.network.os_vif_util [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.472 243456 DEBUG os_vif [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.475 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6d4a5ad-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.480 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.486 243456 INFO os_vif [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:3c:8a,bridge_name='br-int',has_traffic_filtering=True,id=e6d4a5ad-f493-413a-a412-747c3a07943b,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6d4a5ad-f4')
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.775 243456 INFO nova.virt.libvirt.driver [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Deleting instance files /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e_del
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.776 243456 INFO nova.virt.libvirt.driver [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Deletion of /var/lib/nova/instances/9119fb04-24fa-460c-a772-4ca398874b4e_del complete
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.830 243456 INFO nova.compute.manager [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.830 243456 DEBUG oslo.service.loopingcall [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.831 243456 DEBUG nova.compute.manager [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:35 compute-0 nova_compute[243452]: 2026-02-28 10:16:35.831 243456 DEBUG nova.network.neutron [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 548 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Feb 28 10:16:36 compute-0 nova_compute[243452]: 2026-02-28 10:16:36.953 243456 DEBUG nova.network.neutron [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:36 compute-0 nova_compute[243452]: 2026-02-28 10:16:36.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:36 compute-0 nova_compute[243452]: 2026-02-28 10:16:36.977 243456 INFO nova.compute.manager [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Took 1.15 seconds to deallocate network for instance.
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.033 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.034 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.050 243456 DEBUG nova.compute.manager [req-c725994a-5e0d-47a8-806a-b4b33404d6b4 req-a51d1926-a1f7-4e90-840e-30db934b62d6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Received event network-vif-deleted-e6d4a5ad-f493-413a-a412-747c3a07943b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.214 243456 DEBUG oslo_concurrency.processutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:37 compute-0 ceph-mon[76304]: pgmap v1521: 305 pgs: 305 active+clean; 548 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Feb 28 10:16:37 compute-0 ovn_controller[146846]: 2026-02-28T10:16:37Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:16:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420029018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.810 243456 DEBUG oslo_concurrency.processutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.815 243456 DEBUG nova.compute.provider_tree [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.830 243456 DEBUG nova.scheduler.client.report [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.849 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.870 243456 INFO nova.scheduler.client.report [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Deleted allocations for instance 9119fb04-24fa-460c-a772-4ca398874b4e
Feb 28 10:16:37 compute-0 nova_compute[243452]: 2026-02-28 10:16:37.929 243456 DEBUG oslo_concurrency.lockutils [None req-2859e45e-5b80-4c76-9ee6-3f203a1e6454 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "9119fb04-24fa-460c-a772-4ca398874b4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 532 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Feb 28 10:16:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2420029018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:38 compute-0 nova_compute[243452]: 2026-02-28 10:16:38.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 podman[316478]: 2026-02-28 10:16:39.155532025 +0000 UTC m=+0.082564915 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 28 10:16:39 compute-0 podman[316477]: 2026-02-28 10:16:39.183926484 +0000 UTC m=+0.111017845 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.268 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.269 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.269 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.269 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.269 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.271 243456 INFO nova.compute.manager [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Terminating instance
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.272 243456 DEBUG nova.compute.manager [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:39 compute-0 ceph-mon[76304]: pgmap v1522: 305 pgs: 305 active+clean; 532 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Feb 28 10:16:39 compute-0 kernel: tapc2860307-48 (unregistering): left promiscuous mode
Feb 28 10:16:39 compute-0 NetworkManager[49805]: <info>  [1772273799.3353] device (tapc2860307-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 ovn_controller[146846]: 2026-02-28T10:16:39Z|00794|binding|INFO|Releasing lport c2860307-4800-4c5e-adb7-ab75c130f158 from this chassis (sb_readonly=0)
Feb 28 10:16:39 compute-0 ovn_controller[146846]: 2026-02-28T10:16:39Z|00795|binding|INFO|Setting lport c2860307-4800-4c5e-adb7-ab75c130f158 down in Southbound
Feb 28 10:16:39 compute-0 ovn_controller[146846]: 2026-02-28T10:16:39Z|00796|binding|INFO|Removing iface tapc2860307-48 ovn-installed in OVS
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.361 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:f3:ce 10.100.0.8'], port_security=['fa:16:3e:b4:f3:ce 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c2860307-4800-4c5e-adb7-ab75c130f158) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.364 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c2860307-4800-4c5e-adb7-ab75c130f158 in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.366 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26d2eda-0121-4d10-a6e6-2d194139720b
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.382 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3b7555-19b0-4302-b856-cf883fbbc0cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.406 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6d2d0-90a9-4aa8-86da-3d26237a3b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000055.scope: Deactivated successfully.
Feb 28 10:16:39 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000055.scope: Consumed 16.303s CPU time.
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.410 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec8d1b1-a9f6-453a-b539-897048ba43fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 systemd-machined[209480]: Machine qemu-98-instance-00000055 terminated.
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.436 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecfccbc-62f0-40a3-ad1c-6402a50f16c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.450 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[afb471cd-af86-4c02-b9c8-e71610d8b824]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26d2eda-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:c3:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525720, 'reachable_time': 26829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316534, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[140fc83e-71a3-41ce-bdb2-30d2a3aa863c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525733, 'tstamp': 525733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316535, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc26d2eda-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 525736, 'tstamp': 525736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316535, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.467 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.470 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.475 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26d2eda-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.476 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.477 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26d2eda-00, col_values=(('external_ids', {'iface-id': 'eb88c6a8-1cff-4adc-aed5-eb769bcec23d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:39.478 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.507 243456 INFO nova.virt.libvirt.driver [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Instance destroyed successfully.
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.508 243456 DEBUG nova.objects.instance [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'resources' on Instance uuid cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.524 243456 DEBUG nova.virt.libvirt.vif [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1581061762',display_name='tempest-ListServerFiltersTestJSON-instance-1581061762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1581061762',id=85,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-8a0rn971',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:15:52Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.525 243456 DEBUG nova.network.os_vif_util [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "c2860307-4800-4c5e-adb7-ab75c130f158", "address": "fa:16:3e:b4:f3:ce", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2860307-48", "ovs_interfaceid": "c2860307-4800-4c5e-adb7-ab75c130f158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.526 243456 DEBUG nova.network.os_vif_util [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.527 243456 DEBUG os_vif [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.529 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2860307-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.537 243456 INFO os_vif [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:f3:ce,bridge_name='br-int',has_traffic_filtering=True,id=c2860307-4800-4c5e-adb7-ab75c130f158,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2860307-48')
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.665 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.665 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.694 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.699 243456 DEBUG nova.compute.manager [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-unplugged-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.699 243456 DEBUG oslo_concurrency.lockutils [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.700 243456 DEBUG oslo_concurrency.lockutils [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.700 243456 DEBUG oslo_concurrency.lockutils [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.700 243456 DEBUG nova.compute.manager [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] No waiting events found dispatching network-vif-unplugged-c2860307-4800-4c5e-adb7-ab75c130f158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.700 243456 DEBUG nova.compute.manager [req-ac559f78-8919-4ce5-8cfd-2cb4edd37249 req-a55ab208-a476-4d6f-b638-f56a77d939d1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-unplugged-c2860307-4800-4c5e-adb7-ab75c130f158 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.763 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.764 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.773 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.773 243456 INFO nova.compute.claims [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.835 243456 INFO nova.virt.libvirt.driver [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Deleting instance files /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_del
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.836 243456 INFO nova.virt.libvirt.driver [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Deletion of /var/lib/nova/instances/cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a_del complete
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.890 243456 INFO nova.compute.manager [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.891 243456 DEBUG oslo.service.loopingcall [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.892 243456 DEBUG nova.compute.manager [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.892 243456 DEBUG nova.network.neutron [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:39 compute-0 nova_compute[243452]: 2026-02-28 10:16:39.950 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 502 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.470 243456 DEBUG nova.network.neutron [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.489 243456 INFO nova.compute.manager [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Took 0.60 seconds to deallocate network for instance.
Feb 28 10:16:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079525481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.519 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.530 243456 DEBUG nova.compute.manager [req-9d838af2-4ed2-483c-a85e-dee41bcd9261 req-c7c4d289-9346-46c1-9166-b7a8887fd206 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-deleted-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.533 243456 DEBUG nova.compute.provider_tree [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.537 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.551 243456 DEBUG nova.scheduler.client.report [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.570 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.571 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.573 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.619 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.620 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.640 243456 INFO nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.663 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.681 243456 DEBUG oslo_concurrency.processutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.751 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.753 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.754 243456 INFO nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Creating image(s)
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.777 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.800 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.831 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.835 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.863 243456 DEBUG nova.policy [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1cb883c7e1bc470eb00cb23a94d0ddb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d42755d9af04d5ba2e00e083f33ca82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003125534087425332 of space, bias 1.0, pg target 0.9376602262275996 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493184471515393 of space, bias 1.0, pg target 0.747955341454618 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.750583267882597e-07 of space, bias 4.0, pg target 0.0009300699921459116 quantized to 16 (current 16)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:16:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.911 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.912 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.913 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.914 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.937 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.941 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 86b24867-eb43-40c0-b73c-1416fd35d033_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.978 243456 INFO nova.compute.manager [None req-dd4c9346-b962-4657-ab5b-341da59e8e84 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Get console output
Feb 28 10:16:40 compute-0 nova_compute[243452]: 2026-02-28 10:16:40.984 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:16:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653402231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.233 243456 DEBUG oslo_concurrency.processutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.240 243456 DEBUG nova.compute.provider_tree [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.261 243456 DEBUG nova.scheduler.client.report [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.270 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 86b24867-eb43-40c0-b73c-1416fd35d033_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.297 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:41 compute-0 ceph-mon[76304]: pgmap v1523: 305 pgs: 305 active+clean; 502 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Feb 28 10:16:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2079525481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2653402231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.337 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] resizing rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.369 243456 INFO nova.scheduler.client.report [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Deleted allocations for instance cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.426 243456 DEBUG nova.objects.instance [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lazy-loading 'migration_context' on Instance uuid 86b24867-eb43-40c0-b73c-1416fd35d033 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.447 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.448 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Ensure instance console log exists: /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.448 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.448 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.449 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.456 243456 DEBUG oslo_concurrency.lockutils [None req-f6ea31b7-3677-47b3-808f-b482183840ca cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.779 243456 DEBUG nova.compute.manager [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.779 243456 DEBUG oslo_concurrency.lockutils [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.779 243456 DEBUG oslo_concurrency.lockutils [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.780 243456 DEBUG oslo_concurrency.lockutils [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.780 243456 DEBUG nova.compute.manager [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] No waiting events found dispatching network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.780 243456 WARNING nova.compute.manager [req-6f5581b4-88de-4c3e-b32a-97d25d1410d4 req-5bc7c4d7-58ed-4177-9f53-3b4d5c14f469 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Received unexpected event network-vif-plugged-c2860307-4800-4c5e-adb7-ab75c130f158 for instance with vm_state deleted and task_state None.
Feb 28 10:16:41 compute-0 nova_compute[243452]: 2026-02-28 10:16:41.913 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Successfully created port: 6d7a8479-aede-41fc-89b1-904bcf5e4080 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.068 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.069 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.070 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.070 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.070 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.072 243456 INFO nova.compute.manager [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Terminating instance
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.074 243456 DEBUG nova.compute.manager [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:42 compute-0 kernel: tapb7b37fba-50 (unregistering): left promiscuous mode
Feb 28 10:16:42 compute-0 NetworkManager[49805]: <info>  [1772273802.1342] device (tapb7b37fba-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00797|binding|INFO|Releasing lport b7b37fba-503f-4a0c-98ec-29224477d25f from this chassis (sb_readonly=0)
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00798|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f down in Southbound
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00799|binding|INFO|Removing iface tapb7b37fba-50 ovn-installed in OVS
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.145 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.146 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.147 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c26d2eda-0121-4d10-a6e6-2d194139720b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.148 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[868e92dd-72b7-4271-9c38-3b26c56f3101]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b namespace which is not needed anymore
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Deactivated successfully.
Feb 28 10:16:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Consumed 12.245s CPU time.
Feb 28 10:16:42 compute-0 systemd-machined[209480]: Machine qemu-102-instance-00000054 terminated.
Feb 28 10:16:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 459 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Feb 28 10:16:42 compute-0 kernel: tapb7b37fba-50: entered promiscuous mode
Feb 28 10:16:42 compute-0 systemd-udevd[316526]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:16:42 compute-0 kernel: tapb7b37fba-50 (unregistering): left promiscuous mode
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [NOTICE]   (314311) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [NOTICE]   (314311) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [WARNING]  (314311) : Exiting Master process...
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [WARNING]  (314311) : Exiting Master process...
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [ALERT]    (314311) : Current worker (314313) exited with code 143 (Terminated)
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b[314307]: [WARNING]  (314311) : All workers exited. Exiting... (0)
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00800|binding|INFO|Claiming lport b7b37fba-503f-4a0c-98ec-29224477d25f for this chassis.
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00801|binding|INFO|b7b37fba-503f-4a0c-98ec-29224477d25f: Claiming fa:16:3e:62:3b:12 10.100.0.13
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 systemd[1]: libpod-41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74.scope: Deactivated successfully.
Feb 28 10:16:42 compute-0 podman[316799]: 2026-02-28 10:16:42.311210189 +0000 UTC m=+0.067281469 container died 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.313 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.320 243456 INFO nova.virt.libvirt.driver [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Instance destroyed successfully.
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.320 243456 DEBUG nova.objects.instance [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lazy-loading 'resources' on Instance uuid 30194398-5601-43ac-aae7-290d9d311d6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00802|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f ovn-installed in OVS
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00803|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f up in Southbound
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00804|binding|INFO|Releasing lport b7b37fba-503f-4a0c-98ec-29224477d25f from this chassis (sb_readonly=1)
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00805|if_status|INFO|Dropped 2 log messages in last 108 seconds (most recently, 108 seconds ago) due to excessive rate
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00806|if_status|INFO|Not setting lport b7b37fba-503f-4a0c-98ec-29224477d25f down as sb is readonly
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00807|binding|INFO|Removing iface tapb7b37fba-50 ovn-installed in OVS
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00808|binding|INFO|Releasing lport b7b37fba-503f-4a0c-98ec-29224477d25f from this chassis (sb_readonly=0)
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00809|binding|INFO|Setting lport b7b37fba-503f-4a0c-98ec-29224477d25f down in Southbound
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.335 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.336 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:3b:12 10.100.0.13'], port_security=['fa:16:3e:62:3b:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '30194398-5601-43ac-aae7-290d9d311d6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26d2eda-0121-4d10-a6e6-2d194139720b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '692561f0659d4af58ab14beffb24eb70', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9a45296f-8417-45bb-aaf5-e24181cd6a54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859a5b6e-8469-47b0-aace-63af7cf02d70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b7b37fba-503f-4a0c-98ec-29224477d25f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.345 243456 DEBUG nova.virt.libvirt.vif [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-885707745',display_name='tempest-ListServerFiltersTestJSON-instance-885707745',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-885707745',id=84,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='692561f0659d4af58ab14beffb24eb70',ramdisk_id='',reservation_id='r-39yaumf1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-866927856',owner_user_name='tempest-ListServerFiltersTestJSON-866927856-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:26Z,user_data=None,user_id='cc52c9235e704591a857b1b746c257ea',uuid=30194398-5601-43ac-aae7-290d9d311d6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.346 243456 DEBUG nova.network.os_vif_util [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converting VIF {"id": "b7b37fba-503f-4a0c-98ec-29224477d25f", "address": "fa:16:3e:62:3b:12", "network": {"id": "c26d2eda-0121-4d10-a6e6-2d194139720b", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-637349132-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "692561f0659d4af58ab14beffb24eb70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7b37fba-50", "ovs_interfaceid": "b7b37fba-503f-4a0c-98ec-29224477d25f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.347 243456 DEBUG nova.network.os_vif_util [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.347 243456 DEBUG os_vif [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.350 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b37fba-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.352 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.357 243456 INFO os_vif [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:3b:12,bridge_name='br-int',has_traffic_filtering=True,id=b7b37fba-503f-4a0c-98ec-29224477d25f,network=Network(c26d2eda-0121-4d10-a6e6-2d194139720b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7b37fba-50')
Feb 28 10:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c9c8fa6800f8878181e70321d8347959c19d95496dffeec0d4d8b240d4163b5-merged.mount: Deactivated successfully.
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.407 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.408 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.408 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.408 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.408 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.410 243456 INFO nova.compute.manager [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Terminating instance
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.411 243456 DEBUG nova.compute.manager [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:42 compute-0 podman[316799]: 2026-02-28 10:16:42.421259736 +0000 UTC m=+0.177331026 container cleanup 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:16:42 compute-0 systemd[1]: libpod-conmon-41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74.scope: Deactivated successfully.
Feb 28 10:16:42 compute-0 kernel: tapcace90b2-5d (unregistering): left promiscuous mode
Feb 28 10:16:42 compute-0 NetworkManager[49805]: <info>  [1772273802.4744] device (tapcace90b2-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00810|binding|INFO|Releasing lport cace90b2-5d6b-49ae-a68a-251838fec4ee from this chassis (sb_readonly=0)
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00811|binding|INFO|Setting lport cace90b2-5d6b-49ae-a68a-251838fec4ee down in Southbound
Feb 28 10:16:42 compute-0 ovn_controller[146846]: 2026-02-28T10:16:42Z|00812|binding|INFO|Removing iface tapcace90b2-5d ovn-installed in OVS
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.497 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:a7:4e 10.100.0.4'], port_security=['fa:16:3e:46:a7:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e0b403b3-2f95-4f8c-a00c-53dab3c643b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '59190708-228d-45d4-972b-cf1e677cee18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3678363f-59af-4198-9c0f-ea20e21245ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cace90b2-5d6b-49ae-a68a-251838fec4ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Deactivated successfully.
Feb 28 10:16:42 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Consumed 12.902s CPU time.
Feb 28 10:16:42 compute-0 systemd-machined[209480]: Machine qemu-101-instance-00000053 terminated.
Feb 28 10:16:42 compute-0 podman[316852]: 2026-02-28 10:16:42.552681201 +0000 UTC m=+0.106257399 container remove 41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.559 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[43c728ba-1f77-476f-b237-a0195768d3c1]: (4, ('Sat Feb 28 10:16:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b (41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74)\n41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74\nSat Feb 28 10:16:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b (41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74)\n41d9d06dabb6b8ec09d0fa28e3292d092af3baa6bd9fbcad4b077bf28b3bcb74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.561 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec193ae-ef2d-48fc-bfea-695d24855c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.563 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26d2eda-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 kernel: tapc26d2eda-00: left promiscuous mode
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6cdfab-e4a2-488c-8cb8-9ed489bedd64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.607 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c75bc99-7140-471d-a246-2dac5fc8d2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[721ef679-723f-44e4-a819-953fff53dc62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.628 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c0623963-47b7-4ff0-8f0c-63c3d8f197c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 525713, 'reachable_time': 21499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316875, 'error': None, 'target': 'ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dc26d2eda\x2d0121\x2d4d10\x2da6e6\x2d2d194139720b.mount: Deactivated successfully.
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.632 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c26d2eda-0121-4d10-a6e6-2d194139720b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.633 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab2bc56-3a48-4cca-8e9e-3929e36facfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.634 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.636 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c26d2eda-0121-4d10-a6e6-2d194139720b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.637 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[308d599b-0203-4cd0-951b-93bd45071801]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.638 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b7b37fba-503f-4a0c-98ec-29224477d25f in datapath c26d2eda-0121-4d10-a6e6-2d194139720b unbound from our chassis
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.639 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c26d2eda-0121-4d10-a6e6-2d194139720b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.640 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a201abc-b0a7-440c-bcdc-5fff3277d1fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.640 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cace90b2-5d6b-49ae-a68a-251838fec4ee in datapath 61b03a6e-b883-4f32-b23d-d6fea7058b29 unbound from our chassis
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.641 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61b03a6e-b883-4f32-b23d-d6fea7058b29, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.642 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a42021-0bdf-45cc-aa1f-6d5393861cb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:42.643 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 namespace which is not needed anymore
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.654 243456 INFO nova.virt.libvirt.driver [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Instance destroyed successfully.
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.655 243456 DEBUG nova.objects.instance [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid e0b403b3-2f95-4f8c-a00c-53dab3c643b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.669 243456 DEBUG nova.virt.libvirt.vif [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1502065968',display_name='tempest-TestNetworkAdvancedServerOps-server-1502065968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1502065968',id=83,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7Li636QSVlGT4GqO4s5nNVtfSqEGFXbp91j8sBDWGgCeJ7DY5ScywhTjhOid22UFDvYMoisPR8wfRjbxlccvbHXWgfKkxWbHpB3Y/bp2JgqOgL25/vEzzhz8VLHisNrw==',key_name='tempest-TestNetworkAdvancedServerOps-210036872',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:16:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-6sjybjr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:21Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e0b403b3-2f95-4f8c-a00c-53dab3c643b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.670 243456 DEBUG nova.network.os_vif_util [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.671 243456 DEBUG nova.network.os_vif_util [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.672 243456 DEBUG os_vif [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.674 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcace90b2-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.681 243456 INFO os_vif [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:a7:4e,bridge_name='br-int',has_traffic_filtering=True,id=cace90b2-5d6b-49ae-a68a-251838fec4ee,network=Network(61b03a6e-b883-4f32-b23d-d6fea7058b29),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcace90b2-5d')
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.778 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.779 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.779 243456 INFO nova.compute.manager [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Shelving
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.796 243456 DEBUG nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [NOTICE]   (316213) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [NOTICE]   (316213) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [WARNING]  (316213) : Exiting Master process...
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [WARNING]  (316213) : Exiting Master process...
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [ALERT]    (316213) : Current worker (316222) exited with code 143 (Terminated)
Feb 28 10:16:42 compute-0 neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29[316191]: [WARNING]  (316213) : All workers exited. Exiting... (0)
Feb 28 10:16:42 compute-0 systemd[1]: libpod-757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81.scope: Deactivated successfully.
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.831 243456 DEBUG nova.compute.manager [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.832 243456 DEBUG nova.compute.manager [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing instance network info cache due to event network-changed-cace90b2-5d6b-49ae-a68a-251838fec4ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.832 243456 DEBUG oslo_concurrency.lockutils [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.832 243456 DEBUG oslo_concurrency.lockutils [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:42 compute-0 nova_compute[243452]: 2026-02-28 10:16:42.832 243456 DEBUG nova.network.neutron [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Refreshing network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:16:42 compute-0 podman[316922]: 2026-02-28 10:16:42.837786518 +0000 UTC m=+0.077545832 container died 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d763255489ece1c87daeaab985953e12e57a3a8174e95e23dc08c85608541f9-merged.mount: Deactivated successfully.
Feb 28 10:16:43 compute-0 podman[316922]: 2026-02-28 10:16:43.043361347 +0000 UTC m=+0.283120681 container cleanup 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:16:43 compute-0 systemd[1]: libpod-conmon-757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81.scope: Deactivated successfully.
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.130 243456 INFO nova.virt.libvirt.driver [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Deleting instance files /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c_del
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.131 243456 INFO nova.virt.libvirt.driver [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Deletion of /var/lib/nova/instances/30194398-5601-43ac-aae7-290d9d311d6c_del complete
Feb 28 10:16:43 compute-0 podman[316953]: 2026-02-28 10:16:43.166083105 +0000 UTC m=+0.094291369 container remove 757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.171 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb6e2e8-acf4-4187-a3ce-f59fb58cda2a]: (4, ('Sat Feb 28 10:16:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 (757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81)\n757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81\nSat Feb 28 10:16:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 (757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81)\n757f74f91598eb6bc5782123f2ad804db325d6848ad4a6448c3748f840e8db81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.173 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5314045-9998-468c-ad1d-1863cf94fa54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.174 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61b03a6e-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:43 compute-0 kernel: tap61b03a6e-b0: left promiscuous mode
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.193 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fafecfb3-db06-4b89-843a-ea663e25326c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.197 243456 INFO nova.compute.manager [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Took 1.12 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.199 243456 DEBUG oslo.service.loopingcall [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.201 243456 DEBUG nova.compute.manager [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.201 243456 DEBUG nova.network.neutron [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.214 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6ed4b4-3bea-4540-946f-0cf5c8e3a814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[446fd39c-035a-4e25-a47b-534b5f938b39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.233 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[96c54a67-de8f-4766-8a37-cbb4c97f1ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528720, 'reachable_time': 43274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316969, 'error': None, 'target': 'ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.235 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61b03a6e-b883-4f32-b23d-d6fea7058b29 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:43.235 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[94060f34-7186-4d3c-a12f-0f0b157d8d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:43 compute-0 ceph-mon[76304]: pgmap v1524: 305 pgs: 305 active+clean; 459 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Feb 28 10:16:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d61b03a6e\x2db883\x2d4f32\x2db23d\x2dd6fea7058b29.mount: Deactivated successfully.
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.435 243456 INFO nova.virt.libvirt.driver [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deleting instance files /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_del
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.436 243456 INFO nova.virt.libvirt.driver [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deletion of /var/lib/nova/instances/e0b403b3-2f95-4f8c-a00c-53dab3c643b9_del complete
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.500 243456 INFO nova.compute.manager [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Took 1.09 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.500 243456 DEBUG oslo.service.loopingcall [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.501 243456 DEBUG nova.compute.manager [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.501 243456 DEBUG nova.network.neutron [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.802 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Successfully updated port: 6d7a8479-aede-41fc-89b1-904bcf5e4080 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.818 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.819 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquired lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.819 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:16:43 compute-0 nova_compute[243452]: 2026-02-28 10:16:43.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.112 243456 DEBUG nova.network.neutron [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.135 243456 INFO nova.compute.manager [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Took 0.63 seconds to deallocate network for instance.
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.166 243456 DEBUG nova.compute.manager [req-1194969a-237e-4f26-9e4c-30f7ac15857a req-e2353906-eba8-48ab-a54c-972e61b9ee6c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Received event network-vif-deleted-cace90b2-5d6b-49ae-a68a-251838fec4ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.194 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.195 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 426 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 2.8 MiB/s wr, 169 op/s
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.307 243456 DEBUG oslo_concurrency.processutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.380 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.709 243456 DEBUG nova.network.neutron [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.730 243456 INFO nova.compute.manager [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Took 1.53 seconds to deallocate network for instance.
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.772 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2173296229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.868 243456 DEBUG oslo_concurrency.processutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.874 243456 DEBUG nova.compute.provider_tree [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.888 243456 DEBUG nova.scheduler.client.report [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.906 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.909 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.935 243456 INFO nova.scheduler.client.report [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance e0b403b3-2f95-4f8c-a00c-53dab3c643b9
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.957 243456 DEBUG nova.network.neutron [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updated VIF entry in instance network info cache for port cace90b2-5d6b-49ae-a68a-251838fec4ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.958 243456 DEBUG nova.network.neutron [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Updating instance_info_cache with network_info: [{"id": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "address": "fa:16:3e:46:a7:4e", "network": {"id": "61b03a6e-b883-4f32-b23d-d6fea7058b29", "bridge": "br-int", "label": "tempest-network-smoke--1037265809", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcace90b2-5d", "ovs_interfaceid": "cace90b2-5d6b-49ae-a68a-251838fec4ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:44 compute-0 nova_compute[243452]: 2026-02-28 10:16:44.987 243456 DEBUG oslo_concurrency.lockutils [req-ce7cc6f1-3b90-4c66-a1a0-edce8f27c58b req-c300ea5f-db14-4144-8240-5a8d8aefcdea 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e0b403b3-2f95-4f8c-a00c-53dab3c643b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.005 243456 DEBUG oslo_concurrency.lockutils [None req-9a618367-82ab-42e8-8f86-5ae9349a9476 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e0b403b3-2f95-4f8c-a00c-53dab3c643b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.017 243456 DEBUG oslo_concurrency.processutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.077 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.077 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.077 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 WARNING nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-unplugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state deleted and task_state None.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.078 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 WARNING nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state deleted and task_state None.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.079 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.080 243456 WARNING nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state deleted and task_state None.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.080 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-changed-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.080 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Refreshing instance network info cache due to event network-changed-6d7a8479-aede-41fc-89b1-904bcf5e4080. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.080 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:45 compute-0 kernel: tap52f49649-61 (unregistering): left promiscuous mode
Feb 28 10:16:45 compute-0 NetworkManager[49805]: <info>  [1772273805.2577] device (tap52f49649-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:45 compute-0 ovn_controller[146846]: 2026-02-28T10:16:45Z|00813|binding|INFO|Releasing lport 52f49649-6181-4c24-95b7-fc7227858c70 from this chassis (sb_readonly=0)
Feb 28 10:16:45 compute-0 ovn_controller[146846]: 2026-02-28T10:16:45Z|00814|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 down in Southbound
Feb 28 10:16:45 compute-0 ovn_controller[146846]: 2026-02-28T10:16:45Z|00815|binding|INFO|Removing iface tap52f49649-61 ovn-installed in OVS
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.275 243456 DEBUG nova.network.neutron [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Updating instance_info_cache with network_info: [{"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.277 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.278 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 unbound from our chassis
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.280 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a918db9-b8d9-4cd3-afdb-a8d3aaa2d172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.282 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace which is not needed anymore
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.301 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Releasing lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.302 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Instance network_info: |[{"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.304 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.305 243456 DEBUG nova.network.neutron [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Refreshing network info cache for port 6d7a8479-aede-41fc-89b1-904bcf5e4080 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.310 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Start _get_guest_xml network_info=[{"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:16:45 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000051.scope: Deactivated successfully.
Feb 28 10:16:45 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d00000051.scope: Consumed 14.808s CPU time.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.322 243456 WARNING nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:16:45 compute-0 systemd-machined[209480]: Machine qemu-93-instance-00000051 terminated.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.330 243456 DEBUG nova.virt.libvirt.host [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.331 243456 DEBUG nova.virt.libvirt.host [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.334 243456 DEBUG nova.virt.libvirt.host [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.335 243456 DEBUG nova.virt.libvirt.host [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.336 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.336 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.337 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.338 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.338 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.338 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.339 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.339 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.340 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.340 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.341 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.341 243456 DEBUG nova.virt.hardware [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.345 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:45 compute-0 ceph-mon[76304]: pgmap v1525: 305 pgs: 305 active+clean; 426 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 2.8 MiB/s wr, 169 op/s
Feb 28 10:16:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2173296229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [NOTICE]   (311626) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [WARNING]  (311626) : Exiting Master process...
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [WARNING]  (311626) : Exiting Master process...
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [ALERT]    (311626) : Current worker (311628) exited with code 143 (Terminated)
Feb 28 10:16:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[311622]: [WARNING]  (311626) : All workers exited. Exiting... (0)
Feb 28 10:16:45 compute-0 systemd[1]: libpod-f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e.scope: Deactivated successfully.
Feb 28 10:16:45 compute-0 podman[317036]: 2026-02-28 10:16:45.429716324 +0000 UTC m=+0.048364659 container died f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2721b40a4fc85af083307e4f102b5144e734f21a07b53ee098a9eebd71c6ce49-merged.mount: Deactivated successfully.
Feb 28 10:16:45 compute-0 podman[317036]: 2026-02-28 10:16:45.474187502 +0000 UTC m=+0.092835827 container cleanup f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:16:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:16:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/988655584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:16:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:16:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/988655584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:16:45 compute-0 NetworkManager[49805]: <info>  [1772273805.4866] manager: (tap52f49649-61): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Feb 28 10:16:45 compute-0 systemd[1]: libpod-conmon-f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e.scope: Deactivated successfully.
Feb 28 10:16:45 compute-0 podman[317072]: 2026-02-28 10:16:45.553570394 +0000 UTC m=+0.052594399 container remove f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.559 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45a48f43-f7b1-474b-b823-2f1e589dbd29]: (4, ('Sat Feb 28 10:16:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e)\nf121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e\nSat Feb 28 10:16:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (f121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e)\nf121583a9a088cc3ceffdb0e5a079816e7d7baef454c8803be51187524d3575e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.561 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4f0dcb-00a6-4b5e-a00e-70b22d502d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.562 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:45 compute-0 kernel: tapce4b855a-c0: left promiscuous mode
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764573134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.582 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d12ecc7c-fa93-45ce-8ee8-76cd8979ef07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.601 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4a8a4b-89f7-44be-9a3a-5336eef43b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.603 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1088f19c-e659-4eae-99c1-c7429c8741cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.607 243456 DEBUG oslo_concurrency.processutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.614 243456 DEBUG nova.compute.provider_tree [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.616 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[653527ec-0eba-4897-a85d-72f6fc0e5db4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522600, 'reachable_time': 43720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317114, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dce4b855a\x2dcb9e\x2d4dad\x2dbfe0\x2dddfe326a1505.mount: Deactivated successfully.
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.621 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:45.621 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff8ce6-d913-43d9-b381-5443473a7107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.629 243456 DEBUG nova.scheduler.client.report [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.651 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.678 243456 INFO nova.scheduler.client.report [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Deleted allocations for instance 30194398-5601-43ac-aae7-290d9d311d6c
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.766 243456 DEBUG oslo_concurrency.lockutils [None req-af3b82aa-aaf2-44b3-8ab8-af93fd8723d4 cc52c9235e704591a857b1b746c257ea 692561f0659d4af58ab14beffb24eb70 - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.818 243456 INFO nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance shutdown successfully after 3 seconds.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.825 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance destroyed successfully.
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.825 243456 DEBUG nova.objects.instance [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912882234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.961 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:45 compute-0 nova_compute[243452]: 2026-02-28 10:16:45.994 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.001 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.056 243456 INFO nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Beginning cold snapshot process
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.236 243456 DEBUG nova.virt.libvirt.imagebackend [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:16:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 321 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 2.6 MiB/s wr, 203 op/s
Feb 28 10:16:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/988655584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:16:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/988655584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:16:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3764573134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1912882234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.438 243456 DEBUG nova.storage.rbd_utils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] creating snapshot(2f0ded14493c41dc856d5ad77ca23193) on rbd image(4db5bcd7-8b41-4850-8c88-89ad757c8558_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:16:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:16:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639639626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.599 243456 DEBUG nova.network.neutron [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Updated VIF entry in instance network info cache for port 6d7a8479-aede-41fc-89b1-904bcf5e4080. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.600 243456 DEBUG nova.network.neutron [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Updating instance_info_cache with network_info: [{"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.603 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.604 243456 DEBUG nova.virt.libvirt.vif [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1436685910',display_name='tempest-ServerAddressesTestJSON-server-1436685910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1436685910',id=88,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d42755d9af04d5ba2e00e083f33ca82',ramdisk_id='',reservation_id='r-9chm121b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-407730341',owner_user_name='tempest-ServerAddresses
TestJSON-407730341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:16:40Z,user_data=None,user_id='1cb883c7e1bc470eb00cb23a94d0ddb0',uuid=86b24867-eb43-40c0-b73c-1416fd35d033,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.604 243456 DEBUG nova.network.os_vif_util [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converting VIF {"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.605 243456 DEBUG nova.network.os_vif_util [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.607 243456 DEBUG nova.objects.instance [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86b24867-eb43-40c0-b73c-1416fd35d033 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.638 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <uuid>86b24867-eb43-40c0-b73c-1416fd35d033</uuid>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <name>instance-00000058</name>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerAddressesTestJSON-server-1436685910</nova:name>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:16:45</nova:creationTime>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:user uuid="1cb883c7e1bc470eb00cb23a94d0ddb0">tempest-ServerAddressesTestJSON-407730341-project-member</nova:user>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:project uuid="5d42755d9af04d5ba2e00e083f33ca82">tempest-ServerAddressesTestJSON-407730341</nova:project>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <nova:port uuid="6d7a8479-aede-41fc-89b1-904bcf5e4080">
Feb 28 10:16:46 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <system>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="serial">86b24867-eb43-40c0-b73c-1416fd35d033</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="uuid">86b24867-eb43-40c0-b73c-1416fd35d033</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </system>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <os>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </os>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <features>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </features>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/86b24867-eb43-40c0-b73c-1416fd35d033_disk">
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/86b24867-eb43-40c0-b73c-1416fd35d033_disk.config">
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:16:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:89:5b:c5"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <target dev="tap6d7a8479-ae"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/console.log" append="off"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <video>
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </video>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:16:46 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:16:46 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:16:46 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:16:46 compute-0 nova_compute[243452]: </domain>
Feb 28 10:16:46 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.639 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Preparing to wait for external event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.640 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.640 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.641 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.642 243456 DEBUG nova.virt.libvirt.vif [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1436685910',display_name='tempest-ServerAddressesTestJSON-server-1436685910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1436685910',id=88,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d42755d9af04d5ba2e00e083f33ca82',ramdisk_id='',reservation_id='r-9chm121b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-407730341',owner_user_name='tempest-ServerAddressesTestJSON-407730341-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:16:40Z,user_data=None,user_id='1cb883c7e1bc470eb00cb23a94d0ddb0',uuid=86b24867-eb43-40c0-b73c-1416fd35d033,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.642 243456 DEBUG nova.network.os_vif_util [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converting VIF {"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.643 243456 DEBUG nova.network.os_vif_util [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.643 243456 DEBUG os_vif [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.644 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-86b24867-eb43-40c0-b73c-1416fd35d033" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.645 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.645 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.645 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.646 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.646 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.646 243456 WARNING nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state deleted and task_state None.
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.647 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.647 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "30194398-5601-43ac-aae7-290d9d311d6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.647 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.647 243456 DEBUG oslo_concurrency.lockutils [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "30194398-5601-43ac-aae7-290d9d311d6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.648 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] No waiting events found dispatching network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.648 243456 WARNING nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received unexpected event network-vif-plugged-b7b37fba-503f-4a0c-98ec-29224477d25f for instance with vm_state deleted and task_state None.
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.648 243456 DEBUG nova.compute.manager [req-da61c714-d149-44e5-aedc-8515b33f937a req-b690a027-95c1-491c-aa75-923cfe604b60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Received event network-vif-deleted-b7b37fba-503f-4a0c-98ec-29224477d25f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.649 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.650 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7a8479-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d7a8479-ae, col_values=(('external_ids', {'iface-id': '6d7a8479-aede-41fc-89b1-904bcf5e4080', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:5b:c5', 'vm-uuid': '86b24867-eb43-40c0-b73c-1416fd35d033'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:46 compute-0 NetworkManager[49805]: <info>  [1772273806.6579] manager: (tap6d7a8479-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.664 243456 INFO os_vif [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae')
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.741 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.742 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.743 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] No VIF found with MAC fa:16:3e:89:5b:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.744 243456 INFO nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Using config drive
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.779 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:46 compute-0 nova_compute[243452]: 2026-02-28 10:16:46.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.269 243456 DEBUG nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.270 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.270 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.271 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.271 243456 DEBUG nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.271 243456 WARNING nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state shelving_image_uploading.
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.271 243456 DEBUG nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.272 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.272 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.272 243456 DEBUG oslo_concurrency.lockutils [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.273 243456 DEBUG nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.273 243456 WARNING nova.compute.manager [req-f914a0b9-cbf8-42df-a1b0-9025fd2e9711 req-fcce919e-8b16-48a3-9a9e-732816524f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state shelving_image_uploading.
Feb 28 10:16:47 compute-0 ceph-mon[76304]: pgmap v1526: 305 pgs: 305 active+clean; 321 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 791 KiB/s rd, 2.6 MiB/s wr, 203 op/s
Feb 28 10:16:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/639639626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:16:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Feb 28 10:16:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Feb 28 10:16:47 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.472 243456 DEBUG nova.storage.rbd_utils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] cloning vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk@2f0ded14493c41dc856d5ad77ca23193 to images/22a70b75-075f-4875-bc53-299afbb39f44 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:16:47 compute-0 nova_compute[243452]: 2026-02-28 10:16:47.576 243456 DEBUG nova.storage.rbd_utils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] flattening images/22a70b75-075f-4875-bc53-299afbb39f44 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.142 243456 DEBUG nova.storage.rbd_utils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] removing snapshot(2f0ded14493c41dc856d5ad77ca23193) on rbd image(4db5bcd7-8b41-4850-8c88-89ad757c8558_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:16:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 279 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.364 243456 INFO nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Creating config drive at /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.368 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps2snf4f3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Feb 28 10:16:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Feb 28 10:16:48 compute-0 ceph-mon[76304]: osdmap e238: 3 total, 3 up, 3 in
Feb 28 10:16:48 compute-0 ceph-mon[76304]: pgmap v1528: 305 pgs: 305 active+clean; 279 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Feb 28 10:16:48 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.471 243456 DEBUG nova.storage.rbd_utils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] creating snapshot(snap) on rbd image(22a70b75-075f-4875-bc53-299afbb39f44) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.530 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps2snf4f3" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.552 243456 DEBUG nova.storage.rbd_utils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] rbd image 86b24867-eb43-40c0-b73c-1416fd35d033_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.556 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config 86b24867-eb43-40c0-b73c-1416fd35d033_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.718 243456 DEBUG oslo_concurrency.processutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config 86b24867-eb43-40c0-b73c-1416fd35d033_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.720 243456 INFO nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Deleting local config drive /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033/disk.config because it was imported into RBD.
Feb 28 10:16:48 compute-0 NetworkManager[49805]: <info>  [1772273808.7793] manager: (tap6d7a8479-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Feb 28 10:16:48 compute-0 kernel: tap6d7a8479-ae: entered promiscuous mode
Feb 28 10:16:48 compute-0 ovn_controller[146846]: 2026-02-28T10:16:48Z|00816|binding|INFO|Claiming lport 6d7a8479-aede-41fc-89b1-904bcf5e4080 for this chassis.
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:48 compute-0 ovn_controller[146846]: 2026-02-28T10:16:48Z|00817|binding|INFO|6d7a8479-aede-41fc-89b1-904bcf5e4080: Claiming fa:16:3e:89:5b:c5 10.100.0.7
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.799 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:5b:c5 10.100.0.7'], port_security=['fa:16:3e:89:5b:c5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '86b24867-eb43-40c0-b73c-1416fd35d033', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d42755d9af04d5ba2e00e083f33ca82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07ba65c1-0f0f-486a-9983-57d9976a616f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7bc6dd1-c8d1-4227-b559-08cba2bf9212, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6d7a8479-aede-41fc-89b1-904bcf5e4080) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.800 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6d7a8479-aede-41fc-89b1-904bcf5e4080 in datapath d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a bound to our chassis
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.801 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a
Feb 28 10:16:48 compute-0 systemd-udevd[317378]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.814 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb89e9ef-a8bc-4d85-b893-421137d562e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.817 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd62e70ba-f1 in ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.819 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd62e70ba-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.819 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd68ff72-228a-4072-a739-91103d1177a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.821 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[05cdb1b8-ab42-41af-8188-cef6904030df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:48 compute-0 systemd-machined[209480]: New machine qemu-103-instance-00000058.
Feb 28 10:16:48 compute-0 NetworkManager[49805]: <info>  [1772273808.8243] device (tap6d7a8479-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:16:48 compute-0 NetworkManager[49805]: <info>  [1772273808.8253] device (tap6d7a8479-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:16:48 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000058.
Feb 28 10:16:48 compute-0 ovn_controller[146846]: 2026-02-28T10:16:48Z|00818|binding|INFO|Setting lport 6d7a8479-aede-41fc-89b1-904bcf5e4080 ovn-installed in OVS
Feb 28 10:16:48 compute-0 ovn_controller[146846]: 2026-02-28T10:16:48Z|00819|binding|INFO|Setting lport 6d7a8479-aede-41fc-89b1-904bcf5e4080 up in Southbound
Feb 28 10:16:48 compute-0 nova_compute[243452]: 2026-02-28 10:16:48.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.841 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6d293505-d511-4355-a088-51dfa35235bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.854 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6640cbb-be49-44b8-960f-fcc623723b26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.880 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d56d4afb-be88-4fa0-a7ed-4c6d872359fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 systemd-udevd[317384]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:16:48 compute-0 NetworkManager[49805]: <info>  [1772273808.8878] manager: (tapd62e70ba-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.887 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[277ef9b7-1fbc-4a5a-9695-fd1e1a36fbe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.918 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[07d39cc9-7817-4cce-94f3-2679f9774ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.922 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[22d292d5-8fe2-4a35-b38d-b6cac220d90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 NetworkManager[49805]: <info>  [1772273808.9426] device (tapd62e70ba-f0): carrier: link connected
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.946 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[233bdc76-0d1d-4737-a2a6-3f7eb1931036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.963 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[642d804a-257e-4fa6-b221-c021bacff0a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e70ba-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:61:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531622, 'reachable_time': 30631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317417, 'error': None, 'target': 'ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9de1e68e-1abb-422e-bc3f-717c229b9679]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:6195'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531622, 'tstamp': 531622}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317418, 'error': None, 'target': 'ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:48.994 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d583303-83bd-4833-a7fe-327e47288c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e70ba-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:61:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531622, 'reachable_time': 30631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317419, 'error': None, 'target': 'ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec6e641-9730-47e7-9844-a0b8c1698698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.222 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba2c80-3a7e-4415-89c6-e79b66659b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.224 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e70ba-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.224 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.224 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e70ba-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:49 compute-0 NetworkManager[49805]: <info>  [1772273809.2275] manager: (tapd62e70ba-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Feb 28 10:16:49 compute-0 kernel: tapd62e70ba-f0: entered promiscuous mode
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.232 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e70ba-f0, col_values=(('external_ids', {'iface-id': '2188e2cf-2c43-4995-a3d8-5e06f86c6194'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:49 compute-0 ovn_controller[146846]: 2026-02-28T10:16:49Z|00820|binding|INFO|Releasing lport 2188e2cf-2c43-4995-a3d8-5e06f86c6194 from this chassis (sb_readonly=0)
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.246 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.247 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[772fb604-54b1-4564-8698-628a4065bfab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.248 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a.pid.haproxy
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:16:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:49.249 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'env', 'PROCESS_TAG=haproxy-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:16:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.443 243456 DEBUG nova.compute.manager [req-0fb8f016-31f9-4e64-a6f3-b1089200fd4b req-c18fad78-03a8-428b-9751-f27b210d4dd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.443 243456 DEBUG oslo_concurrency.lockutils [req-0fb8f016-31f9-4e64-a6f3-b1089200fd4b req-c18fad78-03a8-428b-9751-f27b210d4dd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.444 243456 DEBUG oslo_concurrency.lockutils [req-0fb8f016-31f9-4e64-a6f3-b1089200fd4b req-c18fad78-03a8-428b-9751-f27b210d4dd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.444 243456 DEBUG oslo_concurrency.lockutils [req-0fb8f016-31f9-4e64-a6f3-b1089200fd4b req-c18fad78-03a8-428b-9751-f27b210d4dd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:49 compute-0 nova_compute[243452]: 2026-02-28 10:16:49.444 243456 DEBUG nova.compute.manager [req-0fb8f016-31f9-4e64-a6f3-b1089200fd4b req-c18fad78-03a8-428b-9751-f27b210d4dd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Processing event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:16:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Feb 28 10:16:49 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Feb 28 10:16:49 compute-0 ceph-mon[76304]: osdmap e239: 3 total, 3 up, 3 in
Feb 28 10:16:49 compute-0 podman[317454]: 2026-02-28 10:16:49.644767593 +0000 UTC m=+0.064173450 container create d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:16:49 compute-0 systemd[1]: Started libpod-conmon-d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90.scope.
Feb 28 10:16:49 compute-0 podman[317454]: 2026-02-28 10:16:49.603094155 +0000 UTC m=+0.022500042 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:16:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19da99a957ee788867e2e82b55051f701acf44d38fdaf6548b70aac6ad798785/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:16:49 compute-0 podman[317454]: 2026-02-28 10:16:49.765781742 +0000 UTC m=+0.185187659 container init d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:16:49 compute-0 podman[317454]: 2026-02-28 10:16:49.772795892 +0000 UTC m=+0.192201799 container start d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:16:49 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [NOTICE]   (317473) : New worker (317475) forked
Feb 28 10:16:49 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [NOTICE]   (317473) : Loading success.
Feb 28 10:16:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 301 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 249 op/s
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.452 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273795.4508047, 9119fb04-24fa-460c-a772-4ca398874b4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.453 243456 INFO nova.compute.manager [-] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] VM Stopped (Lifecycle Event)
Feb 28 10:16:50 compute-0 ceph-mon[76304]: osdmap e240: 3 total, 3 up, 3 in
Feb 28 10:16:50 compute-0 ceph-mon[76304]: pgmap v1531: 305 pgs: 305 active+clean; 301 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 249 op/s
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.470 243456 DEBUG nova.compute.manager [None req-d751ccc0-9080-40f3-9c4f-e8dfa6862fd0 - - - - - -] [instance: 9119fb04-24fa-460c-a772-4ca398874b4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.567 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273810.566988, 86b24867-eb43-40c0-b73c-1416fd35d033 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.568 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] VM Started (Lifecycle Event)
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.571 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.575 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.578 243456 INFO nova.virt.libvirt.driver [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Instance spawned successfully.
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.579 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.595 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.603 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.609 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.609 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.610 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.611 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.611 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.612 243456 DEBUG nova.virt.libvirt.driver [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.636 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.636 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273810.5701234, 86b24867-eb43-40c0-b73c-1416fd35d033 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.636 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] VM Paused (Lifecycle Event)
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.661 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.665 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273810.5740044, 86b24867-eb43-40c0-b73c-1416fd35d033 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.665 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] VM Resumed (Lifecycle Event)
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.673 243456 INFO nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Took 9.92 seconds to spawn the instance on the hypervisor.
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.674 243456 DEBUG nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.692 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.713 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.749 243456 INFO nova.compute.manager [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Took 11.01 seconds to build instance.
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.768 243456 DEBUG oslo_concurrency.lockutils [None req-8c8e0500-afba-4fad-a5bf-3faf16167167 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.934 243456 INFO nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Snapshot image upload complete
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.935 243456 DEBUG nova.compute.manager [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:50 compute-0 nova_compute[243452]: 2026-02-28 10:16:50.997 243456 INFO nova.compute.manager [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Shelve offloading
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.004 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance destroyed successfully.
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.004 243456 DEBUG nova.compute.manager [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.008 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.008 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.009 243456 DEBUG nova.network.neutron [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.522 243456 DEBUG nova.compute.manager [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.522 243456 DEBUG oslo_concurrency.lockutils [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.523 243456 DEBUG oslo_concurrency.lockutils [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.523 243456 DEBUG oslo_concurrency.lockutils [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.523 243456 DEBUG nova.compute.manager [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] No waiting events found dispatching network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.523 243456 WARNING nova.compute.manager [req-d9b3a635-831c-4642-a2ef-7b7c46bc4fb6 req-c7492957-6126-42d7-b689-b6961785ec51 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received unexpected event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 for instance with vm_state active and task_state None.
Feb 28 10:16:51 compute-0 nova_compute[243452]: 2026-02-28 10:16:51.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 342 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 6.5 MiB/s wr, 170 op/s
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.537 243456 DEBUG nova.network.neutron [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.574 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.656 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.657 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.658 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.658 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.658 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.660 243456 INFO nova.compute.manager [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Terminating instance
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.661 243456 DEBUG nova.compute.manager [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:16:52 compute-0 kernel: tap6d7a8479-ae (unregistering): left promiscuous mode
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 NetworkManager[49805]: <info>  [1772273812.7086] device (tap6d7a8479-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:16:52 compute-0 ovn_controller[146846]: 2026-02-28T10:16:52Z|00821|binding|INFO|Releasing lport 6d7a8479-aede-41fc-89b1-904bcf5e4080 from this chassis (sb_readonly=0)
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 ovn_controller[146846]: 2026-02-28T10:16:52Z|00822|binding|INFO|Setting lport 6d7a8479-aede-41fc-89b1-904bcf5e4080 down in Southbound
Feb 28 10:16:52 compute-0 ovn_controller[146846]: 2026-02-28T10:16:52Z|00823|binding|INFO|Removing iface tap6d7a8479-ae ovn-installed in OVS
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Deactivated successfully.
Feb 28 10:16:52 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000058.scope: Consumed 3.896s CPU time.
Feb 28 10:16:52 compute-0 systemd-machined[209480]: Machine qemu-103-instance-00000058 terminated.
Feb 28 10:16:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:52.752 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:5b:c5 10.100.0.7'], port_security=['fa:16:3e:89:5b:c5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '86b24867-eb43-40c0-b73c-1416fd35d033', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d42755d9af04d5ba2e00e083f33ca82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07ba65c1-0f0f-486a-9983-57d9976a616f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7bc6dd1-c8d1-4227-b559-08cba2bf9212, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6d7a8479-aede-41fc-89b1-904bcf5e4080) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:16:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:52.755 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6d7a8479-aede-41fc-89b1-904bcf5e4080 in datapath d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a unbound from our chassis
Feb 28 10:16:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:52.757 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:16:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:52.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a1a1ee-294e-4b4f-96fb-c7f11370231c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:52.759 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a namespace which is not needed anymore
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [NOTICE]   (317473) : haproxy version is 2.8.14-c23fe91
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [NOTICE]   (317473) : path to executable is /usr/sbin/haproxy
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [WARNING]  (317473) : Exiting Master process...
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [WARNING]  (317473) : Exiting Master process...
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [ALERT]    (317473) : Current worker (317475) exited with code 143 (Terminated)
Feb 28 10:16:52 compute-0 neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a[317469]: [WARNING]  (317473) : All workers exited. Exiting... (0)
Feb 28 10:16:52 compute-0 systemd[1]: libpod-d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90.scope: Deactivated successfully.
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.906 243456 INFO nova.virt.libvirt.driver [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Instance destroyed successfully.
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.907 243456 DEBUG nova.objects.instance [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lazy-loading 'resources' on Instance uuid 86b24867-eb43-40c0-b73c-1416fd35d033 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:52 compute-0 podman[317554]: 2026-02-28 10:16:52.910144884 +0000 UTC m=+0.060239048 container died d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90-userdata-shm.mount: Deactivated successfully.
Feb 28 10:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-19da99a957ee788867e2e82b55051f701acf44d38fdaf6548b70aac6ad798785-merged.mount: Deactivated successfully.
Feb 28 10:16:52 compute-0 podman[317554]: 2026-02-28 10:16:52.960698915 +0000 UTC m=+0.110793059 container cleanup d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:16:52 compute-0 systemd[1]: libpod-conmon-d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90.scope: Deactivated successfully.
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.985 243456 DEBUG nova.virt.libvirt.vif [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:16:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1436685910',display_name='tempest-ServerAddressesTestJSON-server-1436685910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1436685910',id=88,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:16:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d42755d9af04d5ba2e00e083f33ca82',ramdisk_id='',reservation_id='r-9chm121b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-407730341',owner_user_name='tempest-ServerAddressesTestJSON-407730341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:50Z,user_data=None,user_id='1cb883c7e1bc470eb00cb23a94d0ddb0',uuid=86b24867-eb43-40c0-b73c-1416fd35d033,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.985 243456 DEBUG nova.network.os_vif_util [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converting VIF {"id": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "address": "fa:16:3e:89:5b:c5", "network": {"id": "d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-251416721-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d42755d9af04d5ba2e00e083f33ca82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d7a8479-ae", "ovs_interfaceid": "6d7a8479-aede-41fc-89b1-904bcf5e4080", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.986 243456 DEBUG nova.network.os_vif_util [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.987 243456 DEBUG os_vif [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.989 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7a8479-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:52 compute-0 nova_compute[243452]: 2026-02-28 10:16:52.998 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.001 243456 INFO os_vif [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:5b:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d7a8479-aede-41fc-89b1-904bcf5e4080,network=Network(d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d7a8479-ae')
Feb 28 10:16:53 compute-0 podman[317596]: 2026-02-28 10:16:53.064708799 +0000 UTC m=+0.084211011 container remove d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.070 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3027f4-0c48-4ecd-879d-102d91b0553f]: (4, ('Sat Feb 28 10:16:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a (d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90)\nd6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90\nSat Feb 28 10:16:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a (d6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90)\nd6cf664ebb4cb232b16be2594302a5293b28d8fd29bba4f4c634fb10c7ec6f90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.073 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7a90b5-97e5-4777-9eea-28a077529d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.074 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e70ba-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:53 compute-0 kernel: tapd62e70ba-f0: left promiscuous mode
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.089 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2418be-404c-4057-b751-010c079911b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6712780-79f6-4def-8cef-ce01e7909564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.102 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73b580a7-fa5a-48cf-8747-05b129e65b14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.121 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b209e6ca-9bf0-454b-a58c-a84c4bcff998]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531616, 'reachable_time': 41043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317634, 'error': None, 'target': 'ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.123 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d62e70ba-fd0f-493a-b3c9-f68ca04c7c2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:16:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:53.123 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3a56af-61fb-4831-b860-f107568c7356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:16:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dd62e70ba\x2dfd0f\x2d493a\x2db3c9\x2df68ca04c7c2a.mount: Deactivated successfully.
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.321 243456 INFO nova.virt.libvirt.driver [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Deleting instance files /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033_del
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.322 243456 INFO nova.virt.libvirt.driver [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Deletion of /var/lib/nova/instances/86b24867-eb43-40c0-b73c-1416fd35d033_del complete
Feb 28 10:16:53 compute-0 ceph-mon[76304]: pgmap v1532: 305 pgs: 305 active+clean; 342 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 6.5 MiB/s wr, 170 op/s
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.367 243456 INFO nova.compute.manager [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.367 243456 DEBUG oslo.service.loopingcall [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.367 243456 DEBUG nova.compute.manager [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.368 243456 DEBUG nova.network.neutron [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:16:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.594 243456 DEBUG nova.compute.manager [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-unplugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.594 243456 DEBUG oslo_concurrency.lockutils [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.594 243456 DEBUG oslo_concurrency.lockutils [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.594 243456 DEBUG oslo_concurrency.lockutils [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.595 243456 DEBUG nova.compute.manager [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] No waiting events found dispatching network-vif-unplugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.595 243456 DEBUG nova.compute.manager [req-b8cb1dd0-8ee3-4f5f-9792-4d97a10064f7 req-89d82986-bcad-4ea9-9ebb-7358dac642fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-unplugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:16:53 compute-0 nova_compute[243452]: 2026-02-28 10:16:53.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 358 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 6.8 MiB/s wr, 206 op/s
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.505 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273799.5048418, cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.506 243456 INFO nova.compute.manager [-] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] VM Stopped (Lifecycle Event)
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.526 243456 DEBUG nova.compute.manager [None req-8d7ae80d-7115-40e1-8433-814b1f654c63 - - - - - -] [instance: cfb3231b-bbd1-4dec-b5d4-06f33f8f0b6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.592 243456 DEBUG nova.network.neutron [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.613 243456 INFO nova.compute.manager [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Took 1.25 seconds to deallocate network for instance.
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.654 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.654 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.673 243456 DEBUG nova.compute.manager [req-a2ac8591-4aa3-4408-926b-0cd3f9edc25a req-f837204e-f31f-4379-abef-fa1d9d8f13b7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-deleted-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.731 243456 DEBUG oslo_concurrency.processutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.868 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance destroyed successfully.
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.871 243456 DEBUG nova.objects.instance [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'resources' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.893 243456 DEBUG nova.virt.libvirt.vif [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member',shelved_at='2026-02-28T10:16:50.935519',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22a70b75-075f-4875-bc53-299afbb39f44'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:16:46Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.894 243456 DEBUG nova.network.os_vif_util [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.895 243456 DEBUG nova.network.os_vif_util [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.895 243456 DEBUG os_vif [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.899 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f49649-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:16:54 compute-0 nova_compute[243452]: 2026-02-28 10:16:54.909 243456 INFO os_vif [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 10:16:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175416881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.282 243456 DEBUG oslo_concurrency.processutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.289 243456 DEBUG nova.compute.provider_tree [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.314 243456 DEBUG nova.scheduler.client.report [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.342 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:55 compute-0 ceph-mon[76304]: pgmap v1533: 305 pgs: 305 active+clean; 358 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 6.8 MiB/s wr, 206 op/s
Feb 28 10:16:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1175416881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.373 243456 INFO nova.scheduler.client.report [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Deleted allocations for instance 86b24867-eb43-40c0-b73c-1416fd35d033
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.381 243456 INFO nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deleting instance files /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558_del
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.382 243456 INFO nova.virt.libvirt.driver [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deletion of /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558_del complete
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.460 243456 DEBUG oslo_concurrency.lockutils [None req-b20afb48-1e16-42d0-b412-d176abc118a1 1cb883c7e1bc470eb00cb23a94d0ddb0 5d42755d9af04d5ba2e00e083f33ca82 - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.503 243456 INFO nova.scheduler.client.report [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Deleted allocations for instance 4db5bcd7-8b41-4850-8c88-89ad757c8558
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.549 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.550 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.570 243456 DEBUG oslo_concurrency.processutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.816 243456 DEBUG nova.compute.manager [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.817 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.818 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.818 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "86b24867-eb43-40c0-b73c-1416fd35d033-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.819 243456 DEBUG nova.compute.manager [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] No waiting events found dispatching network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.820 243456 WARNING nova.compute.manager [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Received unexpected event network-vif-plugged-6d7a8479-aede-41fc-89b1-904bcf5e4080 for instance with vm_state deleted and task_state None.
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.820 243456 DEBUG nova.compute.manager [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-changed-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.820 243456 DEBUG nova.compute.manager [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing instance network info cache due to event network-changed-52f49649-6181-4c24-95b7-fc7227858c70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.821 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.821 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:16:55 compute-0 nova_compute[243452]: 2026-02-28 10:16:55.822 243456 DEBUG nova.network.neutron [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:16:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:16:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606999513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:56 compute-0 nova_compute[243452]: 2026-02-28 10:16:56.115 243456 DEBUG oslo_concurrency.processutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:16:56 compute-0 nova_compute[243452]: 2026-02-28 10:16:56.123 243456 DEBUG nova.compute.provider_tree [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:16:56 compute-0 nova_compute[243452]: 2026-02-28 10:16:56.143 243456 DEBUG nova.scheduler.client.report [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:16:56 compute-0 nova_compute[243452]: 2026-02-28 10:16:56.170 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:56 compute-0 nova_compute[243452]: 2026-02-28 10:16:56.226 243456 DEBUG oslo_concurrency.lockutils [None req-517c9898-b695-4178-8332-d5f85324f5bd 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 291 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 5.8 MiB/s wr, 294 op/s
Feb 28 10:16:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/606999513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.319 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273802.3182952, 30194398-5601-43ac-aae7-290d9d311d6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.320 243456 INFO nova.compute.manager [-] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] VM Stopped (Lifecycle Event)
Feb 28 10:16:57 compute-0 ceph-mon[76304]: pgmap v1534: 305 pgs: 305 active+clean; 291 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 5.8 MiB/s wr, 294 op/s
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.465 243456 DEBUG nova.compute.manager [None req-647abeda-bed1-454c-bd00-d2bba54f6119 - - - - - -] [instance: 30194398-5601-43ac-aae7-290d9d311d6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.649 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273802.6478417, e0b403b3-2f95-4f8c-a00c-53dab3c643b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.650 243456 INFO nova.compute.manager [-] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] VM Stopped (Lifecycle Event)
Feb 28 10:16:57 compute-0 nova_compute[243452]: 2026-02-28 10:16:57.670 243456 DEBUG nova.compute.manager [None req-0d2bf253-4794-4186-8580-fdb5ad05bf6c - - - - - -] [instance: e0b403b3-2f95-4f8c-a00c-53dab3c643b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:16:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:57.854 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:16:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:57.855 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:16:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:16:57.855 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:16:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 261 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 4.8 MiB/s wr, 257 op/s
Feb 28 10:16:58 compute-0 sudo[317699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:16:58 compute-0 sudo[317699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:58 compute-0 sudo[317699]: pam_unix(sudo:session): session closed for user root
Feb 28 10:16:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:16:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Feb 28 10:16:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Feb 28 10:16:58 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Feb 28 10:16:58 compute-0 sudo[317724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 10:16:58 compute-0 sudo[317724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:58 compute-0 nova_compute[243452]: 2026-02-28 10:16:58.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:16:58 compute-0 sudo[317724]: pam_unix(sudo:session): session closed for user root
Feb 28 10:16:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:16:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:59 compute-0 sudo[317769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:16:59 compute-0 sudo[317769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:59 compute-0 sudo[317769]: pam_unix(sudo:session): session closed for user root
Feb 28 10:16:59 compute-0 sudo[317794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:16:59 compute-0 sudo[317794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: pgmap v1535: 305 pgs: 305 active+clean; 261 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 4.8 MiB/s wr, 257 op/s
Feb 28 10:16:59 compute-0 ceph-mon[76304]: osdmap e241: 3 total, 3 up, 3 in
Feb 28 10:16:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:59 compute-0 nova_compute[243452]: 2026-02-28 10:16:59.535 243456 DEBUG nova.network.neutron [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updated VIF entry in instance network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:16:59 compute-0 nova_compute[243452]: 2026-02-28 10:16:59.536 243456 DEBUG nova.network.neutron [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap52f49649-61", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:16:59 compute-0 sudo[317794]: pam_unix(sudo:session): session closed for user root
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:16:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:16:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:16:59 compute-0 nova_compute[243452]: 2026-02-28 10:16:59.710 243456 DEBUG oslo_concurrency.lockutils [req-0b8224ff-70b2-42fe-92c5-2a060973c768 req-357e59c9-3c3c-4a99-8a69-24a1d9a4d5f2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:16:59 compute-0 sudo[317850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:16:59 compute-0 sudo[317850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:59 compute-0 sudo[317850]: pam_unix(sudo:session): session closed for user root
Feb 28 10:16:59 compute-0 sudo[317875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:16:59 compute-0 sudo[317875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:16:59 compute-0 nova_compute[243452]: 2026-02-28 10:16:59.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.078234941 +0000 UTC m=+0.046411784 container create 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:17:00 compute-0 systemd[1]: Started libpod-conmon-891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d.scope.
Feb 28 10:17:00 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.058747745 +0000 UTC m=+0.026924568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.165987162 +0000 UTC m=+0.134164035 container init 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.175772441 +0000 UTC m=+0.143949264 container start 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.179522628 +0000 UTC m=+0.147699621 container attach 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:17:00 compute-0 stoic_hawking[317928]: 167 167
Feb 28 10:17:00 compute-0 systemd[1]: libpod-891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d.scope: Deactivated successfully.
Feb 28 10:17:00 compute-0 conmon[317928]: conmon 891fc15349d1bad9b2fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d.scope/container/memory.events
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.184014536 +0000 UTC m=+0.152191389 container died 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-112f788b413947da1cca5c0ab4325baee3c4e72352c0799718eba12ded1d1142-merged.mount: Deactivated successfully.
Feb 28 10:17:00 compute-0 podman[317912]: 2026-02-28 10:17:00.230210843 +0000 UTC m=+0.198387676 container remove 891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hawking, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:17:00 compute-0 systemd[1]: libpod-conmon-891fc15349d1bad9b2fa90e1b5b661771f115293b54a8fbdb139fd10eeb7730d.scope: Deactivated successfully.
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 232 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:00 compute-0 podman[317950]: 2026-02-28 10:17:00.38415983 +0000 UTC m=+0.055104171 container create df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:17:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:17:00 compute-0 podman[317950]: 2026-02-28 10:17:00.352653142 +0000 UTC m=+0.023597563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:00 compute-0 systemd[1]: Started libpod-conmon-df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12.scope.
Feb 28 10:17:00 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:00 compute-0 nova_compute[243452]: 2026-02-28 10:17:00.505 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273805.4987009, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:00 compute-0 nova_compute[243452]: 2026-02-28 10:17:00.507 243456 INFO nova.compute.manager [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Stopped (Lifecycle Event)
Feb 28 10:17:00 compute-0 podman[317950]: 2026-02-28 10:17:00.527490526 +0000 UTC m=+0.198434887 container init df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:17:00 compute-0 nova_compute[243452]: 2026-02-28 10:17:00.531 243456 DEBUG nova.compute.manager [None req-66afd87a-68aa-4ef0-96b0-3d1830e3dfae - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:00 compute-0 podman[317950]: 2026-02-28 10:17:00.539462847 +0000 UTC m=+0.210407188 container start df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:17:00 compute-0 podman[317950]: 2026-02-28 10:17:00.544090119 +0000 UTC m=+0.215034500 container attach df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:17:00 compute-0 reverent_poincare[317967]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:17:00 compute-0 reverent_poincare[317967]: --> All data devices are unavailable
Feb 28 10:17:00 compute-0 systemd[1]: libpod-df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12.scope: Deactivated successfully.
Feb 28 10:17:01 compute-0 podman[317987]: 2026-02-28 10:17:01.005365887 +0000 UTC m=+0.029704298 container died df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 10:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9d3b9f4fb159e189f113d762ae4a6b15522716a60844190c093c9d2200230c5-merged.mount: Deactivated successfully.
Feb 28 10:17:01 compute-0 podman[317987]: 2026-02-28 10:17:01.049387421 +0000 UTC m=+0.073725772 container remove df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:17:01 compute-0 systemd[1]: libpod-conmon-df905bee0a45e894ee5494819a7246d4fa792f45d415b828ab584051d1d73d12.scope: Deactivated successfully.
Feb 28 10:17:01 compute-0 sudo[317875]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:01 compute-0 sudo[318002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:17:01 compute-0 sudo[318002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:17:01 compute-0 sudo[318002]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:01 compute-0 sudo[318027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:17:01 compute-0 sudo[318027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:17:01 compute-0 ceph-mon[76304]: pgmap v1537: 305 pgs: 305 active+clean; 232 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.5 MiB/s wr, 205 op/s
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.551657427 +0000 UTC m=+0.042193343 container create 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:17:01 compute-0 systemd[1]: Started libpod-conmon-7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742.scope.
Feb 28 10:17:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.531843953 +0000 UTC m=+0.022379669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.638119972 +0000 UTC m=+0.128655688 container init 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.642871847 +0000 UTC m=+0.133407563 container start 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.646419458 +0000 UTC m=+0.136955164 container attach 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 28 10:17:01 compute-0 frosty_spence[318080]: 167 167
Feb 28 10:17:01 compute-0 systemd[1]: libpod-7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742.scope: Deactivated successfully.
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.651561835 +0000 UTC m=+0.142097561 container died 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bfc1f2387961464997c76dba9727b28b3f2ce77c5e1a0bd4e4173d0e0537247-merged.mount: Deactivated successfully.
Feb 28 10:17:01 compute-0 podman[318064]: 2026-02-28 10:17:01.697436062 +0000 UTC m=+0.187971768 container remove 7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:17:01 compute-0 systemd[1]: libpod-conmon-7a02274fa0cf383e6e84080859945790ac94e0bb20054d4cdf4ae2e61363b742.scope: Deactivated successfully.
Feb 28 10:17:01 compute-0 nova_compute[243452]: 2026-02-28 10:17:01.889 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:01 compute-0 nova_compute[243452]: 2026-02-28 10:17:01.892 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:01 compute-0 nova_compute[243452]: 2026-02-28 10:17:01.893 243456 INFO nova.compute.manager [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Unshelving
Feb 28 10:17:01 compute-0 podman[318103]: 2026-02-28 10:17:01.895697853 +0000 UTC m=+0.064101558 container create 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:17:01 compute-0 systemd[1]: Started libpod-conmon-391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67.scope.
Feb 28 10:17:01 compute-0 podman[318103]: 2026-02-28 10:17:01.87068655 +0000 UTC m=+0.039090355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1332f19d1be26bc2a6f12b9ae096e081d93adeafced18f8948a5564f6894fe4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1332f19d1be26bc2a6f12b9ae096e081d93adeafced18f8948a5564f6894fe4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1332f19d1be26bc2a6f12b9ae096e081d93adeafced18f8948a5564f6894fe4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1332f19d1be26bc2a6f12b9ae096e081d93adeafced18f8948a5564f6894fe4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:01 compute-0 podman[318103]: 2026-02-28 10:17:01.998359889 +0000 UTC m=+0.166763614 container init 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:17:01 compute-0 nova_compute[243452]: 2026-02-28 10:17:01.997 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:01 compute-0 nova_compute[243452]: 2026-02-28 10:17:01.998 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:02 compute-0 podman[318103]: 2026-02-28 10:17:02.005082681 +0000 UTC m=+0.173486386 container start 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.007 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:02 compute-0 podman[318103]: 2026-02-28 10:17:02.009407284 +0000 UTC m=+0.177811039 container attach 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.036 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.090 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.091 243456 INFO nova.compute.claims [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.239 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 785 KiB/s wr, 164 op/s
Feb 28 10:17:02 compute-0 quirky_carver[318119]: {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     "0": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "devices": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "/dev/loop3"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             ],
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_name": "ceph_lv0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_size": "21470642176",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "name": "ceph_lv0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "tags": {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_name": "ceph",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.crush_device_class": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.encrypted": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.objectstore": "bluestore",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_id": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.vdo": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.with_tpm": "0"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             },
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "vg_name": "ceph_vg0"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         }
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     ],
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     "1": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "devices": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "/dev/loop4"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             ],
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_name": "ceph_lv1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_size": "21470642176",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "name": "ceph_lv1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "tags": {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_name": "ceph",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.crush_device_class": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.encrypted": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.objectstore": "bluestore",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_id": "1",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.vdo": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.with_tpm": "0"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             },
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "vg_name": "ceph_vg1"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         }
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     ],
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     "2": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "devices": [
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "/dev/loop5"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             ],
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_name": "ceph_lv2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_size": "21470642176",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "name": "ceph_lv2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "tags": {
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.cluster_name": "ceph",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.crush_device_class": "",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.encrypted": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.objectstore": "bluestore",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osd_id": "2",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.vdo": "0",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:                 "ceph.with_tpm": "0"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             },
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "type": "block",
Feb 28 10:17:02 compute-0 quirky_carver[318119]:             "vg_name": "ceph_vg2"
Feb 28 10:17:02 compute-0 quirky_carver[318119]:         }
Feb 28 10:17:02 compute-0 quirky_carver[318119]:     ]
Feb 28 10:17:02 compute-0 quirky_carver[318119]: }
Feb 28 10:17:02 compute-0 systemd[1]: libpod-391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67.scope: Deactivated successfully.
Feb 28 10:17:02 compute-0 podman[318129]: 2026-02-28 10:17:02.407928063 +0000 UTC m=+0.045242510 container died 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:17:02 compute-0 ceph-mon[76304]: pgmap v1538: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 785 KiB/s wr, 164 op/s
Feb 28 10:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1332f19d1be26bc2a6f12b9ae096e081d93adeafced18f8948a5564f6894fe4-merged.mount: Deactivated successfully.
Feb 28 10:17:02 compute-0 podman[318129]: 2026-02-28 10:17:02.457194157 +0000 UTC m=+0.094508574 container remove 391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:17:02 compute-0 systemd[1]: libpod-conmon-391668ab539bb223cc8831fbcaa652cf1b2e679d45dcae05c1b741e92b1d3d67.scope: Deactivated successfully.
Feb 28 10:17:02 compute-0 sudo[318027]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:02 compute-0 sudo[318163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:17:02 compute-0 sudo[318163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:17:02 compute-0 sudo[318163]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:02 compute-0 sudo[318188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:17:02 compute-0 sudo[318188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:17:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637854152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.971 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:02 compute-0 nova_compute[243452]: 2026-02-28 10:17:02.983 243456 DEBUG nova.compute.provider_tree [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:03 compute-0 nova_compute[243452]: 2026-02-28 10:17:03.001 243456 DEBUG nova.scheduler.client.report [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.010526699 +0000 UTC m=+0.057948063 container create 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:03 compute-0 nova_compute[243452]: 2026-02-28 10:17:03.047 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:03 compute-0 systemd[1]: Started libpod-conmon-9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193.scope.
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:02.980847183 +0000 UTC m=+0.028268597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.113284948 +0000 UTC m=+0.160706312 container init 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.126909646 +0000 UTC m=+0.174331010 container start 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.130930531 +0000 UTC m=+0.178351955 container attach 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:17:03 compute-0 confident_wozniak[318243]: 167 167
Feb 28 10:17:03 compute-0 systemd[1]: libpod-9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193.scope: Deactivated successfully.
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.137518178 +0000 UTC m=+0.184939512 container died 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41dd376fb1ace3f0ef0ce00211987c26320fbeb0a955650a1a7e65bab322361-merged.mount: Deactivated successfully.
Feb 28 10:17:03 compute-0 podman[318225]: 2026-02-28 10:17:03.180924826 +0000 UTC m=+0.228346190 container remove 9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wozniak, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:17:03 compute-0 systemd[1]: libpod-conmon-9f947c72d935cca20e862ed18c73bfa86ce830727d7af01ad83b9ef69f60e193.scope: Deactivated successfully.
Feb 28 10:17:03 compute-0 podman[318268]: 2026-02-28 10:17:03.374172444 +0000 UTC m=+0.055173024 container create 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:17:03 compute-0 nova_compute[243452]: 2026-02-28 10:17:03.406 243456 INFO nova.network.neutron [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating port 52f49649-6181-4c24-95b7-fc7227858c70 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 28 10:17:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2637854152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:03 compute-0 systemd[1]: Started libpod-conmon-0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536.scope.
Feb 28 10:17:03 compute-0 podman[318268]: 2026-02-28 10:17:03.343963953 +0000 UTC m=+0.024964583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:17:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32bfe8987c26afdb9713956469adb04a0ac2b910abaea6ea74db272e2ac31b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32bfe8987c26afdb9713956469adb04a0ac2b910abaea6ea74db272e2ac31b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32bfe8987c26afdb9713956469adb04a0ac2b910abaea6ea74db272e2ac31b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f32bfe8987c26afdb9713956469adb04a0ac2b910abaea6ea74db272e2ac31b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:03 compute-0 podman[318268]: 2026-02-28 10:17:03.484692763 +0000 UTC m=+0.165693383 container init 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:17:03 compute-0 podman[318268]: 2026-02-28 10:17:03.49545372 +0000 UTC m=+0.176454300 container start 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:17:03 compute-0 podman[318268]: 2026-02-28 10:17:03.500399011 +0000 UTC m=+0.181399611 container attach 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:03 compute-0 nova_compute[243452]: 2026-02-28 10:17:03.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:04 compute-0 lvm[318361]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:17:04 compute-0 lvm[318361]: VG ceph_vg0 finished
Feb 28 10:17:04 compute-0 lvm[318364]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:17:04 compute-0 lvm[318364]: VG ceph_vg1 finished
Feb 28 10:17:04 compute-0 lvm[318366]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:17:04 compute-0 lvm[318366]: VG ceph_vg2 finished
Feb 28 10:17:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 KiB/s wr, 125 op/s
Feb 28 10:17:04 compute-0 practical_germain[318285]: {}
Feb 28 10:17:04 compute-0 systemd[1]: libpod-0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536.scope: Deactivated successfully.
Feb 28 10:17:04 compute-0 systemd[1]: libpod-0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536.scope: Consumed 1.573s CPU time.
Feb 28 10:17:04 compute-0 podman[318268]: 2026-02-28 10:17:04.392092646 +0000 UTC m=+1.073093216 container died 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f32bfe8987c26afdb9713956469adb04a0ac2b910abaea6ea74db272e2ac31b1-merged.mount: Deactivated successfully.
Feb 28 10:17:04 compute-0 ceph-mon[76304]: pgmap v1539: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 KiB/s wr, 125 op/s
Feb 28 10:17:04 compute-0 podman[318268]: 2026-02-28 10:17:04.444996354 +0000 UTC m=+1.125996934 container remove 0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_germain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:04 compute-0 systemd[1]: libpod-conmon-0521045eb2d6c6f3285de0399e9d29d1aafe5c78729345377dcdb013fc93c536.scope: Deactivated successfully.
Feb 28 10:17:04 compute-0 sudo[318188]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:17:04 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:17:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:17:04 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:17:04 compute-0 sudo[318381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:17:04 compute-0 sudo[318381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:17:04 compute-0 sudo[318381]: pam_unix(sudo:session): session closed for user root
Feb 28 10:17:04 compute-0 nova_compute[243452]: 2026-02-28 10:17:04.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.392 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.392 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.393 243456 DEBUG nova.network.neutron [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:17:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.538 243456 DEBUG nova.compute.manager [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-changed-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.539 243456 DEBUG nova.compute.manager [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing instance network info cache due to event network-changed-52f49649-6181-4c24-95b7-fc7227858c70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:05 compute-0 nova_compute[243452]: 2026-02-28 10:17:05.539 243456 DEBUG oslo_concurrency.lockutils [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 20 op/s
Feb 28 10:17:06 compute-0 ceph-mon[76304]: pgmap v1540: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 20 op/s
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.326 243456 DEBUG nova.network.neutron [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.356 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.359 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.360 243456 INFO nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating image(s)
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.401 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.408 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.412 243456 DEBUG oslo_concurrency.lockutils [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.412 243456 DEBUG nova.network.neutron [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Refreshing network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.461 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.499 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.505 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "d828e78f7bfe1b9cc50eeef45e89307770512eb9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.506 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "d828e78f7bfe1b9cc50eeef45e89307770512eb9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.840 243456 DEBUG nova.virt.libvirt.imagebackend [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/22a70b75-075f-4875-bc53-299afbb39f44/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/22a70b75-075f-4875-bc53-299afbb39f44/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.902 243456 DEBUG nova.virt.libvirt.imagebackend [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/22a70b75-075f-4875-bc53-299afbb39f44/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.903 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] cloning images/22a70b75-075f-4875-bc53-299afbb39f44@snap to None/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.937 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273812.8985567, 86b24867-eb43-40c0-b73c-1416fd35d033 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.938 243456 INFO nova.compute.manager [-] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] VM Stopped (Lifecycle Event)
Feb 28 10:17:07 compute-0 nova_compute[243452]: 2026-02-28 10:17:07.959 243456 DEBUG nova.compute.manager [None req-6b1a6e7b-a34d-4b17-9bed-59b260bab17a - - - - - -] [instance: 86b24867-eb43-40c0-b73c-1416fd35d033] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.019 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "d828e78f7bfe1b9cc50eeef45e89307770512eb9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.179 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'migration_context' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.249 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] flattening vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:17:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 614 B/s wr, 3 op/s
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.586 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Image rbd:vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.588 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.588 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Ensure instance console log exists: /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.589 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.589 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.590 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.594 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start _get_guest_xml network_info=[{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:16:42Z,direct_url=<?>,disk_format='raw',id=22a70b75-075f-4875-bc53-299afbb39f44,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-76744621-shelved',owner='c0c4bc44c37f4a4f83c83b6105be3190',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:16:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.599 243456 WARNING nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.605 243456 DEBUG nova.virt.libvirt.host [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.607 243456 DEBUG nova.virt.libvirt.host [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.612 243456 DEBUG nova.virt.libvirt.host [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.613 243456 DEBUG nova.virt.libvirt.host [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.613 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.614 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:16:42Z,direct_url=<?>,disk_format='raw',id=22a70b75-075f-4875-bc53-299afbb39f44,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-76744621-shelved',owner='c0c4bc44c37f4a4f83c83b6105be3190',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:16:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.614 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.615 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.615 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.616 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.616 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.617 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.617 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.617 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.619 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.619 243456 DEBUG nova.virt.hardware [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.619 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.643 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.718 243456 DEBUG nova.network.neutron [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updated VIF entry in instance network info cache for port 52f49649-6181-4c24-95b7-fc7227858c70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.720 243456 DEBUG nova.network.neutron [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.740 243456 DEBUG oslo_concurrency.lockutils [req-9e058fba-9847-4819-901e-bae8ef614d94 req-e9f69023-17c8-40b7-82ec-87fe2658b1dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:08 compute-0 nova_compute[243452]: 2026-02-28 10:17:08.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116663950' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.219 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.254 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.259 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:09 compute-0 ceph-mon[76304]: pgmap v1541: 305 pgs: 305 active+clean; 232 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 614 B/s wr, 3 op/s
Feb 28 10:17:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2116663950' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.761 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.762 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.779 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:17:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131330731' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.799 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.801 243456 DEBUG nova.virt.libvirt.vif [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='22a70b75-075f-4875-bc53-299afbb39f44',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',im
age_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member',shelved_at='2026-02-28T10:16:50.935519',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22a70b75-075f-4875-bc53-299afbb39f44'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:01Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.802 243456 DEBUG nova.network.os_vif_util [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.803 243456 DEBUG nova.network.os_vif_util [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.805 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.831 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <uuid>4db5bcd7-8b41-4850-8c88-89ad757c8558</uuid>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <name>instance-00000051</name>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersNegativeTestJSON-server-76744621</nova:name>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:08</nova:creationTime>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:user uuid="7ef51521ffc947cbbce8323ec2b71753">tempest-ServersNegativeTestJSON-621636341-project-member</nova:user>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:project uuid="c0c4bc44c37f4a4f83c83b6105be3190">tempest-ServersNegativeTestJSON-621636341</nova:project>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="22a70b75-075f-4875-bc53-299afbb39f44"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <nova:port uuid="52f49649-6181-4c24-95b7-fc7227858c70">
Feb 28 10:17:09 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="serial">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="uuid">4db5bcd7-8b41-4850-8c88-89ad757c8558</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk">
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config">
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:09 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:22:e7:39"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <target dev="tap52f49649-61"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/console.log" append="off"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:09 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:09 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:09 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:09 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:09 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.832 243456 DEBUG nova.compute.manager [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Preparing to wait for external event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.832 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.833 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.833 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.834 243456 DEBUG nova.virt.libvirt.vif [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='22a70b75-075f-4875-bc53-299afbb39f44',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:15:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member',shelved_at='2026-02-28T10:16:50.935519',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22a70b75-075f-4875-bc53-299afbb39f44'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:01Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.834 243456 DEBUG nova.network.os_vif_util [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.835 243456 DEBUG nova.network.os_vif_util [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.836 243456 DEBUG os_vif [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.838 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.839 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f49649-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f49649-61, col_values=(('external_ids', {'iface-id': '52f49649-6181-4c24-95b7-fc7227858c70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:e7:39', 'vm-uuid': '4db5bcd7-8b41-4850-8c88-89ad757c8558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:09 compute-0 NetworkManager[49805]: <info>  [1772273829.8486] manager: (tap52f49649-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.860 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.862 243456 INFO os_vif [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.892 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.892 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.904 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.905 243456 INFO nova.compute.claims [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.940 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.941 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.941 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] No VIF found with MAC fa:16:3e:22:e7:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.942 243456 INFO nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Using config drive
Feb 28 10:17:09 compute-0 podman[318686]: 2026-02-28 10:17:09.974622123 +0000 UTC m=+0.067945958 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 10:17:09 compute-0 nova_compute[243452]: 2026-02-28 10:17:09.979 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:10 compute-0 podman[318685]: 2026-02-28 10:17:10.004452133 +0000 UTC m=+0.103293765 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.006 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.076 243456 DEBUG nova.objects.instance [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'keypairs' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.080 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 258 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 65 op/s
Feb 28 10:17:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/131330731' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.523 243456 INFO nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Creating config drive at /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.528 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5mf_6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252891673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.681 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.683 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5mf_6n" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.709 243456 DEBUG nova.storage.rbd_utils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] rbd image 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.714 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.749 243456 DEBUG nova.compute.provider_tree [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.778 243456 DEBUG nova.scheduler.client.report [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.801 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.801 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.857 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.858 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.865 243456 DEBUG oslo_concurrency.processutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config 4db5bcd7-8b41-4850-8c88-89ad757c8558_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.866 243456 INFO nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deleting local config drive /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558/disk.config because it was imported into RBD.
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.876 243456 INFO nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.897 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:17:10 compute-0 kernel: tap52f49649-61: entered promiscuous mode
Feb 28 10:17:10 compute-0 NetworkManager[49805]: <info>  [1772273830.9217] manager: (tap52f49649-61): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Feb 28 10:17:10 compute-0 ovn_controller[146846]: 2026-02-28T10:17:10Z|00824|binding|INFO|Claiming lport 52f49649-6181-4c24-95b7-fc7227858c70 for this chassis.
Feb 28 10:17:10 compute-0 ovn_controller[146846]: 2026-02-28T10:17:10Z|00825|binding|INFO|52f49649-6181-4c24-95b7-fc7227858c70: Claiming fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.936 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '7', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.938 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.939 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:10 compute-0 systemd-machined[209480]: New machine qemu-104-instance-00000051.
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c46228a2-97b5-4942-a415-58d65fbb6de9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.952 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce4b855a-c1 in ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:10 compute-0 systemd-udevd[318821]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.954 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce4b855a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.954 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7236521-6d1b-4b85-9e87-7be05c57049a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[60a29faa-9102-4e98-a011-00efc31dfdb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:10 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000051.
Feb 28 10:17:10 compute-0 ovn_controller[146846]: 2026-02-28T10:17:10Z|00826|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 ovn-installed in OVS
Feb 28 10:17:10 compute-0 ovn_controller[146846]: 2026-02-28T10:17:10Z|00827|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 up in Southbound
Feb 28 10:17:10 compute-0 nova_compute[243452]: 2026-02-28 10:17:10.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:10 compute-0 NetworkManager[49805]: <info>  [1772273830.9738] device (tap52f49649-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:10 compute-0 NetworkManager[49805]: <info>  [1772273830.9744] device (tap52f49649-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.971 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a31c3749-7baf-4d80-aacb-b90c58e34196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:10.989 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4facb9df-e6d2-48d3-b0df-c21a14373f26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.021 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7f58ff-5324-43ad-9b99-cc45d89623b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.027 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c58c70-548a-4fbd-83d6-7eadfdf6be17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.028 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:17:11 compute-0 NetworkManager[49805]: <info>  [1772273831.0291] manager: (tapce4b855a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.029 243456 INFO nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Creating image(s)
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.053 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fc821164-629f-4e18-b1d6-25941abec0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.056 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17935d6a-1b1a-4330-a69e-8f56347a6636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.063 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:11 compute-0 NetworkManager[49805]: <info>  [1772273831.0751] device (tapce4b855a-c0): carrier: link connected
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.080 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[43d1d497-065a-4dae-92b1-e31b4456318a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.091 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.099 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b07100bc-7b21-4eaf-bf12-77bcfeedd0f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533836, 'reachable_time': 30998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318887, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.120 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.121 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6675a7c7-01bb-447c-922e-3fdbf7472a64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:cf33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533836, 'tstamp': 533836}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318901, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.130 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.139 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5739a2-9773-477e-bb8e-9ca073cbc3be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533836, 'reachable_time': 30998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318909, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.171 243456 DEBUG nova.policy [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.175 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bee14ae-b535-4143-bd16-cc837895b550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.180 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.180 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.201 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.205 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.206 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.206 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.206 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.231 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d65f19e0-e571-497d-ab47-6794b104d80f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.236 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:11 compute-0 NetworkManager[49805]: <info>  [1772273831.2405] manager: (tapce4b855a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Feb 28 10:17:11 compute-0 kernel: tapce4b855a-c0: entered promiscuous mode
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.241 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5f8708bd-35d7-4952-ba18-0b6635872b86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.246 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:11 compute-0 ovn_controller[146846]: 2026-02-28T10:17:11Z|00828|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.250 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.253 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe51dc96-e91c-4e58-b3c9-7c3e6252236d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.254 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:11.255 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'env', 'PROCESS_TAG=haproxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.317 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.317 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.326 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.326 243456 INFO nova.compute.claims [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:17:11 compute-0 ceph-mon[76304]: pgmap v1542: 305 pgs: 305 active+clean; 258 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 65 op/s
Feb 28 10:17:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4252891673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.346869) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831346942, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 252, "total_data_size": 1530430, "memory_usage": 1561008, "flush_reason": "Manual Compaction"}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831356111, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1491304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31888, "largest_seqno": 33028, "table_properties": {"data_size": 1485894, "index_size": 2742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12502, "raw_average_key_size": 20, "raw_value_size": 1474653, "raw_average_value_size": 2382, "num_data_blocks": 122, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273744, "oldest_key_time": 1772273744, "file_creation_time": 1772273831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9262 microseconds, and 3368 cpu microseconds.
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.356148) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1491304 bytes OK
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.356168) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.357869) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.357883) EVENT_LOG_v1 {"time_micros": 1772273831357879, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.357907) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1525071, prev total WAL file size 1525071, number of live WAL files 2.
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.358475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1456KB)], [68(8303KB)]
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831358543, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 9993755, "oldest_snapshot_seqno": -1}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5903 keys, 8262429 bytes, temperature: kUnknown
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831440220, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8262429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8223213, "index_size": 23362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 148199, "raw_average_key_size": 25, "raw_value_size": 8117631, "raw_average_value_size": 1375, "num_data_blocks": 946, "num_entries": 5903, "num_filter_entries": 5903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.440436) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8262429 bytes
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.454294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.3 rd, 101.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.1 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.5) OK, records in: 6426, records dropped: 523 output_compression: NoCompression
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.454328) EVENT_LOG_v1 {"time_micros": 1772273831454315, "job": 38, "event": "compaction_finished", "compaction_time_micros": 81732, "compaction_time_cpu_micros": 19860, "output_level": 6, "num_output_files": 1, "total_output_size": 8262429, "num_input_records": 6426, "num_output_records": 5903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831454581, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273831455242, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.358382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.455432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.455440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.455441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.455443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:11.455444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.477 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:11 compute-0 podman[318998]: 2026-02-28 10:17:11.688639276 +0000 UTC m=+0.112303892 container create f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:17:11 compute-0 podman[318998]: 2026-02-28 10:17:11.595554903 +0000 UTC m=+0.019219499 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.699 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5f8708bd-35d7-4952-ba18-0b6635872b86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:11 compute-0 systemd[1]: Started libpod-conmon-f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5.scope.
Feb 28 10:17:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9926ac552b8e3224f50fee270221bb61b3fe908d6867af415dc79b17afc3be43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.780 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:17:11 compute-0 podman[318998]: 2026-02-28 10:17:11.785578609 +0000 UTC m=+0.209243245 container init f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:17:11 compute-0 podman[318998]: 2026-02-28 10:17:11.791002553 +0000 UTC m=+0.214667169 container start f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:17:11 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [NOTICE]   (319103) : New worker (319114) forked
Feb 28 10:17:11 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [NOTICE]   (319103) : Loading success.
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.897 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273831.8777325, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.897 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Started (Lifecycle Event)
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.904 243456 DEBUG nova.objects.instance [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.933 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.933 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.934 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Ensure instance console log exists: /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.934 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.935 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.935 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.938 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273831.8779445, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.939 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Paused (Lifecycle Event)
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.954 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.958 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:11 compute-0 nova_compute[243452]: 2026-02-28 10:17:11.974 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764731747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.080 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.087 243456 DEBUG nova.compute.provider_tree [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.103 243456 DEBUG nova.scheduler.client.report [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.124 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.125 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.182 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.182 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.203 243456 INFO nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.230 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:17:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 293 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 69 op/s
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.344 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.346 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.347 243456 INFO nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Creating image(s)
Feb 28 10:17:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1764731747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.380 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.411 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.441 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.446 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.492 243456 DEBUG nova.policy [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b9006c7543a244aa948b78020335223a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6952e00efd364e1491714983e2425e93', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.540 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.541 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.542 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.542 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.570 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.575 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 690896df-6307-469c-9685-325a61a62b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.614 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Successfully created port: 9e45b488-45d1-4293-a1a6-7b01b726b58b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.884 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 690896df-6307-469c-9685-325a61a62b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:12 compute-0 nova_compute[243452]: 2026-02-28 10:17:12.967 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] resizing rbd image 690896df-6307-469c-9685-325a61a62b88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.069 243456 DEBUG nova.objects.instance [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.085 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.086 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Ensure instance console log exists: /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.087 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.087 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.088 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:13 compute-0 ceph-mon[76304]: pgmap v1543: 305 pgs: 305 active+clean; 293 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 69 op/s
Feb 28 10:17:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.712 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Successfully created port: ed25d1f8-c3a0-43d4-b57e-12b647a48b3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:17:13 compute-0 nova_compute[243452]: 2026-02-28 10:17:13.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 327 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 98 op/s
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.322 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Successfully updated port: 9e45b488-45d1-4293-a1a6-7b01b726b58b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.342 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.473 243456 DEBUG nova.compute.manager [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.474 243456 DEBUG nova.compute.manager [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing instance network info cache due to event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.474 243456 DEBUG oslo_concurrency.lockutils [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.583 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:17:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:14.813 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:14.815 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:17:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:14.817 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:14 compute-0 nova_compute[243452]: 2026-02-28 10:17:14.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:15 compute-0 ceph-mon[76304]: pgmap v1544: 305 pgs: 305 active+clean; 327 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 98 op/s
Feb 28 10:17:15 compute-0 nova_compute[243452]: 2026-02-28 10:17:15.831 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Successfully updated port: ed25d1f8-c3a0-43d4-b57e-12b647a48b3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:17:15 compute-0 nova_compute[243452]: 2026-02-28 10:17:15.853 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:15 compute-0 nova_compute[243452]: 2026-02-28 10:17:15.854 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:15 compute-0 nova_compute[243452]: 2026-02-28 10:17:15.854 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.143 243456 DEBUG nova.network.neutron [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.165 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.166 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance network_info: |[{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.166 243456 DEBUG oslo_concurrency.lockutils [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.166 243456 DEBUG nova.network.neutron [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing network info cache for port 9e45b488-45d1-4293-a1a6-7b01b726b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.169 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start _get_guest_xml network_info=[{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.174 243456 WARNING nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.186 243456 DEBUG nova.virt.libvirt.host [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.187 243456 DEBUG nova.virt.libvirt.host [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.199 243456 DEBUG nova.virt.libvirt.host [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.199 243456 DEBUG nova.virt.libvirt.host [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.200 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.201 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.202 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.202 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.203 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.203 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.203 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.204 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.204 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.205 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.205 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.206 243456 DEBUG nova.virt.hardware [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.211 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 388 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.7 MiB/s wr, 137 op/s
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.329 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.705 243456 DEBUG nova.compute.manager [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-changed-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.705 243456 DEBUG nova.compute.manager [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Refreshing instance network info cache due to event network-changed-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.706 243456 DEBUG oslo_concurrency.lockutils [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2541181036' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.783 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.821 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:16 compute-0 nova_compute[243452]: 2026-02-28 10:17:16.827 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.236 243456 DEBUG nova.network.neutron [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updated VIF entry in instance network info cache for port 9e45b488-45d1-4293-a1a6-7b01b726b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.237 243456 DEBUG nova.network.neutron [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.331 243456 DEBUG oslo_concurrency.lockutils [req-baebfd12-03d9-48ad-aff7-78ed827af2ec req-60515a5d-f929-4915-8a08-b0571237f52c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.343 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:17 compute-0 ceph-mon[76304]: pgmap v1545: 305 pgs: 305 active+clean; 388 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.7 MiB/s wr, 137 op/s
Feb 28 10:17:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2541181036' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485312820' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.414 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.415 243456 DEBUG nova.virt.libvirt.vif [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:10Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.416 243456 DEBUG nova.network.os_vif_util [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.417 243456 DEBUG nova.network.os_vif_util [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.418 243456 DEBUG nova.objects.instance [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.434 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <uuid>5f8708bd-35d7-4952-ba18-0b6635872b86</uuid>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <name>instance-00000059</name>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2076056969</nova:name>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:16</nova:creationTime>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <nova:port uuid="9e45b488-45d1-4293-a1a6-7b01b726b58b">
Feb 28 10:17:17 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="serial">5f8708bd-35d7-4952-ba18-0b6635872b86</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="uuid">5f8708bd-35d7-4952-ba18-0b6635872b86</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5f8708bd-35d7-4952-ba18-0b6635872b86_disk">
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config">
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:17 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bb:83:cd"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <target dev="tap9e45b488-45"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/console.log" append="off"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:17 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:17 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:17 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:17 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:17 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.435 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Preparing to wait for external event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.436 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.436 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.436 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.437 243456 DEBUG nova.virt.libvirt.vif [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:10Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.437 243456 DEBUG nova.network.os_vif_util [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.438 243456 DEBUG nova.network.os_vif_util [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.438 243456 DEBUG os_vif [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.440 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.440 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.446 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e45b488-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.446 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e45b488-45, col_values=(('external_ids', {'iface-id': '9e45b488-45d1-4293-a1a6-7b01b726b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:83:cd', 'vm-uuid': '5f8708bd-35d7-4952-ba18-0b6635872b86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:17 compute-0 NetworkManager[49805]: <info>  [1772273837.4499] manager: (tap9e45b488-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.458 243456 INFO os_vif [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45')
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.520 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.521 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.521 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:bb:83:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.522 243456 INFO nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Using config drive
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.547 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.856 243456 DEBUG nova.network.neutron [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.876 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.877 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance network_info: |[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.879 243456 DEBUG oslo_concurrency.lockutils [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.879 243456 DEBUG nova.network.neutron [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Refreshing network info cache for port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.886 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.894 243456 WARNING nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.900 243456 DEBUG nova.virt.libvirt.host [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.902 243456 DEBUG nova.virt.libvirt.host [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.913 243456 DEBUG nova.virt.libvirt.host [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.913 243456 DEBUG nova.virt.libvirt.host [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.914 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.914 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.915 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.916 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.916 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.917 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.917 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.918 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.918 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.918 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.919 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.919 243456 DEBUG nova.virt.hardware [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:17 compute-0 nova_compute[243452]: 2026-02-28 10:17:17.924 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 404 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 140 op/s
Feb 28 10:17:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2485312820' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035778658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.489 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.516 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.521 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.810 243456 INFO nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Creating config drive at /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.816 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp41d5zkzy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.959 243456 DEBUG nova.compute.manager [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.960 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.961 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.961 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.962 243456 DEBUG nova.compute.manager [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Processing event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.962 243456 DEBUG nova.compute.manager [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.962 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.962 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.962 243456 DEBUG oslo_concurrency.lockutils [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.963 243456 DEBUG nova.compute.manager [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.963 243456 WARNING nova.compute.manager [req-1aee738e-2602-4a12-bd58-df010805ff68 req-9a8ac4f0-2868-49b3-98be-1bcae9f18976 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state shelved_offloaded and task_state spawning.
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.964 243456 DEBUG nova.compute.manager [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.966 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp41d5zkzy" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.994 243456 DEBUG nova.storage.rbd_utils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image 5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:18 compute-0 nova_compute[243452]: 2026-02-28 10:17:18.998 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config 5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707770167' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.031 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273838.9691703, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.032 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Resumed (Lifecycle Event)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.035 243456 DEBUG nova.virt.libvirt.driver [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.037 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.038 243456 DEBUG nova.virt.libvirt.vif [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.039 243456 DEBUG nova.network.os_vif_util [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.039 243456 DEBUG nova.network.os_vif_util [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.041 243456 DEBUG nova.objects.instance [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.048 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance spawned successfully.
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.055 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.060 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <name>instance-0000005a</name>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:17</nova:creationTime>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 10:17:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f6:05:21"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <target dev="taped25d1f8-c3"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.067 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Preparing to wait for external event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.068 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.068 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.068 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.069 243456 DEBUG nova.virt.libvirt.vif [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.070 243456 DEBUG nova.network.os_vif_util [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.070 243456 DEBUG nova.network.os_vif_util [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.071 243456 DEBUG os_vif [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.073 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.073 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.077 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.079 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.080 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.0834] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.085 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.089 243456 INFO os_vif [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.094 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.138 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.139 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.139 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No VIF found with MAC fa:16:3e:f6:05:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.140 243456 INFO nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Using config drive
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.162 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.168 243456 DEBUG oslo_concurrency.processutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config 5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.169 243456 INFO nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Deleting local config drive /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/disk.config because it was imported into RBD.
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.2028] manager: (tap9e45b488-45): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Feb 28 10:17:19 compute-0 kernel: tap9e45b488-45: entered promiscuous mode
Feb 28 10:17:19 compute-0 ovn_controller[146846]: 2026-02-28T10:17:19Z|00829|binding|INFO|Claiming lport 9e45b488-45d1-4293-a1a6-7b01b726b58b for this chassis.
Feb 28 10:17:19 compute-0 ovn_controller[146846]: 2026-02-28T10:17:19Z|00830|binding|INFO|9e45b488-45d1-4293-a1a6-7b01b726b58b: Claiming fa:16:3e:bb:83:cd 10.100.0.12
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.220 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:83:cd 10.100.0.12'], port_security=['fa:16:3e:bb:83:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f8708bd-35d7-4952-ba18-0b6635872b86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6208c787-2d1b-4dd1-8098-37be7ada4419', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3b86456-762e-43a0-947f-ce5fc38977cf, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9e45b488-45d1-4293-a1a6-7b01b726b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.221 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9e45b488-45d1-4293-a1a6-7b01b726b58b in datapath 88dbe3c2-5a58-4a5e-93e1-51b691c2901f bound to our chassis
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.222 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:19 compute-0 systemd-udevd[319534]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0ae795-aeff-40b0-bef5-7c66e25ad936]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.236 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88dbe3c2-51 in ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:19 compute-0 systemd-machined[209480]: New machine qemu-105-instance-00000059.
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.2405] device (tap9e45b488-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.2418] device (tap9e45b488-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.239 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88dbe3c2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.239 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[656354e0-33e6-432a-950c-38ef9699a40f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc86c37d-52e3-447d-9f11-45df7d896aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000059.
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_controller[146846]: 2026-02-28T10:17:19Z|00831|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b ovn-installed in OVS
Feb 28 10:17:19 compute-0 ovn_controller[146846]: 2026-02-28T10:17:19Z|00832|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b up in Southbound
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.254 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[76a5a594-35e3-4090-a711-4fa72678d482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.269 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a740ba6-08d8-46e7-b4f9-6cc117c54d11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.290 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd4dcad-6e74-41df-b86a-cbe8dd04ea6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.2983] manager: (tap88dbe3c2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/358)
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6deacb55-574f-4c86-b86d-1b1527d0179e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.325 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97249958-0a9c-4696-a039-6172b40e0d98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.329 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c07761-91e9-42a1-b752-c280d745e4c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.3545] device (tap88dbe3c2-50): carrier: link connected
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[19592af5-3564-43af-bf07-fd306520fbc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.377 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[29f55ee7-d229-49c7-a594-247c71aa6935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88dbe3c2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:2d:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534664, 'reachable_time': 25310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319569, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb 28 10:17:19 compute-0 ceph-mon[76304]: pgmap v1546: 305 pgs: 305 active+clean; 404 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 140 op/s
Feb 28 10:17:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3035778658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3707770167' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[53c17057-c884-4a8f-941d-4a399b760cb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:2d79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534664, 'tstamp': 534664}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319570, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb 28 10:17:19 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[adf99f23-a401-4026-ac9a-5d4a80e72e94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88dbe3c2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:2d:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534664, 'reachable_time': 25310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319571, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f95b2014-640b-48cb-92bf-ff3cc0fa51f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d122c1a-e72c-42a9-b78f-89b55f99bcb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.510 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88dbe3c2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.510 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.511 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88dbe3c2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 NetworkManager[49805]: <info>  [1772273839.5134] manager: (tap88dbe3c2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.513 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 kernel: tap88dbe3c2-50: entered promiscuous mode
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.520 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88dbe3c2-50, col_values=(('external_ids', {'iface-id': 'b833c568-d94d-4da6-b765-0f13045f9c5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.522 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_controller[146846]: 2026-02-28T10:17:19Z|00833|binding|INFO|Releasing lport b833c568-d94d-4da6-b765-0f13045f9c5d from this chassis (sb_readonly=0)
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.531 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99cdf6d2-6d03-4b59-9197-c72418316af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.535 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:19.537 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'env', 'PROCESS_TAG=haproxy-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.640 243456 DEBUG nova.compute.manager [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.649 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273839.649761, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.650 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Started (Lifecycle Event)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.654 243456 INFO nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Creating config drive at /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.658 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjaylorwb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.694 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.706 243456 DEBUG oslo_concurrency.lockutils [None req-31d9c36f-1a64-4462-8713-2ee5f9c5eeaf 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 17.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.710 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273839.6498327, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.711 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Paused (Lifecycle Event)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.739 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.743 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.759 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.799 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjaylorwb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.828 243456 DEBUG nova.storage.rbd_utils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 690896df-6307-469c-9685-325a61a62b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.834 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config 690896df-6307-469c-9685-325a61a62b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:19 compute-0 podman[319653]: 2026-02-28 10:17:19.870160909 +0000 UTC m=+0.046883697 container create c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:17:19 compute-0 systemd[1]: Started libpod-conmon-c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e.scope.
Feb 28 10:17:19 compute-0 podman[319653]: 2026-02-28 10:17:19.846626239 +0000 UTC m=+0.023349067 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c4837cf5f0863526aa86e5d0fd7a7fd1643044229c0db8b2475108be7597f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:19 compute-0 podman[319653]: 2026-02-28 10:17:19.95824402 +0000 UTC m=+0.134966828 container init c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.959 243456 DEBUG nova.network.neutron [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updated VIF entry in instance network info cache for port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.961 243456 DEBUG nova.network.neutron [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:19 compute-0 podman[319653]: 2026-02-28 10:17:19.964327013 +0000 UTC m=+0.141049811 container start c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:17:19 compute-0 nova_compute[243452]: 2026-02-28 10:17:19.985 243456 DEBUG oslo_concurrency.lockutils [req-5e7d922f-ef0b-43e1-8356-095781ccead3 req-c4842e12-311f-49d9-8b50-8e7ad77ca1c9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:19 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [NOTICE]   (319709) : New worker (319711) forked
Feb 28 10:17:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:17:19 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [NOTICE]   (319709) : Loading success.
Feb 28 10:17:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.031 243456 DEBUG oslo_concurrency.processutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config 690896df-6307-469c-9685-325a61a62b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.032 243456 INFO nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Deleting local config drive /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/disk.config because it was imported into RBD.
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.0876] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Feb 28 10:17:20 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:17:20 compute-0 ovn_controller[146846]: 2026-02-28T10:17:20Z|00834|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:17:20 compute-0 ovn_controller[146846]: 2026-02-28T10:17:20Z|00835|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.1002] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.1009] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.104 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.105 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.107 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.117 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f03b9c93-dd56-4222-b69b-c75379162547]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.118 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.119 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.120 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a190286e-523a-4d5f-aee0-9c9567e5b3b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.120 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[490bcdd9-902b-4e11-8d94-3afea80ef283]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 systemd-machined[209480]: New machine qemu-106-instance-0000005a.
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.132 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8249f5e0-9ea9-430d-b660-aca99ab61497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_controller[146846]: 2026-02-28T10:17:20Z|00836|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:17:20 compute-0 ovn_controller[146846]: 2026-02-28T10:17:20Z|00837|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-0000005a.
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8634edb6-f0b8-4773-ac81-d12a97dd72e3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.185 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34d389ee-d9d1-4f0a-b97c-982c53640290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.190 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9d8a8a-0b3d-474d-8712-9f5bec811524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.1918] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/361)
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.214 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[39c390a5-95df-4722-9dfa-549227fea15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.217 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[968bc4e9-905d-43cf-b035-e8eb59754f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.2361] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.238 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e102a313-029f-4bad-8599-20112a2ecfbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a87115b8-6348-4d5f-ba3c-5359fb420216]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534752, 'reachable_time': 25217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319751, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 382 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.6 MiB/s wr, 126 op/s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.272 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbda62ae-d034-49b7-bbc2-098d241ad635]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534752, 'tstamp': 534752}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319752, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.295 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73099150-2be5-46cb-beb8-638c85dc8e57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534752, 'reachable_time': 25217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319753, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.313 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c78272-d69e-4472-9847-52631db8af7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.379 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38a53ca1-27c5-4200-b71e-2766b0bb2fd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.382 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.382 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.383 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:17:20 compute-0 NetworkManager[49805]: <info>  [1772273840.3873] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.394 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 ovn_controller[146846]: 2026-02-28T10:17:20Z|00838|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:20 compute-0 ceph-mon[76304]: osdmap e242: 3 total, 3 up, 3 in
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.410 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af919f75-928c-4080-a4ab-89750ec45244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.412 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:20.412 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.598 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273840.5976427, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.599 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.619 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.624 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273840.5980117, 690896df-6307-469c-9685-325a61a62b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.624 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Paused (Lifecycle Event)
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.641 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.645 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:20 compute-0 nova_compute[243452]: 2026-02-28 10:17:20.669 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:20 compute-0 podman[319827]: 2026-02-28 10:17:20.778263743 +0000 UTC m=+0.069416070 container create 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:17:20 compute-0 systemd[1]: Started libpod-conmon-110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618.scope.
Feb 28 10:17:20 compute-0 podman[319827]: 2026-02-28 10:17:20.742111422 +0000 UTC m=+0.033263809 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2e81568415edbf6eba3d94c80a2db56eb9e80cd7dae0cd9b37034546a63bb1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:20 compute-0 podman[319827]: 2026-02-28 10:17:20.879464677 +0000 UTC m=+0.170617004 container init 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:17:20 compute-0 podman[319827]: 2026-02-28 10:17:20.884608744 +0000 UTC m=+0.175761041 container start 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:20 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [NOTICE]   (319846) : New worker (319848) forked
Feb 28 10:17:20 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [NOTICE]   (319846) : Loading success.
Feb 28 10:17:21 compute-0 nova_compute[243452]: 2026-02-28 10:17:21.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:21 compute-0 ceph-mon[76304]: pgmap v1548: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 382 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.6 MiB/s wr, 126 op/s
Feb 28 10:17:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 342 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 166 op/s
Feb 28 10:17:22 compute-0 ceph-mon[76304]: pgmap v1549: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 342 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 166 op/s
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.389 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.389 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.395 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.395 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.405 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.489 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.490 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.498 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.498 243456 INFO nova.compute.claims [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:17:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.601674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843601729, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 378, "num_deletes": 255, "total_data_size": 204817, "memory_usage": 211992, "flush_reason": "Manual Compaction"}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843605366, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 203107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33029, "largest_seqno": 33406, "table_properties": {"data_size": 200801, "index_size": 406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5608, "raw_average_key_size": 17, "raw_value_size": 196193, "raw_average_value_size": 628, "num_data_blocks": 18, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273832, "oldest_key_time": 1772273832, "file_creation_time": 1772273843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3749 microseconds, and 1665 cpu microseconds.
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.605423) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 203107 bytes OK
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.605444) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.606980) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.607000) EVENT_LOG_v1 {"time_micros": 1772273843606994, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.607019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 202336, prev total WAL file size 202336, number of live WAL files 2.
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.607469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303031' seq:72057594037927935, type:22 .. '6C6F676D0031323532' seq:0, type:0; will stop at (end)
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(198KB)], [71(8068KB)]
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843607524, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 8465536, "oldest_snapshot_seqno": -1}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5694 keys, 8348564 bytes, temperature: kUnknown
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843656012, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8348564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8310037, "index_size": 23185, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 144797, "raw_average_key_size": 25, "raw_value_size": 8207404, "raw_average_value_size": 1441, "num_data_blocks": 935, "num_entries": 5694, "num_filter_entries": 5694, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772273843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.656273) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8348564 bytes
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.657755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.2 rd, 171.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(82.8) write-amplify(41.1) OK, records in: 6215, records dropped: 521 output_compression: NoCompression
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.657775) EVENT_LOG_v1 {"time_micros": 1772273843657765, "job": 40, "event": "compaction_finished", "compaction_time_micros": 48600, "compaction_time_cpu_micros": 13374, "output_level": 6, "num_output_files": 1, "total_output_size": 8348564, "num_input_records": 6215, "num_output_records": 5694, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843657906, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772273843658568, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.607375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.658621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.658627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.658629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.658630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:17:23.658632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.680 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:23 compute-0 nova_compute[243452]: 2026-02-28 10:17:23.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258014769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 326 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.265 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.271 243456 DEBUG nova.compute.provider_tree [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.289 243456 DEBUG nova.scheduler.client.report [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.325 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.326 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.359 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.377 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.378 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.383 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.384 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.415 243456 INFO nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.446 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.520 243456 DEBUG nova.compute.manager [req-f36016bb-554c-4870-8873-ad081954d3c7 req-81904699-f0bb-49de-9a9e-c820c7f5fa70 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.521 243456 DEBUG oslo_concurrency.lockutils [req-f36016bb-554c-4870-8873-ad081954d3c7 req-81904699-f0bb-49de-9a9e-c820c7f5fa70 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.521 243456 DEBUG oslo_concurrency.lockutils [req-f36016bb-554c-4870-8873-ad081954d3c7 req-81904699-f0bb-49de-9a9e-c820c7f5fa70 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.522 243456 DEBUG oslo_concurrency.lockutils [req-f36016bb-554c-4870-8873-ad081954d3c7 req-81904699-f0bb-49de-9a9e-c820c7f5fa70 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.522 243456 DEBUG nova.compute.manager [req-f36016bb-554c-4870-8873-ad081954d3c7 req-81904699-f0bb-49de-9a9e-c820c7f5fa70 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Processing event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.523 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.529 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.531 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273844.529476, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.531 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Resumed (Lifecycle Event)
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.541 243456 INFO nova.virt.libvirt.driver [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance spawned successfully.
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.542 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.551 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.552 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.553 243456 INFO nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Creating image(s)
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.575 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.603 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2258014769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:24 compute-0 ceph-mon[76304]: pgmap v1550: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 326 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.629 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.633 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.659 243456 DEBUG nova.policy [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ae46addcec3432c94c933fdd18dfa9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a2bd9695a47497e8f68da82a992d545', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.664 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.677 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.682 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.683 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.684 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.684 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.685 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.685 243456 DEBUG nova.virt.libvirt.driver [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.693 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.693 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.694 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.694 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.719 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.724 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cc735034-2b3b-4f09-858d-124f2b3e71bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.767 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.769 243456 INFO nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Took 13.74 seconds to spawn the instance on the hypervisor.
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.770 243456 DEBUG nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.804 243456 DEBUG nova.objects.instance [None req-d438540b-f01a-4dd4-80b6-6b14ce840c05 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.847 243456 INFO nova.compute.manager [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Took 14.99 seconds to build instance.
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.852 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273844.848572, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.852 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Paused (Lifecycle Event)
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.874 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.875 243456 DEBUG oslo_concurrency.lockutils [None req-2ec3e5de-235d-48a0-973e-b7584efaa778 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.882 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:24 compute-0 nova_compute[243452]: 2026-02-28 10:17:24.903 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:17:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/912061868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.002 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.080 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.081 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.088 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.088 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.092 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.092 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.188 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Successfully created port: a51a44ab-ad43-4c65-94c0-22c6bfcc9039 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.296 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.297 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3476MB free_disk=59.90022315457463GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.297 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.298 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:25 compute-0 kernel: tap52f49649-61 (unregistering): left promiscuous mode
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.363 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cc735034-2b3b-4f09-858d-124f2b3e71bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:25 compute-0 NetworkManager[49805]: <info>  [1772273845.3725] device (tap52f49649-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:17:25 compute-0 ovn_controller[146846]: 2026-02-28T10:17:25Z|00839|binding|INFO|Releasing lport 52f49649-6181-4c24-95b7-fc7227858c70 from this chassis (sb_readonly=0)
Feb 28 10:17:25 compute-0 ovn_controller[146846]: 2026-02-28T10:17:25Z|00840|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 down in Southbound
Feb 28 10:17:25 compute-0 ovn_controller[146846]: 2026-02-28T10:17:25Z|00841|binding|INFO|Removing iface tap52f49649-61 ovn-installed in OVS
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.390 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '9', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.393 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 unbound from our chassis
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.395 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d60b4df2-17b2-49bc-ab8a-788e4368a7ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.396 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace which is not needed anymore
Feb 28 10:17:25 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000051.scope: Deactivated successfully.
Feb 28 10:17:25 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000051.scope: Consumed 6.881s CPU time.
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:25 compute-0 systemd-machined[209480]: Machine qemu-104-instance-00000051 terminated.
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.445 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4db5bcd7-8b41-4850-8c88-89ad757c8558 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.445 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 5f8708bd-35d7-4952-ba18-0b6635872b86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.445 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 690896df-6307-469c-9685-325a61a62b88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.446 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance cc735034-2b3b-4f09-858d-124f2b3e71bd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.446 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.446 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.454 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] resizing rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:17:25 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [NOTICE]   (319103) : haproxy version is 2.8.14-c23fe91
Feb 28 10:17:25 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [NOTICE]   (319103) : path to executable is /usr/sbin/haproxy
Feb 28 10:17:25 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [WARNING]  (319103) : Exiting Master process...
Feb 28 10:17:25 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [ALERT]    (319103) : Current worker (319114) exited with code 143 (Terminated)
Feb 28 10:17:25 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[319077]: [WARNING]  (319103) : All workers exited. Exiting... (0)
Feb 28 10:17:25 compute-0 systemd[1]: libpod-f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5.scope: Deactivated successfully.
Feb 28 10:17:25 compute-0 podman[320074]: 2026-02-28 10:17:25.530314607 +0000 UTC m=+0.045435976 container died f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.530 243456 DEBUG nova.objects.instance [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lazy-loading 'migration_context' on Instance uuid cc735034-2b3b-4f09-858d-124f2b3e71bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.555 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.555 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Ensure instance console log exists: /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.555 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.555 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.556 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.562 243456 DEBUG nova.compute.manager [None req-d438540b-f01a-4dd4-80b6-6b14ce840c05 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5-userdata-shm.mount: Deactivated successfully.
Feb 28 10:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9926ac552b8e3224f50fee270221bb61b3fe908d6867af415dc79b17afc3be43-merged.mount: Deactivated successfully.
Feb 28 10:17:25 compute-0 podman[320074]: 2026-02-28 10:17:25.579425877 +0000 UTC m=+0.094547256 container cleanup f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:17:25 compute-0 systemd[1]: libpod-conmon-f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5.scope: Deactivated successfully.
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.600 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/912061868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:25 compute-0 podman[320129]: 2026-02-28 10:17:25.650631336 +0000 UTC m=+0.049802850 container remove f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.655 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[274aad8e-bfdc-43ce-93bd-971c9519f414]: (4, ('Sat Feb 28 10:17:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5)\nf99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5\nSat Feb 28 10:17:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (f99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5)\nf99bee114c681eeae69eb94dc8bf0df9be3c392024774333789b81c6bbfa95e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c35d4c5-da06-4e75-baaa-b6ca6412571a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.657 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:25 compute-0 kernel: tapce4b855a-c0: left promiscuous mode
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.677 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c453e05d-beea-4d8b-8583-7a63df0e4d94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.688 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b85463ba-b83e-4659-a734-85149eb9e211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.690 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f09fcfad-9ab5-43d0-a6d4-ffe1b3d1a763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.706 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc08512-500b-4c56-8dd2-207503226c3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533830, 'reachable_time': 30800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320149, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dce4b855a\x2dcb9e\x2d4dad\x2dbfe0\x2dddfe326a1505.mount: Deactivated successfully.
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.709 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:17:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:25.710 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[96fc16eb-a2b3-4b1b-a9d2-ded0fb26aa39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.961 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Successfully updated port: a51a44ab-ad43-4c65-94c0-22c6bfcc9039 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.993 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.993 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquired lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:25 compute-0 nova_compute[243452]: 2026-02-28 10:17:25.993 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.040 243456 DEBUG nova.compute.manager [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-changed-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.040 243456 DEBUG nova.compute.manager [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Refreshing instance network info cache due to event network-changed-a51a44ab-ad43-4c65-94c0-22c6bfcc9039. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.041 243456 DEBUG oslo_concurrency.lockutils [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471616728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.135 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.140 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.148 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.167 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.196 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.197 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 333 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 153 op/s
Feb 28 10:17:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/471616728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:26 compute-0 ceph-mon[76304]: pgmap v1551: 305 pgs: 305 active+clean; 333 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 153 op/s
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.643 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.643 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.644 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.645 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.645 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.646 243456 WARNING nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state active and task_state None.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.646 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.647 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.647 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.648 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.648 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Processing event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.649 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.649 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.650 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.650 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.651 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.651 243456 WARNING nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state building and task_state spawning.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.652 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.652 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.653 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.653 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.654 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.654 243456 WARNING nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state suspended and task_state None.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.655 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.655 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.656 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.656 243456 DEBUG oslo_concurrency.lockutils [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.657 243456 DEBUG nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.657 243456 WARNING nova.compute.manager [req-de806deb-10ec-4545-aa7b-e2378360a94d req-124710ef-8d09-4b1d-bb17-6274314bd521 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state suspended and task_state None.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.664 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.681 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.682 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273846.6812077, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.683 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.692 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance spawned successfully.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.693 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.712 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.717 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.720 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.721 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.721 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.722 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.722 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.722 243456 DEBUG nova.virt.libvirt.driver [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.762 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.806 243456 INFO nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 14.46 seconds to spawn the instance on the hypervisor.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.807 243456 DEBUG nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.837 243456 DEBUG nova.network.neutron [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Updating instance_info_cache with network_info: [{"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.867 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Releasing lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.868 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Instance network_info: |[{"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.869 243456 DEBUG oslo_concurrency.lockutils [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.869 243456 DEBUG nova.network.neutron [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Refreshing network info cache for port a51a44ab-ad43-4c65-94c0-22c6bfcc9039 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.871 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Start _get_guest_xml network_info=[{"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.873 243456 INFO nova.compute.manager [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 15.59 seconds to build instance.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.876 243456 WARNING nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.881 243456 DEBUG nova.virt.libvirt.host [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.881 243456 DEBUG nova.virt.libvirt.host [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.885 243456 DEBUG nova.virt.libvirt.host [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.886 243456 DEBUG nova.virt.libvirt.host [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.886 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.886 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.887 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.887 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.887 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.887 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.888 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.888 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.888 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.888 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.889 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.889 243456 DEBUG nova.virt.hardware [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.891 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:26 compute-0 nova_compute[243452]: 2026-02-28 10:17:26.920 243456 DEBUG oslo_concurrency.lockutils [None req-e843f251-a60b-4305-8c42-cb015963e7d7 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999632232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:27 compute-0 nova_compute[243452]: 2026-02-28 10:17:27.435 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:27 compute-0 nova_compute[243452]: 2026-02-28 10:17:27.469 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:27 compute-0 nova_compute[243452]: 2026-02-28 10:17:27.478 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/999632232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772650957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.017 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.020 243456 DEBUG nova.virt.libvirt.vif [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1580853116',display_name='tempest-ServerAddressesNegativeTestJSON-server-1580853116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1580853116',id=91,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a2bd9695a47497e8f68da82a992d545',ramdisk_id='',reservation_id='r-5etcxjkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-632901319',owner_user_name='tempest-ServerAddressesNegativeTestJSON-632901319-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:24Z,user_data=None,user_id='3ae46addcec3432c94c933fdd18dfa9c',uuid=cc735034-2b3b-4f09-858d-124f2b3e71bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.020 243456 DEBUG nova.network.os_vif_util [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converting VIF {"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.022 243456 DEBUG nova.network.os_vif_util [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.024 243456 DEBUG nova.objects.instance [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lazy-loading 'pci_devices' on Instance uuid cc735034-2b3b-4f09-858d-124f2b3e71bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.050 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <uuid>cc735034-2b3b-4f09-858d-124f2b3e71bd</uuid>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <name>instance-0000005b</name>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1580853116</nova:name>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:26</nova:creationTime>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:user uuid="3ae46addcec3432c94c933fdd18dfa9c">tempest-ServerAddressesNegativeTestJSON-632901319-project-member</nova:user>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:project uuid="9a2bd9695a47497e8f68da82a992d545">tempest-ServerAddressesNegativeTestJSON-632901319</nova:project>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <nova:port uuid="a51a44ab-ad43-4c65-94c0-22c6bfcc9039">
Feb 28 10:17:28 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="serial">cc735034-2b3b-4f09-858d-124f2b3e71bd</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="uuid">cc735034-2b3b-4f09-858d-124f2b3e71bd</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cc735034-2b3b-4f09-858d-124f2b3e71bd_disk">
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config">
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:28 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:62:f2:f2"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <target dev="tapa51a44ab-ad"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/console.log" append="off"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:28 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:28 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:28 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:28 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:28 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.052 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Preparing to wait for external event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.052 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.053 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.053 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.055 243456 DEBUG nova.virt.libvirt.vif [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1580853116',display_name='tempest-ServerAddressesNegativeTestJSON-server-1580853116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1580853116',id=91,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a2bd9695a47497e8f68da82a992d545',ramdisk_id='',reservation_id='r-5etcxjkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-632901319',owner_user_name='tempest-ServerAddressesNegativeTestJSON-632901319-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:17:24Z,user_data=None,user_id='3ae46addcec3432c94c933fdd18dfa9c',uuid=cc735034-2b3b-4f09-858d-124f2b3e71bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.055 243456 DEBUG nova.network.os_vif_util [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converting VIF {"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.056 243456 DEBUG nova.network.os_vif_util [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.057 243456 DEBUG os_vif [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.058 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.059 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.060 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.064 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa51a44ab-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.065 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa51a44ab-ad, col_values=(('external_ids', {'iface-id': 'a51a44ab-ad43-4c65-94c0-22c6bfcc9039', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:f2:f2', 'vm-uuid': 'cc735034-2b3b-4f09-858d-124f2b3e71bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.067 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:28 compute-0 NetworkManager[49805]: <info>  [1772273848.0684] manager: (tapa51a44ab-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.079 243456 INFO os_vif [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad')
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.144 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.145 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.145 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] No VIF found with MAC fa:16:3e:62:f2:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.146 243456 INFO nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Using config drive
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.178 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.193 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.194 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:17:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 357 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 199 op/s
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.339 243456 INFO nova.compute.manager [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Resuming
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.341 243456 DEBUG nova.objects.instance [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'flavor' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.372 243456 DEBUG oslo_concurrency.lockutils [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.373 243456 DEBUG oslo_concurrency.lockutils [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquired lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.374 243456 DEBUG nova.network.neutron [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb 28 10:17:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb 28 10:17:28 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb 28 10:17:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1772650957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:28 compute-0 ceph-mon[76304]: pgmap v1552: 305 pgs: 305 active+clean; 357 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 199 op/s
Feb 28 10:17:28 compute-0 ceph-mon[76304]: osdmap e243: 3 total, 3 up, 3 in
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.688 243456 INFO nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Creating config drive at /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.692 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpt7_4ua29 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.828 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpt7_4ua29" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.866 243456 DEBUG nova.storage.rbd_utils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] rbd image cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.873 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:28 compute-0 nova_compute[243452]: 2026-02-28 10:17:28.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.023 243456 DEBUG oslo_concurrency.processutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config cc735034-2b3b-4f09-858d-124f2b3e71bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.024 243456 INFO nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Deleting local config drive /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd/disk.config because it was imported into RBD.
Feb 28 10:17:29 compute-0 kernel: tapa51a44ab-ad: entered promiscuous mode
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.0656] manager: (tapa51a44ab-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00842|binding|INFO|Claiming lport a51a44ab-ad43-4c65-94c0-22c6bfcc9039 for this chassis.
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00843|binding|INFO|a51a44ab-ad43-4c65-94c0-22c6bfcc9039: Claiming fa:16:3e:62:f2:f2 10.100.0.9
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.082 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:f2 10.100.0.9'], port_security=['fa:16:3e:62:f2:f2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cc735034-2b3b-4f09-858d-124f2b3e71bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a2bd9695a47497e8f68da82a992d545', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ba58661-0762-4cbe-9be3-bbdcfffd9b74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bea5839-d462-4fe4-acf7-0db53ac15604, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a51a44ab-ad43-4c65-94c0-22c6bfcc9039) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.084 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a51a44ab-ad43-4c65-94c0-22c6bfcc9039 in datapath 12a1e1ce-14e8-4472-8ef8-ee5b53098b66 bound to our chassis
Feb 28 10:17:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:17:29
Feb 28 10:17:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:17:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:17:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'volumes']
Feb 28 10:17:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.090 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12a1e1ce-14e8-4472-8ef8-ee5b53098b66
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92e9a738-0065-4534-911d-d3c861fd532e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.101 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12a1e1ce-11 in ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:29 compute-0 systemd-udevd[320311]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.103 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12a1e1ce-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3a0858-79a3-4349-a5b6-b83823b879bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.109 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee13e48-5d0c-49b8-88be-0c2ef0e5292c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 systemd-machined[209480]: New machine qemu-107-instance-0000005b.
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00844|binding|INFO|Setting lport a51a44ab-ad43-4c65-94c0-22c6bfcc9039 ovn-installed in OVS
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00845|binding|INFO|Setting lport a51a44ab-ad43-4c65-94c0-22c6bfcc9039 up in Southbound
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.1189] device (tapa51a44ab-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.1198] device (tapa51a44ab-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:29 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-0000005b.
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.121 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[bb520506-6d09-4dac-af3c-b432be389b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.138 243456 DEBUG nova.network.neutron [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Updated VIF entry in instance network info cache for port a51a44ab-ad43-4c65-94c0-22c6bfcc9039. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.139 243456 DEBUG nova.network.neutron [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Updating instance_info_cache with network_info: [{"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.144 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41fe9cd4-490c-4357-9b27-2111eb828e8d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.159 243456 DEBUG oslo_concurrency.lockutils [req-eed776a1-a34f-48c2-add5-cd81ab3a9392 req-60f6fb61-8ffd-4a01-a82e-6bb327c3b042 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cc735034-2b3b-4f09-858d-124f2b3e71bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.168 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5840e8c3-dce3-495d-b58c-f612d9fb966c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.175 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04cd1719-4c71-4f7d-9a01-2407913b4379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 systemd-udevd[320314]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.1769] manager: (tap12a1e1ce-10): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.201 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d8809f02-f3a0-49ac-95b5-2d0704d9bd19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.204 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ff258e3e-ebef-421d-b941-39bfa63b889c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.2236] device (tap12a1e1ce-10): carrier: link connected
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.228 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8b9bce-aadd-49d5-8e15-f316774f97bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.246 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[44af739e-906e-4900-990a-994fbc069750]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12a1e1ce-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f7:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535651, 'reachable_time': 36793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320346, 'error': None, 'target': 'ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.259 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1e865e-634a-4897-96ff-bd6a5462b8b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:f72d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535651, 'tstamp': 535651}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320347, 'error': None, 'target': 'ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.275 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[176748a5-358c-4290-8d23-f9205b40992f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12a1e1ce-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:f7:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535651, 'reachable_time': 36793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320348, 'error': None, 'target': 'ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c30a035d-7077-4a27-81be-e0a0cdae40bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.303 243456 DEBUG nova.compute.manager [req-4276b11f-3074-4023-9ed5-46d98f66395d req-cc805265-b21e-4d75-bf16-38b81941249f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.304 243456 DEBUG oslo_concurrency.lockutils [req-4276b11f-3074-4023-9ed5-46d98f66395d req-cc805265-b21e-4d75-bf16-38b81941249f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.304 243456 DEBUG oslo_concurrency.lockutils [req-4276b11f-3074-4023-9ed5-46d98f66395d req-cc805265-b21e-4d75-bf16-38b81941249f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.304 243456 DEBUG oslo_concurrency.lockutils [req-4276b11f-3074-4023-9ed5-46d98f66395d req-cc805265-b21e-4d75-bf16-38b81941249f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.305 243456 DEBUG nova.compute.manager [req-4276b11f-3074-4023-9ed5-46d98f66395d req-cc805265-b21e-4d75-bf16-38b81941249f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Processing event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.356 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f30f8f8-9932-446e-836e-81079f82d444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.358 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12a1e1ce-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.358 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.358 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12a1e1ce-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.360 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.3610] manager: (tap12a1e1ce-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Feb 28 10:17:29 compute-0 kernel: tap12a1e1ce-10: entered promiscuous mode
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12a1e1ce-10, col_values=(('external_ids', {'iface-id': 'b8e91af7-e0ae-4025-99c0-dae71c53a2f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00846|binding|INFO|Releasing lport b8e91af7-e0ae-4025-99c0-dae71c53a2f4 from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.382 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12a1e1ce-14e8-4472-8ef8-ee5b53098b66.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12a1e1ce-14e8-4472-8ef8-ee5b53098b66.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.383 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc285c6-6988-4223-a846-91c9db1daa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.384 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-12a1e1ce-14e8-4472-8ef8-ee5b53098b66
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/12a1e1ce-14e8-4472-8ef8-ee5b53098b66.pid.haproxy
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 12a1e1ce-14e8-4472-8ef8-ee5b53098b66
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:29.385 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'env', 'PROCESS_TAG=haproxy-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12a1e1ce-14e8-4472-8ef8-ee5b53098b66.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.638 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.6400] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Feb 28 10:17:29 compute-0 NetworkManager[49805]: <info>  [1772273849.6408] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00847|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00848|binding|INFO|Releasing lport b8e91af7-e0ae-4025-99c0-dae71c53a2f4 from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00849|binding|INFO|Releasing lport b833c568-d94d-4da6-b765-0f13045f9c5d from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.662 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273849.6621366, cc735034-2b3b-4f09-858d-124f2b3e71bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] VM Started (Lifecycle Event)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.668 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.672 243456 INFO nova.virt.libvirt.driver [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Instance spawned successfully.
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.673 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00850|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00851|binding|INFO|Releasing lport b8e91af7-e0ae-4025-99c0-dae71c53a2f4 from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 ovn_controller[146846]: 2026-02-28T10:17:29Z|00852|binding|INFO|Releasing lport b833c568-d94d-4da6-b765-0f13045f9c5d from this chassis (sb_readonly=0)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.703 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:29 compute-0 podman[320426]: 2026-02-28 10:17:29.738210612 +0000 UTC m=+0.052627231 container create c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:17:29 compute-0 systemd[1]: Started libpod-conmon-c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383.scope.
Feb 28 10:17:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a5ac37f35f8b27419eb5200be5a28fabb27bc2c8ca85b94d246508c615c51b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:29 compute-0 podman[320426]: 2026-02-28 10:17:29.719448867 +0000 UTC m=+0.033865506 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:29 compute-0 podman[320426]: 2026-02-28 10:17:29.826794877 +0000 UTC m=+0.141211496 container init c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:17:29 compute-0 podman[320426]: 2026-02-28 10:17:29.834537588 +0000 UTC m=+0.148954217 container start c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.837 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.853 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.858 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.859 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.859 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.860 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.860 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.861 243456 DEBUG nova.virt.libvirt.driver [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:17:29 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [NOTICE]   (320445) : New worker (320447) forked
Feb 28 10:17:29 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [NOTICE]   (320445) : Loading success.
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.887 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.887 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273849.662348, cc735034-2b3b-4f09-858d-124f2b3e71bd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.888 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] VM Paused (Lifecycle Event)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.956 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.962 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273849.6687124, cc735034-2b3b-4f09-858d-124f2b3e71bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.963 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] VM Resumed (Lifecycle Event)
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.967 243456 INFO nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Took 5.42 seconds to spawn the instance on the hypervisor.
Feb 28 10:17:29 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.968 243456 DEBUG nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:29.999 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.003 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.013 243456 DEBUG nova.network.neutron [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [{"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.031 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.032 243456 DEBUG oslo_concurrency.lockutils [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Releasing lock "refresh_cache-4db5bcd7-8b41-4850-8c88-89ad757c8558" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.034 243456 INFO nova.compute.manager [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Took 6.58 seconds to build instance.
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.039 243456 DEBUG nova.virt.libvirt.vif [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:25Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.040 243456 DEBUG nova.network.os_vif_util [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.041 243456 DEBUG nova.network.os_vif_util [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.041 243456 DEBUG os_vif [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.042 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.043 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.046 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f49649-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.046 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f49649-61, col_values=(('external_ids', {'iface-id': '52f49649-6181-4c24-95b7-fc7227858c70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:e7:39', 'vm-uuid': '4db5bcd7-8b41-4850-8c88-89ad757c8558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.046 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.047 243456 INFO os_vif [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.052 243456 DEBUG oslo_concurrency.lockutils [None req-732fb04c-868b-463a-95af-794605186649 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.064 243456 DEBUG nova.objects.instance [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.1341] manager: (tap52f49649-61): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Feb 28 10:17:30 compute-0 kernel: tap52f49649-61: entered promiscuous mode
Feb 28 10:17:30 compute-0 systemd-udevd[320340]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:30 compute-0 ovn_controller[146846]: 2026-02-28T10:17:30Z|00853|binding|INFO|Claiming lport 52f49649-6181-4c24-95b7-fc7227858c70 for this chassis.
Feb 28 10:17:30 compute-0 ovn_controller[146846]: 2026-02-28T10:17:30Z|00854|binding|INFO|52f49649-6181-4c24-95b7-fc7227858c70: Claiming fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.147 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '10', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 bound to our chassis
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.151 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:30 compute-0 ovn_controller[146846]: 2026-02-28T10:17:30Z|00855|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 ovn-installed in OVS
Feb 28 10:17:30 compute-0 ovn_controller[146846]: 2026-02-28T10:17:30Z|00856|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 up in Southbound
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.1633] device (tap52f49649-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:30 compute-0 systemd-machined[209480]: New machine qemu-108-instance-00000051.
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.1644] device (tap52f49649-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4015cf-28d0-4bd3-9187-2f4a4ab0e6cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.166 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce4b855a-c1 in ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.167 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce4b855a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.168 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b73c4f7a-ac04-49e8-9ce3-19086983e554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1e042a-3051-4000-8af9-bd0a3a43d5fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000051.
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.180 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e6959556-9c05-45bd-8f57-e693ed04e375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.196 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0923f3d-5c13-4592-92ce-59e80d2e632c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.220 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e09f36f8-8164-4086-977b-bc5996de23cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.2334] manager: (tapce4b855a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/370)
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.232 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[88ea966e-f45c-4c69-8e5a-d49c4bc0b1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.270 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3289c1-e3b5-4fd1-bc40-42df26ebc761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.274 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[59a9c5dd-f84a-4d14-84ea-1d4ca5183fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.2960] device (tapce4b855a-c0): carrier: link connected
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.301 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e042a4e3-1b30-4316-b7d0-f4135774109f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d15a5ae-7bb0-40b6-836b-42b55efe2fbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535758, 'reachable_time': 18610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320488, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9ef277-85f9-4a88-9f69-dd4927ab7a5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:cf33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535758, 'tstamp': 535758}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320489, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caa55935-d0d9-492c-939a-e0686fe44072]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce4b855a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:cf:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535758, 'reachable_time': 18610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320490, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.387 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49fa187d-058c-4c25-9e9b-628ccf806ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.453 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ac12cb02-0e15-47f5-b782-ae1e1f2d3113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.455 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.456 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce4b855a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.458 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 NetworkManager[49805]: <info>  [1772273850.4596] manager: (tapce4b855a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Feb 28 10:17:30 compute-0 kernel: tapce4b855a-c0: entered promiscuous mode
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.465 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce4b855a-c0, col_values=(('external_ids', {'iface-id': 'f0acdf7e-44d8-43d4-ade1-89536f5a8e0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.467 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 ovn_controller[146846]: 2026-02-28T10:17:30Z|00857|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.472 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.475 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af297ee0-f02d-4f76-abbc-8cb56bcb2689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.477 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.pid.haproxy
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID ce4b855a-cb9e-4dad-bfe0-ddfe326a1505
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:30.479 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'env', 'PROCESS_TAG=haproxy-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce4b855a-cb9e-4dad-bfe0-ddfe326a1505.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.635 243456 DEBUG nova.compute.manager [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.635 243456 DEBUG nova.compute.manager [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing instance network info cache due to event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.636 243456 DEBUG oslo_concurrency.lockutils [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.636 243456 DEBUG oslo_concurrency.lockutils [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.636 243456 DEBUG nova.network.neutron [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing network info cache for port 9e45b488-45d1-4293-a1a6-7b01b726b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:17:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.657 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 4db5bcd7-8b41-4850-8c88-89ad757c8558 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.658 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273850.65709, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.658 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Started (Lifecycle Event)
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.666 243456 DEBUG nova.compute.manager [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.668 243456 DEBUG nova.objects.instance [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.674 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.677 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.681 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance running successfully.
Feb 28 10:17:30 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.684 243456 DEBUG nova.virt.libvirt.guest [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.685 243456 DEBUG nova.compute.manager [None req-92fa0c37-f3d5-4279-8a4c-53fa7fcbafab 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.698 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.698 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273850.6610188, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.699 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Resumed (Lifecycle Event)
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.725 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:30 compute-0 nova_compute[243452]: 2026-02-28 10:17:30.738 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:30 compute-0 podman[320562]: 2026-02-28 10:17:30.904643189 +0000 UTC m=+0.068415182 container create eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:17:30 compute-0 systemd[1]: Started libpod-conmon-eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684.scope.
Feb 28 10:17:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:30 compute-0 podman[320562]: 2026-02-28 10:17:30.875858628 +0000 UTC m=+0.039630641 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca5eb5b157edcd9696af1ba29adfa1c81a6ec70ea2a082ca9342cc4a008a76b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:30 compute-0 podman[320562]: 2026-02-28 10:17:30.991523475 +0000 UTC m=+0.155295488 container init eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:17:31 compute-0 podman[320562]: 2026-02-28 10:17:31.030919618 +0000 UTC m=+0.194691611 container start eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:17:31 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [NOTICE]   (320579) : New worker (320581) forked
Feb 28 10:17:31 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [NOTICE]   (320579) : Loading success.
Feb 28 10:17:31 compute-0 ceph-mon[76304]: pgmap v1554: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.388 243456 DEBUG nova.compute.manager [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.389 243456 DEBUG oslo_concurrency.lockutils [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.390 243456 DEBUG oslo_concurrency.lockutils [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.390 243456 DEBUG oslo_concurrency.lockutils [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.391 243456 DEBUG nova.compute.manager [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] No waiting events found dispatching network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:31 compute-0 nova_compute[243452]: 2026-02-28 10:17:31.391 243456 WARNING nova.compute.manager [req-9010f7c1-2df4-49aa-972a-08cd06974b29 req-3985a7bf-e830-49ad-96ae-2a8afe5da39a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received unexpected event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 for instance with vm_state active and task_state None.
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.262 243456 DEBUG nova.network.neutron [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updated VIF entry in instance network info cache for port 9e45b488-45d1-4293-a1a6-7b01b726b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.263 243456 DEBUG nova.network.neutron [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 254 op/s
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.282 243456 DEBUG oslo_concurrency.lockutils [req-2639349c-979a-4e5f-83ec-722f66a59042 req-7bf9a339-0889-4bd3-a59e-07819babaca3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.429 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.430 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.431 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.431 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.432 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.433 243456 INFO nova.compute.manager [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Terminating instance
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.435 243456 DEBUG nova.compute.manager [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:17:32 compute-0 kernel: tapa51a44ab-ad (unregistering): left promiscuous mode
Feb 28 10:17:32 compute-0 NetworkManager[49805]: <info>  [1772273852.4732] device (tapa51a44ab-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:17:32 compute-0 ovn_controller[146846]: 2026-02-28T10:17:32Z|00858|binding|INFO|Releasing lport a51a44ab-ad43-4c65-94c0-22c6bfcc9039 from this chassis (sb_readonly=0)
Feb 28 10:17:32 compute-0 ovn_controller[146846]: 2026-02-28T10:17:32Z|00859|binding|INFO|Setting lport a51a44ab-ad43-4c65-94c0-22c6bfcc9039 down in Southbound
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 ovn_controller[146846]: 2026-02-28T10:17:32Z|00860|binding|INFO|Removing iface tapa51a44ab-ad ovn-installed in OVS
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.496 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:f2 10.100.0.9'], port_security=['fa:16:3e:62:f2:f2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cc735034-2b3b-4f09-858d-124f2b3e71bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a2bd9695a47497e8f68da82a992d545', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ba58661-0762-4cbe-9be3-bbdcfffd9b74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bea5839-d462-4fe4-acf7-0db53ac15604, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a51a44ab-ad43-4c65-94c0-22c6bfcc9039) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.497 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a51a44ab-ad43-4c65-94c0-22c6bfcc9039 in datapath 12a1e1ce-14e8-4472-8ef8-ee5b53098b66 unbound from our chassis
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.500 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12a1e1ce-14e8-4472-8ef8-ee5b53098b66, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.501 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31ce5537-f0a3-47bf-a058-720b4b25d39e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.501 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66 namespace which is not needed anymore
Feb 28 10:17:32 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Feb 28 10:17:32 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d0000005b.scope: Consumed 3.261s CPU time.
Feb 28 10:17:32 compute-0 systemd-machined[209480]: Machine qemu-107-instance-0000005b terminated.
Feb 28 10:17:32 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [NOTICE]   (320445) : haproxy version is 2.8.14-c23fe91
Feb 28 10:17:32 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [NOTICE]   (320445) : path to executable is /usr/sbin/haproxy
Feb 28 10:17:32 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [WARNING]  (320445) : Exiting Master process...
Feb 28 10:17:32 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [ALERT]    (320445) : Current worker (320447) exited with code 143 (Terminated)
Feb 28 10:17:32 compute-0 neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66[320441]: [WARNING]  (320445) : All workers exited. Exiting... (0)
Feb 28 10:17:32 compute-0 systemd[1]: libpod-c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383.scope: Deactivated successfully.
Feb 28 10:17:32 compute-0 podman[320613]: 2026-02-28 10:17:32.617110877 +0000 UTC m=+0.036991535 container died c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383-userdata-shm.mount: Deactivated successfully.
Feb 28 10:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a5ac37f35f8b27419eb5200be5a28fabb27bc2c8ca85b94d246508c615c51b-merged.mount: Deactivated successfully.
Feb 28 10:17:32 compute-0 podman[320613]: 2026-02-28 10:17:32.672401163 +0000 UTC m=+0.092281811 container cleanup c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.678 243456 INFO nova.virt.libvirt.driver [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Instance destroyed successfully.
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.678 243456 DEBUG nova.objects.instance [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lazy-loading 'resources' on Instance uuid cc735034-2b3b-4f09-858d-124f2b3e71bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:32 compute-0 systemd[1]: libpod-conmon-c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383.scope: Deactivated successfully.
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.693 243456 DEBUG nova.virt.libvirt.vif [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1580853116',display_name='tempest-ServerAddressesNegativeTestJSON-server-1580853116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1580853116',id=91,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a2bd9695a47497e8f68da82a992d545',ramdisk_id='',reservation_id='r-5etcxjkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-632901319',owner_user_name='tempest-ServerAddressesNegativeTestJSON-632901319-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:30Z,user_data=None,user_id='3ae46addcec3432c94c933fdd18dfa9c',uuid=cc735034-2b3b-4f09-858d-124f2b3e71bd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.694 243456 DEBUG nova.network.os_vif_util [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converting VIF {"id": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "address": "fa:16:3e:62:f2:f2", "network": {"id": "12a1e1ce-14e8-4472-8ef8-ee5b53098b66", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-614719452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a2bd9695a47497e8f68da82a992d545", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51a44ab-ad", "ovs_interfaceid": "a51a44ab-ad43-4c65-94c0-22c6bfcc9039", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.695 243456 DEBUG nova.network.os_vif_util [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.695 243456 DEBUG os_vif [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.697 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa51a44ab-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.700 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.702 243456 INFO os_vif [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:f2,bridge_name='br-int',has_traffic_filtering=True,id=a51a44ab-ad43-4c65-94c0-22c6bfcc9039,network=Network(12a1e1ce-14e8-4472-8ef8-ee5b53098b66),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51a44ab-ad')
Feb 28 10:17:32 compute-0 podman[320652]: 2026-02-28 10:17:32.74001046 +0000 UTC m=+0.048761241 container remove c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.744 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb5e269-61f1-4736-8436-7831007970c5]: (4, ('Sat Feb 28 10:17:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66 (c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383)\nc59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383\nSat Feb 28 10:17:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66 (c59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383)\nc59dcbab0413c1c9fc3f8ad3b1a4ed0ecff1c81e47876b0e2a5dc521a5612383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.746 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4addf3e0-61ab-421a-801b-164239748b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.748 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12a1e1ce-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:32 compute-0 kernel: tap12a1e1ce-10: left promiscuous mode
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.763 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9c4e8b-1a8d-4a17-bc75-6ba48ca1e7b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.765 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.766 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.766 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.767 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.769 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.770 243456 WARNING nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state None.
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.770 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.771 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.771 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.772 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.772 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.773 243456 WARNING nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state active and task_state None.
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.774 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-changed-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.774 243456 DEBUG nova.compute.manager [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Refreshing instance network info cache due to event network-changed-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.775 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.775 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:32 compute-0 nova_compute[243452]: 2026-02-28 10:17:32.775 243456 DEBUG nova.network.neutron [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Refreshing network info cache for port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.775 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[654534e5-5302-47b3-be34-abae62e572bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.776 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a97ef7f-7519-4136-901e-c7e2da753da7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ec827f4e-d719-41fe-ab5a-fb298cebb31e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535645, 'reachable_time': 41716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320683, 'error': None, 'target': 'ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d12a1e1ce\x2d14e8\x2d4472\x2d8ef8\x2dee5b53098b66.mount: Deactivated successfully.
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.794 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12a1e1ce-14e8-4472-8ef8-ee5b53098b66 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:17:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:32.794 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee71f0a-28d5-4f72-bf36-8aab17aee6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.029 243456 INFO nova.virt.libvirt.driver [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Deleting instance files /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd_del
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.030 243456 INFO nova.virt.libvirt.driver [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Deletion of /var/lib/nova/instances/cc735034-2b3b-4f09-858d-124f2b3e71bd_del complete
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.082 243456 INFO nova.compute.manager [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.083 243456 DEBUG oslo.service.loopingcall [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.084 243456 DEBUG nova.compute.manager [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.084 243456 DEBUG nova.network.neutron [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:17:33 compute-0 ceph-mon[76304]: pgmap v1555: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 254 op/s
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.462 243456 DEBUG nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-unplugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.463 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.463 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.463 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.464 243456 DEBUG nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] No waiting events found dispatching network-vif-unplugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.464 243456 DEBUG nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-unplugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.464 243456 DEBUG nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.465 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.465 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.465 243456 DEBUG oslo_concurrency.lockutils [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.466 243456 DEBUG nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] No waiting events found dispatching network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.466 243456 WARNING nova.compute.manager [req-0167249b-3970-44c0-b7b8-359914c2e797 req-05e8d4dd-9782-4fc6-ae6b-f2faada3ccdc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received unexpected event network-vif-plugged-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 for instance with vm_state active and task_state deleting.
Feb 28 10:17:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:33 compute-0 nova_compute[243452]: 2026-02-28 10:17:33.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.213 243456 DEBUG nova.network.neutron [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.231 243456 INFO nova.compute.manager [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Took 2.15 seconds to deallocate network for instance.
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.292 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.293 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:35 compute-0 ceph-mon[76304]: pgmap v1556: 305 pgs: 305 active+clean; 372 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.420 243456 DEBUG oslo_concurrency.processutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:35 compute-0 nova_compute[243452]: 2026-02-28 10:17:35.544 243456 DEBUG nova.compute.manager [req-48c16efd-1295-4d7d-af3a-afbd2dfc1467 req-ee1fff59-8e8a-486a-9f69-be09c1df93ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Received event network-vif-deleted-a51a44ab-ad43-4c65-94c0-22c6bfcc9039 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093056689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.016 243456 DEBUG oslo_concurrency.processutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.022 243456 DEBUG nova.compute.provider_tree [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.038 243456 DEBUG nova.scheduler.client.report [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.066 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.100 243456 INFO nova.scheduler.client.report [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Deleted allocations for instance cc735034-2b3b-4f09-858d-124f2b3e71bd
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.159 243456 DEBUG nova.network.neutron [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updated VIF entry in instance network info cache for port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.160 243456 DEBUG nova.network.neutron [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.168 243456 DEBUG oslo_concurrency.lockutils [None req-d78d50be-239e-4c77-ba17-7ed6762e6072 3ae46addcec3432c94c933fdd18dfa9c 9a2bd9695a47497e8f68da82a992d545 - - default default] Lock "cc735034-2b3b-4f09-858d-124f2b3e71bd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:36 compute-0 nova_compute[243452]: 2026-02-28 10:17:36.188 243456 DEBUG oslo_concurrency.lockutils [req-37c7cf09-8044-47c1-aaa2-914995097b6c req-fd3d339c-c669-4630-a9bb-27de3f1bbcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 354 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.4 MiB/s wr, 295 op/s
Feb 28 10:17:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2093056689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:37 compute-0 ovn_controller[146846]: 2026-02-28T10:17:37Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:e7:39 10.100.0.9
Feb 28 10:17:37 compute-0 ceph-mon[76304]: pgmap v1557: 305 pgs: 305 active+clean; 354 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.4 MiB/s wr, 295 op/s
Feb 28 10:17:37 compute-0 ovn_controller[146846]: 2026-02-28T10:17:37Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:83:cd 10.100.0.12
Feb 28 10:17:37 compute-0 nova_compute[243452]: 2026-02-28 10:17:37.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:37 compute-0 ovn_controller[146846]: 2026-02-28T10:17:37Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:83:cd 10.100.0.12
Feb 28 10:17:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 338 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.0 MiB/s wr, 297 op/s
Feb 28 10:17:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:38 compute-0 ovn_controller[146846]: 2026-02-28T10:17:38Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:17:38 compute-0 ovn_controller[146846]: 2026-02-28T10:17:38Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:17:38 compute-0 nova_compute[243452]: 2026-02-28 10:17:38.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:39 compute-0 ceph-mon[76304]: pgmap v1558: 305 pgs: 305 active+clean; 338 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.0 MiB/s wr, 297 op/s
Feb 28 10:17:40 compute-0 podman[320708]: 2026-02-28 10:17:40.169330414 +0000 UTC m=+0.099515667 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 28 10:17:40 compute-0 podman[320707]: 2026-02-28 10:17:40.171437404 +0000 UTC m=+0.105060425 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 361 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020218835706363675 of space, bias 1.0, pg target 0.6065650711909103 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024931118452138344 of space, bias 1.0, pg target 0.7479335535641503 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.740026216779579e-07 of space, bias 4.0, pg target 0.0009288031460135494 quantized to 16 (current 16)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:17:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:17:41 compute-0 ceph-mon[76304]: pgmap v1559: 305 pgs: 305 active+clean; 361 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Feb 28 10:17:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 388 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.2 MiB/s wr, 284 op/s
Feb 28 10:17:42 compute-0 ovn_controller[146846]: 2026-02-28T10:17:42Z|00861|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:42 compute-0 ovn_controller[146846]: 2026-02-28T10:17:42Z|00862|binding|INFO|Releasing lport f0acdf7e-44d8-43d4-ade1-89536f5a8e0e from this chassis (sb_readonly=0)
Feb 28 10:17:42 compute-0 ovn_controller[146846]: 2026-02-28T10:17:42Z|00863|binding|INFO|Releasing lport b833c568-d94d-4da6-b765-0f13045f9c5d from this chassis (sb_readonly=0)
Feb 28 10:17:42 compute-0 nova_compute[243452]: 2026-02-28 10:17:42.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:42 compute-0 nova_compute[243452]: 2026-02-28 10:17:42.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:43 compute-0 ceph-mon[76304]: pgmap v1560: 305 pgs: 305 active+clean; 388 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.2 MiB/s wr, 284 op/s
Feb 28 10:17:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:43 compute-0 nova_compute[243452]: 2026-02-28 10:17:43.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 391 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.365 243456 INFO nova.compute.manager [None req-822c04f1-15d3-4c90-96c1-3a5d4f683a12 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Get console output
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.373 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.682 243456 DEBUG oslo_concurrency.lockutils [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.682 243456 DEBUG oslo_concurrency.lockutils [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.683 243456 DEBUG nova.compute.manager [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.687 243456 DEBUG nova.compute.manager [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.688 243456 DEBUG nova.objects.instance [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:44 compute-0 nova_compute[243452]: 2026-02-28 10:17:44.717 243456 DEBUG nova.virt.libvirt.driver [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:17:45 compute-0 ceph-mon[76304]: pgmap v1561: 305 pgs: 305 active+clean; 391 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Feb 28 10:17:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:17:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/64618447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:17:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:17:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/64618447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.477 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.478 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.478 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.479 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.479 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.481 243456 INFO nova.compute.manager [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Terminating instance
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.483 243456 DEBUG nova.compute.manager [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:17:45 compute-0 kernel: tap52f49649-61 (unregistering): left promiscuous mode
Feb 28 10:17:45 compute-0 NetworkManager[49805]: <info>  [1772273865.5330] device (tap52f49649-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 ovn_controller[146846]: 2026-02-28T10:17:45Z|00864|binding|INFO|Releasing lport 52f49649-6181-4c24-95b7-fc7227858c70 from this chassis (sb_readonly=0)
Feb 28 10:17:45 compute-0 ovn_controller[146846]: 2026-02-28T10:17:45Z|00865|binding|INFO|Setting lport 52f49649-6181-4c24-95b7-fc7227858c70 down in Southbound
Feb 28 10:17:45 compute-0 ovn_controller[146846]: 2026-02-28T10:17:45Z|00866|binding|INFO|Removing iface tap52f49649-61 ovn-installed in OVS
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.551 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.556 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:e7:39 10.100.0.9'], port_security=['fa:16:3e:22:e7:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4db5bcd7-8b41-4850-8c88-89ad757c8558', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c0c4bc44c37f4a4f83c83b6105be3190', 'neutron:revision_number': '11', 'neutron:security_group_ids': '60e2cad2-1539-4f21-ae07-3933335fcb5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbe93b8f-a4a4-4682-b3ab-de91ea6bc538, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=52f49649-6181-4c24-95b7-fc7227858c70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.558 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 52f49649-6181-4c24-95b7-fc7227858c70 in datapath ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 unbound from our chassis
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.559 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c21ca974-3848-4a75-b783-d4cc775aa102]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.560 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 namespace which is not needed anymore
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000051.scope: Deactivated successfully.
Feb 28 10:17:45 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000051.scope: Consumed 7.179s CPU time.
Feb 28 10:17:45 compute-0 systemd-machined[209480]: Machine qemu-108-instance-00000051 terminated.
Feb 28 10:17:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [NOTICE]   (320579) : haproxy version is 2.8.14-c23fe91
Feb 28 10:17:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [NOTICE]   (320579) : path to executable is /usr/sbin/haproxy
Feb 28 10:17:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [WARNING]  (320579) : Exiting Master process...
Feb 28 10:17:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [ALERT]    (320579) : Current worker (320581) exited with code 143 (Terminated)
Feb 28 10:17:45 compute-0 neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505[320575]: [WARNING]  (320579) : All workers exited. Exiting... (0)
Feb 28 10:17:45 compute-0 systemd[1]: libpod-eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684.scope: Deactivated successfully.
Feb 28 10:17:45 compute-0 podman[320773]: 2026-02-28 10:17:45.692972524 +0000 UTC m=+0.051502859 container died eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.720 243456 INFO nova.virt.libvirt.driver [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Instance destroyed successfully.
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.721 243456 DEBUG nova.objects.instance [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lazy-loading 'resources' on Instance uuid 4db5bcd7-8b41-4850-8c88-89ad757c8558 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684-userdata-shm.mount: Deactivated successfully.
Feb 28 10:17:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca5eb5b157edcd9696af1ba29adfa1c81a6ec70ea2a082ca9342cc4a008a76b-merged.mount: Deactivated successfully.
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.745 243456 DEBUG nova.virt.libvirt.vif [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:15:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-76744621',display_name='tempest-ServersNegativeTestJSON-server-76744621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-76744621',id=81,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c0c4bc44c37f4a4f83c83b6105be3190',ramdisk_id='',reservation_id='r-labnea0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-621636341',owner_user_name='tempest-ServersNegativeTestJSON-621636341-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:30Z,user_data=None,user_id='7ef51521ffc947cbbce8323ec2b71753',uuid=4db5bcd7-8b41-4850-8c88-89ad757c8558,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.746 243456 DEBUG nova.network.os_vif_util [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converting VIF {"id": "52f49649-6181-4c24-95b7-fc7227858c70", "address": "fa:16:3e:22:e7:39", "network": {"id": "ce4b855a-cb9e-4dad-bfe0-ddfe326a1505", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1278521155-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c0c4bc44c37f4a4f83c83b6105be3190", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f49649-61", "ovs_interfaceid": "52f49649-6181-4c24-95b7-fc7227858c70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.747 243456 DEBUG nova.network.os_vif_util [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.748 243456 DEBUG os_vif [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.750 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f49649-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:45 compute-0 podman[320773]: 2026-02-28 10:17:45.751353458 +0000 UTC m=+0.109883793 container cleanup eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.756 243456 INFO os_vif [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:e7:39,bridge_name='br-int',has_traffic_filtering=True,id=52f49649-6181-4c24-95b7-fc7227858c70,network=Network(ce4b855a-cb9e-4dad-bfe0-ddfe326a1505),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f49649-61')
Feb 28 10:17:45 compute-0 systemd[1]: libpod-conmon-eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684.scope: Deactivated successfully.
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.805 243456 DEBUG nova.compute.manager [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.806 243456 DEBUG oslo_concurrency.lockutils [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.806 243456 DEBUG oslo_concurrency.lockutils [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.807 243456 DEBUG oslo_concurrency.lockutils [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.807 243456 DEBUG nova.compute.manager [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.807 243456 DEBUG nova.compute.manager [req-1aee6f7f-d687-4a5d-9315-3ac3586a098b req-af1f2df1-b9b6-44bb-9484-f659378b8d0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-unplugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:17:45 compute-0 podman[320815]: 2026-02-28 10:17:45.820467908 +0000 UTC m=+0.047928127 container remove eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.825 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16984bb0-e5db-409f-b61f-272cabe6e4c1]: (4, ('Sat Feb 28 10:17:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684)\neaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684\nSat Feb 28 10:17:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 (eaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684)\neaf204a4707e1f5233086b19eefc7bf3b799f13cb13cd9c0a92e3b097feb3684\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdb5029-b7d9-4f1e-ae98-ad44f50635a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.828 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce4b855a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 kernel: tapce4b855a-c0: left promiscuous mode
Feb 28 10:17:45 compute-0 nova_compute[243452]: 2026-02-28 10:17:45.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ce14c3-4e53-4dc6-831f-6f124dd3ea77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8a2230-4f29-4e57-86a1-a4c5248b8d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.863 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f027180f-fbae-40e0-a4a5-fbca4548ae8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0db58799-bfd2-40d3-a649-8ae9be7fc653]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535750, 'reachable_time': 40122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320848, 'error': None, 'target': 'ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dce4b855a\x2dcb9e\x2d4dad\x2dbfe0\x2dddfe326a1505.mount: Deactivated successfully.
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.884 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce4b855a-cb9e-4dad-bfe0-ddfe326a1505 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:17:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:45.884 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[212b7706-cef9-436a-afed-098a0c6813ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.065 243456 INFO nova.virt.libvirt.driver [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deleting instance files /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558_del
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.067 243456 INFO nova.virt.libvirt.driver [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deletion of /var/lib/nova/instances/4db5bcd7-8b41-4850-8c88-89ad757c8558_del complete
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.149 243456 INFO nova.compute.manager [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.150 243456 DEBUG oslo.service.loopingcall [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.150 243456 DEBUG nova.compute.manager [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.150 243456 DEBUG nova.network.neutron [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:17:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 393 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Feb 28 10:17:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/64618447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:17:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/64618447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.859 243456 DEBUG nova.network.neutron [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.890 243456 INFO nova.compute.manager [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Took 0.74 seconds to deallocate network for instance.
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.940 243456 DEBUG nova.compute.manager [req-160fcbba-5bec-4154-9518-cca0cf1c1c3b req-9cb374cf-65c7-49a3-9a0d-7455ca04e3bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-deleted-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.941 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:46 compute-0 nova_compute[243452]: 2026-02-28 10:17:46.942 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:47 compute-0 kernel: tap9e45b488-45 (unregistering): left promiscuous mode
Feb 28 10:17:47 compute-0 NetworkManager[49805]: <info>  [1772273867.0097] device (tap9e45b488-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:17:47 compute-0 ovn_controller[146846]: 2026-02-28T10:17:47Z|00867|binding|INFO|Releasing lport 9e45b488-45d1-4293-a1a6-7b01b726b58b from this chassis (sb_readonly=0)
Feb 28 10:17:47 compute-0 ovn_controller[146846]: 2026-02-28T10:17:47Z|00868|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b down in Southbound
Feb 28 10:17:47 compute-0 ovn_controller[146846]: 2026-02-28T10:17:47Z|00869|binding|INFO|Removing iface tap9e45b488-45 ovn-installed in OVS
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.020 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:83:cd 10.100.0.12'], port_security=['fa:16:3e:bb:83:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f8708bd-35d7-4952-ba18-0b6635872b86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6208c787-2d1b-4dd1-8098-37be7ada4419', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3b86456-762e-43a0-947f-ce5fc38977cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9e45b488-45d1-4293-a1a6-7b01b726b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.021 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9e45b488-45d1-4293-a1a6-7b01b726b58b in datapath 88dbe3c2-5a58-4a5e-93e1-51b691c2901f unbound from our chassis
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.022 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88dbe3c2-5a58-4a5e-93e1-51b691c2901f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d645bd80-2785-4026-9c26-974964de4b82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.024 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f namespace which is not needed anymore
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.029 243456 DEBUG oslo_concurrency.processutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:47 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 28 10:17:47 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000059.scope: Consumed 13.359s CPU time.
Feb 28 10:17:47 compute-0 systemd-machined[209480]: Machine qemu-105-instance-00000059 terminated.
Feb 28 10:17:47 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [NOTICE]   (319709) : haproxy version is 2.8.14-c23fe91
Feb 28 10:17:47 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [NOTICE]   (319709) : path to executable is /usr/sbin/haproxy
Feb 28 10:17:47 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [WARNING]  (319709) : Exiting Master process...
Feb 28 10:17:47 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [ALERT]    (319709) : Current worker (319711) exited with code 143 (Terminated)
Feb 28 10:17:47 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[319694]: [WARNING]  (319709) : All workers exited. Exiting... (0)
Feb 28 10:17:47 compute-0 systemd[1]: libpod-c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e.scope: Deactivated successfully.
Feb 28 10:17:47 compute-0 podman[320871]: 2026-02-28 10:17:47.154213273 +0000 UTC m=+0.041511534 container died c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c4837cf5f0863526aa86e5d0fd7a7fd1643044229c0db8b2475108be7597f6-merged.mount: Deactivated successfully.
Feb 28 10:17:47 compute-0 podman[320871]: 2026-02-28 10:17:47.193652847 +0000 UTC m=+0.080951158 container cleanup c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:17:47 compute-0 systemd[1]: libpod-conmon-c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e.scope: Deactivated successfully.
Feb 28 10:17:47 compute-0 podman[320920]: 2026-02-28 10:17:47.276851308 +0000 UTC m=+0.060477475 container remove c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee165faf-793e-4340-9293-2b1243386603]: (4, ('Sat Feb 28 10:17:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f (c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e)\nc5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e\nSat Feb 28 10:17:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f (c5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e)\nc5e52cb457bd6c3b70331a84678219cfa72077ef6462b5821b2db4e1ec64e70e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.285 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fba9c4a8-945a-4ac2-90b4-798b810361b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.287 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88dbe3c2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:47 compute-0 kernel: tap88dbe3c2-50: left promiscuous mode
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1022b5f-f11e-47b8-85fd-e2b028c10107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fd9570-4cab-49e3-8d0d-5a3ca91375bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb3bd34-7162-437f-85dc-9593daceaa6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.342 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f597fea1-750d-40d8-9fe0-eb0499432504]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534657, 'reachable_time': 29690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320947, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d88dbe3c2\x2d5a58\x2d4a5e\x2d93e1\x2d51b691c2901f.mount: Deactivated successfully.
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.345 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:17:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:47.345 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a365a530-adb4-4ab1-935c-a2acd8e294ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:47 compute-0 ceph-mon[76304]: pgmap v1562: 305 pgs: 305 active+clean; 393 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Feb 28 10:17:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:17:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4007771187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.644 243456 DEBUG oslo_concurrency.processutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.653 243456 DEBUG nova.compute.provider_tree [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.669 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273852.6684918, cc735034-2b3b-4f09-858d-124f2b3e71bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.669 243456 INFO nova.compute.manager [-] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] VM Stopped (Lifecycle Event)
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.679 243456 DEBUG nova.scheduler.client.report [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.692 243456 DEBUG nova.compute.manager [None req-cb92c9ce-a292-4963-9824-d91a7859470c - - - - - -] [instance: cc735034-2b3b-4f09-858d-124f2b3e71bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.707 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.739 243456 INFO nova.scheduler.client.report [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Deleted allocations for instance 4db5bcd7-8b41-4850-8c88-89ad757c8558
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.750 243456 INFO nova.virt.libvirt.driver [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance shutdown successfully after 3 seconds.
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.767 243456 INFO nova.virt.libvirt.driver [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance destroyed successfully.
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.768 243456 DEBUG nova.objects.instance [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.800 243456 DEBUG nova.compute.manager [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.832 243456 DEBUG oslo_concurrency.lockutils [None req-f5acba14-f653-4c48-be0b-94e8244b3cc3 7ef51521ffc947cbbce8323ec2b71753 c0c4bc44c37f4a4f83c83b6105be3190 - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.866 243456 DEBUG oslo_concurrency.lockutils [None req-fc8fd7ce-8cee-4e4d-a851-7a540b7949a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.898 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.898 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.899 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.900 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4db5bcd7-8b41-4850-8c88-89ad757c8558-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.900 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] No waiting events found dispatching network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.900 243456 WARNING nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Received unexpected event network-vif-plugged-52f49649-6181-4c24-95b7-fc7227858c70 for instance with vm_state deleted and task_state None.
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.901 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.901 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.902 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.902 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.902 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.903 243456 WARNING nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state stopped and task_state None.
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.903 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.904 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.904 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.905 243456 DEBUG oslo_concurrency.lockutils [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.905 243456 DEBUG nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:47 compute-0 nova_compute[243452]: 2026-02-28 10:17:47.905 243456 WARNING nova.compute.manager [req-3af80887-a383-4de7-95e4-abcf63fbdc93 req-0ebac242-56b4-46c8-986b-a45964cc5654 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state stopped and task_state None.
Feb 28 10:17:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 374 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Feb 28 10:17:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4007771187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:17:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:48 compute-0 nova_compute[243452]: 2026-02-28 10:17:48.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:48 compute-0 nova_compute[243452]: 2026-02-28 10:17:48.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:49 compute-0 ceph-mon[76304]: pgmap v1563: 305 pgs: 305 active+clean; 374 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.569 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.569 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.570 243456 INFO nova.compute.manager [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Rebooting instance
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.584 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.585 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.585 243456 DEBUG nova.network.neutron [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:49 compute-0 nova_compute[243452]: 2026-02-28 10:17:49.864 243456 INFO nova.compute.manager [None req-d4c214c7-ab6a-41ae-b15a-d0cb6b8b95e6 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Get console output
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.074 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.111 243456 DEBUG oslo_concurrency.lockutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.111 243456 DEBUG oslo_concurrency.lockutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.112 243456 DEBUG nova.network.neutron [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.112 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'info_cache' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 347 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 961 KiB/s rd, 3.1 MiB/s wr, 145 op/s
Feb 28 10:17:50 compute-0 ceph-mon[76304]: pgmap v1564: 305 pgs: 305 active+clean; 347 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 961 KiB/s rd, 3.1 MiB/s wr, 145 op/s
Feb 28 10:17:50 compute-0 nova_compute[243452]: 2026-02-28 10:17:50.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.181 243456 DEBUG nova.network.neutron [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.213 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.215 243456 DEBUG nova.compute.manager [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:51 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:17:51 compute-0 NetworkManager[49805]: <info>  [1772273871.3911] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 ovn_controller[146846]: 2026-02-28T10:17:51Z|00870|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:17:51 compute-0 ovn_controller[146846]: 2026-02-28T10:17:51Z|00871|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:17:51 compute-0 ovn_controller[146846]: 2026-02-28T10:17:51Z|00872|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.404 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.405 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.406 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.407 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[554388cb-6766-432f-9bde-f9939c90b6f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.408 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:17:51 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000005a.scope: Consumed 12.372s CPU time.
Feb 28 10:17:51 compute-0 systemd-machined[209480]: Machine qemu-106-instance-0000005a terminated.
Feb 28 10:17:51 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [NOTICE]   (319846) : haproxy version is 2.8.14-c23fe91
Feb 28 10:17:51 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [NOTICE]   (319846) : path to executable is /usr/sbin/haproxy
Feb 28 10:17:51 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [WARNING]  (319846) : Exiting Master process...
Feb 28 10:17:51 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [ALERT]    (319846) : Current worker (319848) exited with code 143 (Terminated)
Feb 28 10:17:51 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[319842]: [WARNING]  (319846) : All workers exited. Exiting... (0)
Feb 28 10:17:51 compute-0 systemd[1]: libpod-110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618.scope: Deactivated successfully.
Feb 28 10:17:51 compute-0 podman[320979]: 2026-02-28 10:17:51.556773026 +0000 UTC m=+0.053785114 container died 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.575 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.575 243456 DEBUG nova.objects.instance [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618-userdata-shm.mount: Deactivated successfully.
Feb 28 10:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d2e81568415edbf6eba3d94c80a2db56eb9e80cd7dae0cd9b37034546a63bb1-merged.mount: Deactivated successfully.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.595 243456 DEBUG nova.virt.libvirt.vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.596 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.596 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.597 243456 DEBUG os_vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.599 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.600 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.602 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:51 compute-0 podman[320979]: 2026-02-28 10:17:51.605351051 +0000 UTC m=+0.102363139 container cleanup 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.607 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.609 243456 INFO os_vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:17:51 compute-0 systemd[1]: libpod-conmon-110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618.scope: Deactivated successfully.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.616 243456 DEBUG nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.620 243456 WARNING nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.625 243456 DEBUG nova.virt.libvirt.host [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.625 243456 DEBUG nova.virt.libvirt.host [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.629 243456 DEBUG nova.virt.libvirt.host [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.630 243456 DEBUG nova.virt.libvirt.host [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.630 243456 DEBUG nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.630 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.630 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.631 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.632 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.632 243456 DEBUG nova.virt.hardware [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.632 243456 DEBUG nova.objects.instance [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.660 243456 DEBUG oslo_concurrency.processutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:51 compute-0 podman[321021]: 2026-02-28 10:17:51.674642836 +0000 UTC m=+0.050539702 container remove 110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.681 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e90be233-8c19-4847-a220-633c555ae163]: (4, ('Sat Feb 28 10:17:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618)\n110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618\nSat Feb 28 10:17:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618)\n110996cbde181c8cc9be5b9c5a42eb1613e142ef4604384e3f826e4682032618\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.683 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c04568-48b7-4639-9507-6604e90565ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.684 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:51 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.699 243456 DEBUG nova.network.neutron [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.703 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0354633-eda5-4165-b4f6-d289eee308ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae69409e-0fba-4019-9f5f-d77a4511f9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7332b0-3b46-4f9e-a081-7f281fb0d3c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.727 243456 DEBUG oslo_concurrency.lockutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.727 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55b1618f-4469-4189-8d4a-73250e9e81e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534747, 'reachable_time': 19530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321042, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.735 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:17:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:51.735 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[06314bba-d9f5-4882-a002-c7d3b746b1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.752 243456 INFO nova.virt.libvirt.driver [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance destroyed successfully.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.752 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.765 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.779 243456 DEBUG nova.virt.libvirt.vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:47Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.780 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.781 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.781 243456 DEBUG os_vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.782 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e45b488-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.789 243456 INFO os_vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45')
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.794 243456 DEBUG nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start _get_guest_xml network_info=[{"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.798 243456 WARNING nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.812 243456 DEBUG nova.virt.libvirt.host [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.812 243456 DEBUG nova.virt.libvirt.host [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.815 243456 DEBUG nova.virt.libvirt.host [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.815 243456 DEBUG nova.virt.libvirt.host [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.816 243456 DEBUG nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.816 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.816 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.816 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.817 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.818 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.818 243456 DEBUG nova.virt.hardware [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.818 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:51 compute-0 nova_compute[243452]: 2026-02-28 10:17:51.833 243456 DEBUG oslo_concurrency.processutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 1.5 MiB/s wr, 97 op/s
Feb 28 10:17:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384044080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41169541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:52 compute-0 nova_compute[243452]: 2026-02-28 10:17:52.429 243456 DEBUG oslo_concurrency.processutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:52 compute-0 ceph-mon[76304]: pgmap v1565: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 1.5 MiB/s wr, 97 op/s
Feb 28 10:17:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1384044080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/41169541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:52 compute-0 nova_compute[243452]: 2026-02-28 10:17:52.461 243456 DEBUG oslo_concurrency.processutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:52 compute-0 nova_compute[243452]: 2026-02-28 10:17:52.501 243456 DEBUG oslo_concurrency.processutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:52 compute-0 nova_compute[243452]: 2026-02-28 10:17:52.539 243456 DEBUG oslo_concurrency.processutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:17:52 compute-0 sshd-session[320950]: Received disconnect from 103.67.78.202 port 57280:11: Bye Bye [preauth]
Feb 28 10:17:52 compute-0 sshd-session[320950]: Disconnected from authenticating user root 103.67.78.202 port 57280 [preauth]
Feb 28 10:17:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699897466' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:17:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610100073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.121 243456 DEBUG oslo_concurrency.processutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.123 243456 DEBUG nova.virt.libvirt.vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.123 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.124 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.126 243456 DEBUG nova.objects.instance [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.131 243456 DEBUG oslo_concurrency.processutils [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.132 243456 DEBUG nova.virt.libvirt.vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:47Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.133 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.133 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.134 243456 DEBUG nova.objects.instance [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.141 243456 DEBUG nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <name>instance-0000005a</name>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:51</nova:creationTime>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 10:17:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f6:05:21"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="taped25d1f8-c3"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.142 243456 DEBUG nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.143 243456 DEBUG nova.virt.libvirt.driver [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.143 243456 DEBUG nova.virt.libvirt.vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.144 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.144 243456 DEBUG nova.network.os_vif_util [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.145 243456 DEBUG os_vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.145 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.146 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.146 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.149 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.150 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.1525] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.152 243456 DEBUG nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <uuid>5f8708bd-35d7-4952-ba18-0b6635872b86</uuid>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <name>instance-00000059</name>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2076056969</nova:name>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:17:51</nova:creationTime>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <nova:port uuid="9e45b488-45d1-4293-a1a6-7b01b726b58b">
Feb 28 10:17:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="serial">5f8708bd-35d7-4952-ba18-0b6635872b86</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="uuid">5f8708bd-35d7-4952-ba18-0b6635872b86</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5f8708bd-35d7-4952-ba18-0b6635872b86_disk">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5f8708bd-35d7-4952-ba18-0b6635872b86_disk.config">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:17:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bb:83:cd"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <target dev="tap9e45b488-45"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86/console.log" append="off"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:17:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:17:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:17:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:17:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:17:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.153 243456 DEBUG nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.154 243456 DEBUG nova.virt.libvirt.driver [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.154 243456 DEBUG nova.virt.libvirt.vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:47Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.155 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.155 243456 DEBUG nova.network.os_vif_util [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.156 243456 DEBUG os_vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.159 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.159 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.159 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.159 243456 INFO os_vif [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.162 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e45b488-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.162 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e45b488-45, col_values=(('external_ids', {'iface-id': '9e45b488-45d1-4293-a1a6-7b01b726b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:83:cd', 'vm-uuid': '5f8708bd-35d7-4952-ba18-0b6635872b86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.1644] manager: (tap9e45b488-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.172 243456 INFO os_vif [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45')
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2632] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Feb 28 10:17:53 compute-0 systemd-udevd[320960]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:53 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:17:53 compute-0 systemd-udevd[320958]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2666] manager: (tap9e45b488-45): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00873|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00874|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 kernel: tap9e45b488-45: entered promiscuous mode
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2776] device (tap9e45b488-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2792] device (tap9e45b488-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.280 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2825] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.281 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.2843] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.283 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00875|binding|INFO|Claiming lport 9e45b488-45d1-4293-a1a6-7b01b726b58b for this chassis.
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00876|binding|INFO|9e45b488-45d1-4293-a1a6-7b01b726b58b: Claiming fa:16:3e:bb:83:cd 10.100.0.12
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00877|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00878|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.296 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:83:cd 10.100.0.12'], port_security=['fa:16:3e:bb:83:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f8708bd-35d7-4952-ba18-0b6635872b86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6208c787-2d1b-4dd1-8098-37be7ada4419', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3b86456-762e-43a0-947f-ce5fc38977cf, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9e45b488-45d1-4293-a1a6-7b01b726b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[377ffde7-dff1-47ff-8bc3-93e1724ac9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.299 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.301 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[73836ba1-677f-4f0a-b50d-a403f1660232]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e555064a-5341-4072-b085-663ee820fdbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00879|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b ovn-installed in OVS
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00880|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b up in Southbound
Feb 28 10:17:53 compute-0 systemd-machined[209480]: New machine qemu-110-instance-0000005a.
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.317 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[544d8e75-359a-49d1-9d64-d9c17c9ac2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c5762-21da-4a98-be7d-7079df7e7c08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-0000005a.
Feb 28 10:17:53 compute-0 systemd-machined[209480]: New machine qemu-109-instance-00000059.
Feb 28 10:17:53 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000059.
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.362 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[207785e2-83cd-4f52-9f0d-57879e4acb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.3718] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6d68ea1f-4cbc-456b-b5e3-918c186fc328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.412 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9723fe-1923-437a-9ae4-aa262ecb38fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.417 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5489c807-0a29-40c2-9cf4-ee2ecbb354be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.4478] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.452 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[11ff0687-0ae5-44fc-807d-afe0a04729f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.471 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26ab9538-0c03-4b1a-9210-6ddd24318d95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538073, 'reachable_time': 20571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321233, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1699897466' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1610100073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.493 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[212a56e7-fa3c-4768-a2c6-1e5c6cc1de70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538073, 'tstamp': 538073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321234, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.511 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6b1d10-be81-43de-92ae-d902f036a240]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538073, 'reachable_time': 20571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321235, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c02c812-b9e9-4f3a-a4c7-65682ab73ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.608 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46af17be-37a1-41f6-aa34-3233f768b5f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.613 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 NetworkManager[49805]: <info>  [1772273873.6162] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.619 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:53 compute-0 ovn_controller[146846]: 2026-02-28T10:17:53Z|00881|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.622 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[effeb52c-b725-4122-b160-ab7c0adecba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.625 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:53.625 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.790 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.790 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273873.7860134, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.791 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.795 243456 DEBUG nova.compute.manager [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.801 243456 DEBUG nova.compute.manager [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.807 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance rebooted successfully.
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.807 243456 DEBUG nova.compute.manager [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.808 243456 INFO nova.virt.libvirt.driver [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance rebooted successfully.
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.809 243456 DEBUG nova.compute.manager [None req-a0496224-aebb-493f-9ba7-40fb235242ae 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.828 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.875 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.876 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273873.7895346, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.876 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.911 243456 DEBUG oslo_concurrency.lockutils [None req-ba656cf0-f22a-4bb2-a237-b76c514a2fca b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.916 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.929 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.952 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 5f8708bd-35d7-4952-ba18-0b6635872b86 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.953 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273873.7949684, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.954 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Resumed (Lifecycle Event)
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.972 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:53 compute-0 nova_compute[243452]: 2026-02-28 10:17:53.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:54 compute-0 podman[321351]: 2026-02-28 10:17:54.004009468 +0000 UTC m=+0.059027203 container create 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.010 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273873.8006938, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.011 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Started (Lifecycle Event)
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.032 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.038 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:17:54 compute-0 systemd[1]: Started libpod-conmon-3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f.scope.
Feb 28 10:17:54 compute-0 podman[321351]: 2026-02-28 10:17:53.976749921 +0000 UTC m=+0.031767696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3843ddc687e2bcaa38c72e0435ee3e9eeefc4775850644c85e94b2a3b4bb94b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:54 compute-0 podman[321351]: 2026-02-28 10:17:54.113600352 +0000 UTC m=+0.168618087 container init 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:17:54 compute-0 podman[321351]: 2026-02-28 10:17:54.121049894 +0000 UTC m=+0.176067629 container start 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:17:54 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [NOTICE]   (321368) : New worker (321370) forked
Feb 28 10:17:54 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [NOTICE]   (321368) : Loading success.
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.194 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9e45b488-45d1-4293-a1a6-7b01b726b58b in datapath 88dbe3c2-5a58-4a5e-93e1-51b691c2901f unbound from our chassis
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.198 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.211 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b52ca5fb-f7ed-4404-84dd-99fcde40bc0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.212 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88dbe3c2-51 in ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.215 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88dbe3c2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[000a3bd4-cd87-45b7-992a-9293df02b6df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.217 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[713c68fd-2b86-4b96-a99e-46d748119ef4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.230 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[acd68da8-af82-4e54-9b6a-918ec7f9d89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_controller[146846]: 2026-02-28T10:17:54Z|00882|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.244 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e899ba-3e8b-4176-b7ec-2013458beaa2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.266 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b36b4da5-1853-483f-99b5-09e9453d2e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 NetworkManager[49805]: <info>  [1772273874.2743] manager: (tap88dbe3c2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Feb 28 10:17:54 compute-0 systemd-udevd[321217]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.275 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d629259e-6042-46dc-aa38-bc2aca9cc121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 100 KiB/s wr, 45 op/s
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.286 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.311 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8854299a-9a0d-41d9-84cc-458029da3efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.314 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[603a12eb-d31e-4cd2-b7d2-1ef3bea924e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 NetworkManager[49805]: <info>  [1772273874.3378] device (tap88dbe3c2-50): carrier: link connected
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.342 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b890bd07-f5f8-4e51-b2f1-97241ca209ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.357 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1b2cb1-a197-4b95-aecb-1f9f52c326de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88dbe3c2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:2d:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538162, 'reachable_time': 41483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321389, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebce01f-c847-4b76-9cf3-0c4b52600be6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:2d79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538162, 'tstamp': 538162}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321390, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.388 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24ee64d5-1613-4021-aefc-afdeace0db62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88dbe3c2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:2d:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538162, 'reachable_time': 41483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321391, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.428 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc9179e-0e47-4054-88bb-f22378acec41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ceph-mon[76304]: pgmap v1566: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 100 KiB/s wr, 45 op/s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[22e8bb6f-cf68-427b-8527-af92534130c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.510 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88dbe3c2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.511 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.512 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88dbe3c2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:54 compute-0 NetworkManager[49805]: <info>  [1772273874.5155] manager: (tap88dbe3c2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:54 compute-0 kernel: tap88dbe3c2-50: entered promiscuous mode
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.525 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88dbe3c2-50, col_values=(('external_ids', {'iface-id': 'b833c568-d94d-4da6-b765-0f13045f9c5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:17:54 compute-0 ovn_controller[146846]: 2026-02-28T10:17:54Z|00883|binding|INFO|Releasing lport b833c568-d94d-4da6-b765-0f13045f9c5d from this chassis (sb_readonly=0)
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.528 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[550262bb-68db-4d61-9221-235e183013f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.531 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.pid.haproxy
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 88dbe3c2-5a58-4a5e-93e1-51b691c2901f
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:17:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:54.531 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'env', 'PROCESS_TAG=haproxy-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88dbe3c2-5a58-4a5e-93e1-51b691c2901f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:17:54 compute-0 nova_compute[243452]: 2026-02-28 10:17:54.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:54 compute-0 podman[321421]: 2026-02-28 10:17:54.933904722 +0000 UTC m=+0.047233157 container create a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:17:54 compute-0 systemd[1]: Started libpod-conmon-a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3.scope.
Feb 28 10:17:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:17:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1247daefa15ff2b01aab26a97071a132a1b94f0b4be586fbe0e2800e79c263a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:17:55 compute-0 podman[321421]: 2026-02-28 10:17:55.000548242 +0000 UTC m=+0.113876697 container init a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:17:55 compute-0 podman[321421]: 2026-02-28 10:17:55.005125052 +0000 UTC m=+0.118453487 container start a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:17:55 compute-0 podman[321421]: 2026-02-28 10:17:54.914430787 +0000 UTC m=+0.027759232 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:17:55 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [NOTICE]   (321440) : New worker (321442) forked
Feb 28 10:17:55 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [NOTICE]   (321440) : Loading success.
Feb 28 10:17:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 101 op/s
Feb 28 10:17:56 compute-0 nova_compute[243452]: 2026-02-28 10:17:56.429 243456 INFO nova.compute.manager [None req-520862e3-4972-4927-9294-35f1cce485e5 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Get console output
Feb 28 10:17:57 compute-0 nova_compute[243452]: 2026-02-28 10:17:57.058 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:57 compute-0 ceph-mon[76304]: pgmap v1567: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 101 op/s
Feb 28 10:17:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:57.856 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:57.857 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:17:57.858 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:58 compute-0 nova_compute[243452]: 2026-02-28 10:17:58.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 29 KiB/s wr, 144 op/s
Feb 28 10:17:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:17:58 compute-0 nova_compute[243452]: 2026-02-28 10:17:58.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.307 243456 DEBUG nova.compute.manager [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.307 243456 DEBUG oslo_concurrency.lockutils [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.308 243456 DEBUG oslo_concurrency.lockutils [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.308 243456 DEBUG oslo_concurrency.lockutils [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.308 243456 DEBUG nova.compute.manager [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:17:59 compute-0 nova_compute[243452]: 2026-02-28 10:17:59.308 243456 WARNING nova.compute.manager [req-c9bd9992-1c47-4179-89bf-1c34e9eba8fa req-9a91d57f-e17d-4f0f-9d69-1c30b193733c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:17:59 compute-0 ceph-mon[76304]: pgmap v1568: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 29 KiB/s wr, 144 op/s
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.8 KiB/s wr, 169 op/s
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:00 compute-0 nova_compute[243452]: 2026-02-28 10:18:00.717 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273865.7140925, 4db5bcd7-8b41-4850-8c88-89ad757c8558 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:00 compute-0 nova_compute[243452]: 2026-02-28 10:18:00.718 243456 INFO nova.compute.manager [-] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] VM Stopped (Lifecycle Event)
Feb 28 10:18:00 compute-0 nova_compute[243452]: 2026-02-28 10:18:00.751 243456 DEBUG nova.compute.manager [None req-7339079d-4547-4031-bf07-d46ba11d78bd - - - - - -] [instance: 4db5bcd7-8b41-4850-8c88-89ad757c8558] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.374 243456 DEBUG oslo_concurrency.lockutils [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.374 243456 DEBUG oslo_concurrency.lockutils [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.375 243456 DEBUG nova.compute.manager [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.379 243456 DEBUG nova.compute.manager [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.380 243456 DEBUG nova.objects.instance [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:01 compute-0 ceph-mon[76304]: pgmap v1569: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.8 KiB/s wr, 169 op/s
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.414 243456 DEBUG nova.virt.libvirt.driver [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.533 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.534 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.535 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.535 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.536 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.536 243456 WARNING nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state powering-off.
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.537 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.537 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.538 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.538 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.539 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.539 243456 WARNING nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state powering-off.
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.539 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.540 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.540 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.541 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.541 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.542 243456 WARNING nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state powering-off.
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.543 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.543 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.544 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.544 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.545 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.546 243456 WARNING nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state active and task_state None.
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.547 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.547 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.548 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.548 243456 DEBUG oslo_concurrency.lockutils [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.549 243456 DEBUG nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:01 compute-0 nova_compute[243452]: 2026-02-28 10:18:01.549 243456 WARNING nova.compute.manager [req-863afd31-fc63-42f4-b4aa-c9f4eb91d3ae req-0b3f03fe-905e-4f2f-a372-c83dd59455bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state active and task_state None.
Feb 28 10:18:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 0 B/s wr, 151 op/s
Feb 28 10:18:03 compute-0 nova_compute[243452]: 2026-02-28 10:18:03.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:03 compute-0 ceph-mon[76304]: pgmap v1570: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 0 B/s wr, 151 op/s
Feb 28 10:18:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:03 compute-0 nova_compute[243452]: 2026-02-28 10:18:03.871 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 0 B/s wr, 142 op/s
Feb 28 10:18:04 compute-0 sudo[321451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:18:04 compute-0 sudo[321451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:04 compute-0 sudo[321451]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:04 compute-0 sudo[321476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:18:04 compute-0 sudo[321476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:04 compute-0 ovn_controller[146846]: 2026-02-28T10:18:04Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:18:04 compute-0 nova_compute[243452]: 2026-02-28 10:18:04.881 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:04 compute-0 nova_compute[243452]: 2026-02-28 10:18:04.882 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:04 compute-0 nova_compute[243452]: 2026-02-28 10:18:04.902 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.010 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.011 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.029 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.030 243456 INFO nova.compute.claims [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:18:05 compute-0 sudo[321476]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.264 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:18:05 compute-0 sudo[321531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:18:05 compute-0 sudo[321531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:05 compute-0 sudo[321531]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:05 compute-0 sudo[321556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:18:05 compute-0 sudo[321556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: pgmap v1571: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 0 B/s wr, 142 op/s
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:18:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:18:05 compute-0 ovn_controller[146846]: 2026-02-28T10:18:05Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:83:cd 10.100.0.12
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.691475959 +0000 UTC m=+0.056370698 container create d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.650565333 +0000 UTC m=+0.015460072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/34064956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:05 compute-0 systemd[1]: Started libpod-conmon-d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25.scope.
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.822 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.827 243456 DEBUG nova.compute.provider_tree [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.840 243456 DEBUG nova.scheduler.client.report [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.855953117 +0000 UTC m=+0.220847876 container init d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.862127353 +0000 UTC m=+0.227022082 container start d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.863 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.867054063 +0000 UTC m=+0.231948812 container attach d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:18:05 compute-0 systemd[1]: libpod-d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25.scope: Deactivated successfully.
Feb 28 10:18:05 compute-0 inspiring_gagarin[321630]: 167 167
Feb 28 10:18:05 compute-0 conmon[321630]: conmon d5cd0a7a0cba825d31ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25.scope/container/memory.events
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.869226835 +0000 UTC m=+0.234121564 container died d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dba66b309f78181cc6ab24802030e8bd0cbf410f47130b29b040e43cae7f594-merged.mount: Deactivated successfully.
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.889 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "d470f09c-ecaf-492d-98ef-712a212b3436" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.890 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "d470f09c-ecaf-492d-98ef-712a212b3436" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.899 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "d470f09c-ecaf-492d-98ef-712a212b3436" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.900 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:18:05 compute-0 podman[321612]: 2026-02-28 10:18:05.907031243 +0000 UTC m=+0.271925962 container remove d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_gagarin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:18:05 compute-0 systemd[1]: libpod-conmon-d5cd0a7a0cba825d31aefd112e6dfb07732dafda12816401a5ff5ec3e8ef7e25.scope: Deactivated successfully.
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.947 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.948 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.967 243456 INFO nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:18:05 compute-0 nova_compute[243452]: 2026-02-28 10:18:05.994 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.050454301 +0000 UTC m=+0.055443072 container create f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.092 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:18:06 compute-0 systemd[1]: Started libpod-conmon-f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5.scope.
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.094 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.094 243456 INFO nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Creating image(s)
Feb 28 10:18:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.028906906 +0000 UTC m=+0.033895787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.120 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.139094397 +0000 UTC m=+0.144083218 container init f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.145730146 +0000 UTC m=+0.150718937 container start f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.149508174 +0000 UTC m=+0.154496965 container attach f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.159 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.194 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.199 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.264 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.266 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.266 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.267 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 12 KiB/s wr, 198 op/s
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.298 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.302 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5a892841-d20f-4443-9652-37674cdb0878_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.330 243456 DEBUG nova.policy [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39a5a5ba4d2d4c23a0ce8f1bd89eca09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e8f77fb8fb324fd68b3aa3716dff1c40', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:18:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/34064956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:06 compute-0 ceph-mon[76304]: pgmap v1572: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 12 KiB/s wr, 198 op/s
Feb 28 10:18:06 compute-0 interesting_ishizaka[321671]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:18:06 compute-0 interesting_ishizaka[321671]: --> All data devices are unavailable
Feb 28 10:18:06 compute-0 systemd[1]: libpod-f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5.scope: Deactivated successfully.
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.629894806 +0000 UTC m=+0.634883657 container died f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.637 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5a892841-d20f-4443-9652-37674cdb0878_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-227626d8f3d588f7e2c9a7e085a6876451c9e3ea4b0d4bc2458df0dbde6e02fb-merged.mount: Deactivated successfully.
Feb 28 10:18:06 compute-0 podman[321653]: 2026-02-28 10:18:06.690101732 +0000 UTC m=+0.695090543 container remove f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_ishizaka, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:18:06 compute-0 systemd[1]: libpod-conmon-f80156d0719b77c083b1f8715b0e912234914af5ce280d6dd6bd291bfe5c92e5.scope: Deactivated successfully.
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.727 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] resizing rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:18:06 compute-0 sudo[321556]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:06 compute-0 sudo[321851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:18:06 compute-0 sudo[321851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:06 compute-0 sudo[321851]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.825 243456 DEBUG nova.objects.instance [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a892841-d20f-4443-9652-37674cdb0878 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.846 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.846 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Ensure instance console log exists: /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.847 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.847 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:06 compute-0 nova_compute[243452]: 2026-02-28 10:18:06.848 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:06 compute-0 sudo[321896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:18:06 compute-0 sudo[321896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.189123696 +0000 UTC m=+0.067295899 container create fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:18:07 compute-0 systemd[1]: Started libpod-conmon-fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed.scope.
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.160747427 +0000 UTC m=+0.038919670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:07 compute-0 nova_compute[243452]: 2026-02-28 10:18:07.281 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Successfully created port: 8298d9ed-da74-4002-a6a1-7ad18f465609 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.285791581 +0000 UTC m=+0.163963824 container init fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.297155515 +0000 UTC m=+0.175327718 container start fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.300897032 +0000 UTC m=+0.179069215 container attach fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:18:07 compute-0 eager_turing[321949]: 167 167
Feb 28 10:18:07 compute-0 systemd[1]: libpod-fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed.scope: Deactivated successfully.
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.303239968 +0000 UTC m=+0.181412171 container died fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 28 10:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5396e3b0df2a5011a96658134023085c0b01787f2095d5cb5f9dcc69d2077337-merged.mount: Deactivated successfully.
Feb 28 10:18:07 compute-0 podman[321933]: 2026-02-28 10:18:07.354479489 +0000 UTC m=+0.232651692 container remove fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_turing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:18:07 compute-0 systemd[1]: libpod-conmon-fb9ab995a51397bb46f39418017b0e9fcdbb8d4341a0dfcc815cc8635ef7b1ed.scope: Deactivated successfully.
Feb 28 10:18:07 compute-0 podman[321972]: 2026-02-28 10:18:07.573238634 +0000 UTC m=+0.064249852 container create 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:18:07 compute-0 systemd[1]: Started libpod-conmon-064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553.scope.
Feb 28 10:18:07 compute-0 podman[321972]: 2026-02-28 10:18:07.548837709 +0000 UTC m=+0.039848977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b49d82f9c45b0bff2ce3402cbd30b5a2320ddcb11fe5c101644aecb6923fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b49d82f9c45b0bff2ce3402cbd30b5a2320ddcb11fe5c101644aecb6923fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b49d82f9c45b0bff2ce3402cbd30b5a2320ddcb11fe5c101644aecb6923fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b49d82f9c45b0bff2ce3402cbd30b5a2320ddcb11fe5c101644aecb6923fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:07 compute-0 podman[321972]: 2026-02-28 10:18:07.685838824 +0000 UTC m=+0.176850042 container init 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:18:07 compute-0 podman[321972]: 2026-02-28 10:18:07.69554481 +0000 UTC m=+0.186556028 container start 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:07 compute-0 podman[321972]: 2026-02-28 10:18:07.702381765 +0000 UTC m=+0.193392973 container attach 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]: {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     "0": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "devices": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "/dev/loop3"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             ],
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_name": "ceph_lv0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_size": "21470642176",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "name": "ceph_lv0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "tags": {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_name": "ceph",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.crush_device_class": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.encrypted": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.objectstore": "bluestore",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_id": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.vdo": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.with_tpm": "0"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             },
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "vg_name": "ceph_vg0"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         }
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     ],
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     "1": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "devices": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "/dev/loop4"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             ],
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_name": "ceph_lv1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_size": "21470642176",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "name": "ceph_lv1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "tags": {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_name": "ceph",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.crush_device_class": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.encrypted": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.objectstore": "bluestore",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_id": "1",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.vdo": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.with_tpm": "0"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             },
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "vg_name": "ceph_vg1"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         }
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     ],
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     "2": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "devices": [
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "/dev/loop5"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             ],
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_name": "ceph_lv2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_size": "21470642176",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "name": "ceph_lv2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "tags": {
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.cluster_name": "ceph",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.crush_device_class": "",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.encrypted": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.objectstore": "bluestore",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osd_id": "2",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.vdo": "0",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:                 "ceph.with_tpm": "0"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             },
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "type": "block",
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:             "vg_name": "ceph_vg2"
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:         }
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]:     ]
Feb 28 10:18:07 compute-0 vigorous_shaw[321989]: }
Feb 28 10:18:08 compute-0 systemd[1]: libpod-064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553.scope: Deactivated successfully.
Feb 28 10:18:08 compute-0 conmon[321989]: conmon 064193d1cd41fd9a4f1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553.scope/container/memory.events
Feb 28 10:18:08 compute-0 podman[321972]: 2026-02-28 10:18:08.012238786 +0000 UTC m=+0.503250064 container died 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b49d82f9c45b0bff2ce3402cbd30b5a2320ddcb11fe5c101644aecb6923fd4-merged.mount: Deactivated successfully.
Feb 28 10:18:08 compute-0 podman[321972]: 2026-02-28 10:18:08.076708963 +0000 UTC m=+0.567720161 container remove 064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_shaw, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:18:08 compute-0 systemd[1]: libpod-conmon-064193d1cd41fd9a4f1bd13cec067c92f60dfc45e13464e79e3cd2cfccac2553.scope: Deactivated successfully.
Feb 28 10:18:08 compute-0 sudo[321896]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:08 compute-0 sudo[322009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:18:08 compute-0 sudo[322009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:08 compute-0 sudo[322009]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:08 compute-0 sudo[322034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:18:08 compute-0 sudo[322034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 327 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 511 KiB/s wr, 170 op/s
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.336 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Successfully updated port: 8298d9ed-da74-4002-a6a1-7ad18f465609 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.352 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.352 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquired lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.352 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.572950738 +0000 UTC m=+0.056029959 container create 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:18:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:08 compute-0 systemd[1]: Started libpod-conmon-815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b.scope.
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.545778363 +0000 UTC m=+0.028857624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:08 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.660 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.675245133 +0000 UTC m=+0.158324364 container init 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.683746266 +0000 UTC m=+0.166825477 container start 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:18:08 compute-0 fervent_brahmagupta[322087]: 167 167
Feb 28 10:18:08 compute-0 systemd[1]: libpod-815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b.scope: Deactivated successfully.
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.689239632 +0000 UTC m=+0.172318843 container attach 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.689948592 +0000 UTC m=+0.173027813 container died 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a239051d39e393d8370a4e3309518c8beb918e9e75a42e8a95902c6973aaba21-merged.mount: Deactivated successfully.
Feb 28 10:18:08 compute-0 podman[322071]: 2026-02-28 10:18:08.739240547 +0000 UTC m=+0.222319758 container remove 815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:18:08 compute-0 systemd[1]: libpod-conmon-815bf7b2cc56112caa5ac21a33922e67c6d04227f1cc2b94ba601498fc2c186b.scope: Deactivated successfully.
Feb 28 10:18:08 compute-0 nova_compute[243452]: 2026-02-28 10:18:08.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:08 compute-0 podman[322113]: 2026-02-28 10:18:08.943384216 +0000 UTC m=+0.061214626 container create 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:18:09 compute-0 systemd[1]: Started libpod-conmon-76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d.scope.
Feb 28 10:18:09 compute-0 podman[322113]: 2026-02-28 10:18:08.919023421 +0000 UTC m=+0.036853851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:18:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40244457df69476faebaea1e81793c9d6a5d998c96f0f5231980905fc6bf69e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40244457df69476faebaea1e81793c9d6a5d998c96f0f5231980905fc6bf69e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40244457df69476faebaea1e81793c9d6a5d998c96f0f5231980905fc6bf69e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40244457df69476faebaea1e81793c9d6a5d998c96f0f5231980905fc6bf69e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:09 compute-0 nova_compute[243452]: 2026-02-28 10:18:09.079 243456 DEBUG nova.compute.manager [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-changed-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:09 compute-0 nova_compute[243452]: 2026-02-28 10:18:09.082 243456 DEBUG nova.compute.manager [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Refreshing instance network info cache due to event network-changed-8298d9ed-da74-4002-a6a1-7ad18f465609. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:18:09 compute-0 nova_compute[243452]: 2026-02-28 10:18:09.083 243456 DEBUG oslo_concurrency.lockutils [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:09 compute-0 podman[322113]: 2026-02-28 10:18:09.087133663 +0000 UTC m=+0.204964093 container init 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:18:09 compute-0 podman[322113]: 2026-02-28 10:18:09.096913082 +0000 UTC m=+0.214743472 container start 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:18:09 compute-0 podman[322113]: 2026-02-28 10:18:09.100685609 +0000 UTC m=+0.218516029 container attach 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:18:09 compute-0 ceph-mon[76304]: pgmap v1573: 305 pgs: 305 active+clean; 327 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 511 KiB/s wr, 170 op/s
Feb 28 10:18:09 compute-0 lvm[322209]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:18:09 compute-0 lvm[322208]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:18:09 compute-0 lvm[322209]: VG ceph_vg1 finished
Feb 28 10:18:09 compute-0 lvm[322208]: VG ceph_vg0 finished
Feb 28 10:18:09 compute-0 lvm[322211]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:18:09 compute-0 lvm[322211]: VG ceph_vg2 finished
Feb 28 10:18:09 compute-0 tender_gould[322130]: {}
Feb 28 10:18:09 compute-0 systemd[1]: libpod-76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d.scope: Deactivated successfully.
Feb 28 10:18:09 compute-0 systemd[1]: libpod-76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d.scope: Consumed 1.195s CPU time.
Feb 28 10:18:09 compute-0 podman[322214]: 2026-02-28 10:18:09.953846577 +0000 UTC m=+0.035627147 container died 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e40244457df69476faebaea1e81793c9d6a5d998c96f0f5231980905fc6bf69e-merged.mount: Deactivated successfully.
Feb 28 10:18:10 compute-0 podman[322214]: 2026-02-28 10:18:10.001224027 +0000 UTC m=+0.083004587 container remove 76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:10 compute-0 systemd[1]: libpod-conmon-76567d929490625f1e9f238f4bf8f4af012fb6a95ec27455ed7cd2cf073c063d.scope: Deactivated successfully.
Feb 28 10:18:10 compute-0 sudo[322034]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:18:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:18:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:10 compute-0 sudo[322229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:18:10 compute-0 sudo[322229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:18:10 compute-0 sudo[322229]: pam_unix(sudo:session): session closed for user root
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.216 243456 DEBUG nova.network.neutron [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Updating instance_info_cache with network_info: [{"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.242 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Releasing lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.243 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Instance network_info: |[{"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.245 243456 DEBUG oslo_concurrency.lockutils [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.245 243456 DEBUG nova.network.neutron [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Refreshing network info cache for port 8298d9ed-da74-4002-a6a1-7ad18f465609 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.250 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Start _get_guest_xml network_info=[{"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.260 243456 WARNING nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.271 243456 DEBUG nova.virt.libvirt.host [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.273 243456 DEBUG nova.virt.libvirt.host [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.279 243456 DEBUG nova.virt.libvirt.host [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.280 243456 DEBUG nova.virt.libvirt.host [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.281 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.281 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.282 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.282 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.283 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.283 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.284 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.284 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.284 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.285 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.286 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.287 243456 DEBUG nova.virt.hardware [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:18:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 345 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 146 op/s
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.292 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137294458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.881 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.907 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:10 compute-0 nova_compute[243452]: 2026-02-28 10:18:10.912 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:10 compute-0 sshd-session[322103]: Received disconnect from 103.67.78.202 port 57768:11: Bye Bye [preauth]
Feb 28 10:18:10 compute-0 sshd-session[322103]: Disconnected from authenticating user root 103.67.78.202 port 57768 [preauth]
Feb 28 10:18:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:18:11 compute-0 ceph-mon[76304]: pgmap v1574: 305 pgs: 305 active+clean; 345 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 146 op/s
Feb 28 10:18:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2137294458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:11 compute-0 podman[322297]: 2026-02-28 10:18:11.124195675 +0000 UTC m=+0.059664072 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:18:11 compute-0 podman[322295]: 2026-02-28 10:18:11.182816156 +0000 UTC m=+0.119674712 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:18:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167751933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.488 243456 DEBUG nova.virt.libvirt.driver [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.498 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.500 243456 DEBUG nova.virt.libvirt.vif [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-355944121',display_name='tempest-ServerGroupTestJSON-server-355944121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-355944121',id=92,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e8f77fb8fb324fd68b3aa3716dff1c40',ramdisk_id='',reservation_id='r-phewxdff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1228589944',owner_user_name='tempest-ServerGroupTestJSON-1228589944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:06Z,user_data=None,user_id='39a5a5ba4d2d4c23a0ce8f1bd89eca09',uuid=5a892841-d20f-4443-9652-37674cdb0878,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.500 243456 DEBUG nova.network.os_vif_util [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converting VIF {"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.501 243456 DEBUG nova.network.os_vif_util [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.503 243456 DEBUG nova.objects.instance [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a892841-d20f-4443-9652-37674cdb0878 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.526 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <uuid>5a892841-d20f-4443-9652-37674cdb0878</uuid>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <name>instance-0000005c</name>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerGroupTestJSON-server-355944121</nova:name>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:18:10</nova:creationTime>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:user uuid="39a5a5ba4d2d4c23a0ce8f1bd89eca09">tempest-ServerGroupTestJSON-1228589944-project-member</nova:user>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:project uuid="e8f77fb8fb324fd68b3aa3716dff1c40">tempest-ServerGroupTestJSON-1228589944</nova:project>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <nova:port uuid="8298d9ed-da74-4002-a6a1-7ad18f465609">
Feb 28 10:18:11 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <system>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="serial">5a892841-d20f-4443-9652-37674cdb0878</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="uuid">5a892841-d20f-4443-9652-37674cdb0878</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </system>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <os>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </os>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <features>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </features>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5a892841-d20f-4443-9652-37674cdb0878_disk">
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5a892841-d20f-4443-9652-37674cdb0878_disk.config">
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3d:f2:ee"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <target dev="tap8298d9ed-da"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/console.log" append="off"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <video>
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </video>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:18:11 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:18:11 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:18:11 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:18:11 compute-0 nova_compute[243452]: </domain>
Feb 28 10:18:11 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.527 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Preparing to wait for external event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.527 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.527 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.527 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.528 243456 DEBUG nova.virt.libvirt.vif [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-355944121',display_name='tempest-ServerGroupTestJSON-server-355944121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-355944121',id=92,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e8f77fb8fb324fd68b3aa3716dff1c40',ramdisk_id='',reservation_id='r-phewxdff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1228589944',owner_user_name='tempest-ServerGroupTestJSON-1228589944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:06Z,user_data=None,user_id='39a5a5ba4d2d4c23a0ce8f1bd89eca09',uuid=5a892841-d20f-4443-9652-37674cdb0878,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.528 243456 DEBUG nova.network.os_vif_util [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converting VIF {"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.529 243456 DEBUG nova.network.os_vif_util [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.529 243456 DEBUG os_vif [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.530 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.530 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.534 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8298d9ed-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.535 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8298d9ed-da, col_values=(('external_ids', {'iface-id': '8298d9ed-da74-4002-a6a1-7ad18f465609', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:f2:ee', 'vm-uuid': '5a892841-d20f-4443-9652-37674cdb0878'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:11 compute-0 NetworkManager[49805]: <info>  [1772273891.5380] manager: (tap8298d9ed-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.544 243456 INFO os_vif [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da')
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.611 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.611 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.611 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] No VIF found with MAC fa:16:3e:3d:f2:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.612 243456 INFO nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Using config drive
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.633 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.642 243456 INFO nova.compute.manager [None req-1b717dff-1c48-4e46-b4c8-f7dfb57a66d4 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Get console output
Feb 28 10:18:11 compute-0 nova_compute[243452]: 2026-02-28 10:18:11.647 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:18:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3167751933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.380 243456 INFO nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Creating config drive at /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.383 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqdfoylf1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.513 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqdfoylf1" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.542 243456 DEBUG nova.storage.rbd_utils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] rbd image 5a892841-d20f-4443-9652-37674cdb0878_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.546 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config 5a892841-d20f-4443-9652-37674cdb0878_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.576 243456 DEBUG nova.network.neutron [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Updated VIF entry in instance network info cache for port 8298d9ed-da74-4002-a6a1-7ad18f465609. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.577 243456 DEBUG nova.network.neutron [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Updating instance_info_cache with network_info: [{"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.608 243456 DEBUG oslo_concurrency.lockutils [req-728e09d0-4f0f-4004-aa6d-7ee25459046b req-671a64e2-3643-4462-92f6-96703c50d793 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5a892841-d20f-4443-9652-37674cdb0878" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.701 243456 DEBUG oslo_concurrency.processutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config 5a892841-d20f-4443-9652-37674cdb0878_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.701 243456 INFO nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Deleting local config drive /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878/disk.config because it was imported into RBD.
Feb 28 10:18:12 compute-0 kernel: tap8298d9ed-da: entered promiscuous mode
Feb 28 10:18:12 compute-0 NetworkManager[49805]: <info>  [1772273892.7665] manager: (tap8298d9ed-da): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:12 compute-0 ovn_controller[146846]: 2026-02-28T10:18:12Z|00884|binding|INFO|Claiming lport 8298d9ed-da74-4002-a6a1-7ad18f465609 for this chassis.
Feb 28 10:18:12 compute-0 ovn_controller[146846]: 2026-02-28T10:18:12Z|00885|binding|INFO|8298d9ed-da74-4002-a6a1-7ad18f465609: Claiming fa:16:3e:3d:f2:ee 10.100.0.12
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.776 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f2:ee 10.100.0.12'], port_security=['fa:16:3e:3d:f2:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a892841-d20f-4443-9652-37674cdb0878', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e8f77fb8fb324fd68b3aa3716dff1c40', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa3824b2-c81e-45b4-b804-91656f397d6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97fba5f6-13cc-48e6-ac38-00390081a0cd, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8298d9ed-da74-4002-a6a1-7ad18f465609) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.778 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8298d9ed-da74-4002-a6a1-7ad18f465609 in datapath 6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 bound to our chassis
Feb 28 10:18:12 compute-0 ovn_controller[146846]: 2026-02-28T10:18:12Z|00886|binding|INFO|Setting lport 8298d9ed-da74-4002-a6a1-7ad18f465609 ovn-installed in OVS
Feb 28 10:18:12 compute-0 ovn_controller[146846]: 2026-02-28T10:18:12Z|00887|binding|INFO|Setting lport 8298d9ed-da74-4002-a6a1-7ad18f465609 up in Southbound
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.781 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5
Feb 28 10:18:12 compute-0 nova_compute[243452]: 2026-02-28 10:18:12.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d4b340-fa34-4851-9c1a-e34b512cb3e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.797 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fa3b1d7-21 in ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.801 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fa3b1d7-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.801 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f85eb-0bc4-4cc8-8dd7-b97be4f388e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.802 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[95fce2fc-d7c4-4c11-ab87-d836996c1464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 systemd-machined[209480]: New machine qemu-111-instance-0000005c.
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.819 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[22144cfd-f9ce-4d7d-be2f-92145b632d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005c.
Feb 28 10:18:12 compute-0 systemd-udevd[322438]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.838 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[716fdc98-1acf-480a-8c46-beea091d7853]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 NetworkManager[49805]: <info>  [1772273892.8560] device (tap8298d9ed-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:18:12 compute-0 NetworkManager[49805]: <info>  [1772273892.8574] device (tap8298d9ed-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.868 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[158aca2e-4f04-480e-95b7-4bc112bbe151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 NetworkManager[49805]: <info>  [1772273892.8765] manager: (tap6fa3b1d7-20): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.876 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba714b7-257f-4ea6-ba3b-44125361ee90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.905 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a42cf9bd-8916-477f-a2da-18690fbf8ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.909 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9414d9-e817-4e56-b972-51e7fc784c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 NetworkManager[49805]: <info>  [1772273892.9286] device (tap6fa3b1d7-20): carrier: link connected
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.932 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2700bb7c-9642-4ebe-a37e-8f57d6ef1cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.948 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[558c762a-9940-422d-8005-4e8bc4634cd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa3b1d7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540021, 'reachable_time': 41800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322468, 'error': None, 'target': 'ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6908c367-171b-48b8-9718-23cb9751b471]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540021, 'tstamp': 540021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322469, 'error': None, 'target': 'ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:12.987 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f79a116d-b1e5-4236-a009-aedcf52caeb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fa3b1d7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540021, 'reachable_time': 41800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322470, 'error': None, 'target': 'ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.017 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[35fbab8b-827f-4ce6-ac01-6718c0314741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.075 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa65b0b-4331-43fc-8abd-9a119fb9131c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.077 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa3b1d7-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.077 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.078 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fa3b1d7-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:13 compute-0 NetworkManager[49805]: <info>  [1772273893.0814] manager: (tap6fa3b1d7-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Feb 28 10:18:13 compute-0 kernel: tap6fa3b1d7-20: entered promiscuous mode
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.094 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fa3b1d7-20, col_values=(('external_ids', {'iface-id': 'e38630d3-71c5-4fcb-a9e2-406d8e87b72e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00888|binding|INFO|Releasing lport e38630d3-71c5-4fcb-a9e2-406d8e87b72e from this chassis (sb_readonly=0)
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.108 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.109 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[602555d4-1b2f-430a-a742-64a6d79fc9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.110 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5.pid.haproxy
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.111 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'env', 'PROCESS_TAG=haproxy-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:18:13 compute-0 ceph-mon[76304]: pgmap v1575: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.261 243456 DEBUG nova.compute.manager [req-a14acffe-b128-4766-81ef-04b95eefafb3 req-47e2ea74-9a75-4525-9307-2fff6bc0c3e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.261 243456 DEBUG oslo_concurrency.lockutils [req-a14acffe-b128-4766-81ef-04b95eefafb3 req-47e2ea74-9a75-4525-9307-2fff6bc0c3e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.262 243456 DEBUG oslo_concurrency.lockutils [req-a14acffe-b128-4766-81ef-04b95eefafb3 req-47e2ea74-9a75-4525-9307-2fff6bc0c3e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.262 243456 DEBUG oslo_concurrency.lockutils [req-a14acffe-b128-4766-81ef-04b95eefafb3 req-47e2ea74-9a75-4525-9307-2fff6bc0c3e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.262 243456 DEBUG nova.compute.manager [req-a14acffe-b128-4766-81ef-04b95eefafb3 req-47e2ea74-9a75-4525-9307-2fff6bc0c3e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Processing event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.365 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.366 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.366 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.366 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.367 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.368 243456 INFO nova.compute.manager [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Terminating instance
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.369 243456 DEBUG nova.compute.manager [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:18:13 compute-0 kernel: tap9e45b488-45 (unregistering): left promiscuous mode
Feb 28 10:18:13 compute-0 NetworkManager[49805]: <info>  [1772273893.4199] device (tap9e45b488-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00889|binding|INFO|Releasing lport 9e45b488-45d1-4293-a1a6-7b01b726b58b from this chassis (sb_readonly=0)
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00890|binding|INFO|Setting lport 9e45b488-45d1-4293-a1a6-7b01b726b58b down in Southbound
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00891|binding|INFO|Removing iface tap9e45b488-45 ovn-installed in OVS
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.435 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:83:cd 10.100.0.12'], port_security=['fa:16:3e:bb:83:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5f8708bd-35d7-4952-ba18-0b6635872b86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6208c787-2d1b-4dd1-8098-37be7ada4419', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3b86456-762e-43a0-947f-ce5fc38977cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9e45b488-45d1-4293-a1a6-7b01b726b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:13 compute-0 podman[322502]: 2026-02-28 10:18:13.452935579 +0000 UTC m=+0.069307156 container create f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:18:13 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 28 10:18:13 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Consumed 12.418s CPU time.
Feb 28 10:18:13 compute-0 systemd[1]: Started libpod-conmon-f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50.scope.
Feb 28 10:18:13 compute-0 systemd-machined[209480]: Machine qemu-109-instance-00000059 terminated.
Feb 28 10:18:13 compute-0 podman[322502]: 2026-02-28 10:18:13.414090072 +0000 UTC m=+0.030461659 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:18:13 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4f4a97fd6cc6c4c929eb36d412e280a603953d9d48688e7c1d57d408af9209/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:13 compute-0 podman[322502]: 2026-02-28 10:18:13.523546552 +0000 UTC m=+0.139918109 container init f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:18:13 compute-0 podman[322502]: 2026-02-28 10:18:13.528579696 +0000 UTC m=+0.144951253 container start f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [NOTICE]   (322567) : New worker (322569) forked
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [NOTICE]   (322567) : Loading success.
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.586 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9e45b488-45d1-4293-a1a6-7b01b726b58b in datapath 88dbe3c2-5a58-4a5e-93e1-51b691c2901f unbound from our chassis
Feb 28 10:18:13 compute-0 NetworkManager[49805]: <info>  [1772273893.5889] manager: (tap9e45b488-45): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.588 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.589 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88dbe3c2-5a58-4a5e-93e1-51b691c2901f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.590 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273893.5900435, 5a892841-d20f-4443-9652-37674cdb0878 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.590 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] VM Started (Lifecycle Event)
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe1deb0-1096-41e6-954c-c3af57102bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.592 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f namespace which is not needed anymore
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.593 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.598 243456 INFO nova.virt.libvirt.driver [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Instance spawned successfully.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.599 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.605 243456 INFO nova.virt.libvirt.driver [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance destroyed successfully.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.605 243456 DEBUG nova.objects.instance [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid 5f8708bd-35d7-4952-ba18-0b6635872b86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.629 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.633 243456 DEBUG nova.virt.libvirt.vif [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2076056969',display_name='tempest-TestNetworkAdvancedServerOps-server-2076056969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2076056969',id=89,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJoDJeszbjfLiA8r7EvxipNNTd3Kd7FJumJ1jhIQwl69HS1653r1vVD8jlI2B8YSH2IOoSN/hqwS+59551ZK2ZQDz9Qcn6YlypHv+eriiRgIViYoZ/6DEE1ZesCiz+yyg==',key_name='tempest-TestNetworkAdvancedServerOps-291574014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-vdh7t6ga',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:17:53Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=5f8708bd-35d7-4952-ba18-0b6635872b86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.634 243456 DEBUG nova.network.os_vif_util [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "address": "fa:16:3e:bb:83:cd", "network": {"id": "88dbe3c2-5a58-4a5e-93e1-51b691c2901f", "bridge": "br-int", "label": "tempest-network-smoke--1244861554", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e45b488-45", "ovs_interfaceid": "9e45b488-45d1-4293-a1a6-7b01b726b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.635 243456 DEBUG nova.network.os_vif_util [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.636 243456 DEBUG os_vif [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.640 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e45b488-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.647 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.652 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.656 243456 INFO os_vif [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:83:cd,bridge_name='br-int',has_traffic_filtering=True,id=9e45b488-45d1-4293-a1a6-7b01b726b58b,network=Network(88dbe3c2-5a58-4a5e-93e1-51b691c2901f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e45b488-45')
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.680 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.681 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273893.5905876, 5a892841-d20f-4443-9652-37674cdb0878 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.681 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] VM Paused (Lifecycle Event)
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.683 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.684 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.684 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.684 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.685 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.685 243456 DEBUG nova.virt.libvirt.driver [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.724 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.729 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273893.5929413, 5a892841-d20f-4443-9652-37674cdb0878 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.729 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] VM Resumed (Lifecycle Event)
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [NOTICE]   (321440) : haproxy version is 2.8.14-c23fe91
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [NOTICE]   (321440) : path to executable is /usr/sbin/haproxy
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [ALERT]    (321440) : Current worker (321442) exited with code 143 (Terminated)
Feb 28 10:18:13 compute-0 neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f[321436]: [WARNING]  (321440) : All workers exited. Exiting... (0)
Feb 28 10:18:13 compute-0 systemd[1]: libpod-a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3.scope: Deactivated successfully.
Feb 28 10:18:13 compute-0 podman[322613]: 2026-02-28 10:18:13.744567272 +0000 UTC m=+0.058225501 container died a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.750 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.755 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.766 243456 INFO nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Took 7.67 seconds to spawn the instance on the hypervisor.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.767 243456 DEBUG nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1247daefa15ff2b01aab26a97071a132a1b94f0b4be586fbe0e2800e79c263a4-merged.mount: Deactivated successfully.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.787 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:13 compute-0 podman[322613]: 2026-02-28 10:18:13.795635537 +0000 UTC m=+0.109293756 container cleanup a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:18:13 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:18:13 compute-0 NetworkManager[49805]: <info>  [1772273893.8006] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:18:13 compute-0 systemd[1]: libpod-conmon-a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3.scope: Deactivated successfully.
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00892|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00893|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.809 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_controller[146846]: 2026-02-28T10:18:13Z|00894|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.817 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '6', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.846 243456 INFO nova.compute.manager [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Took 8.89 seconds to build instance.
Feb 28 10:18:13 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:18:13 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Consumed 12.268s CPU time.
Feb 28 10:18:13 compute-0 systemd-machined[209480]: Machine qemu-110-instance-0000005a terminated.
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-1a070fba-39f8-4607-90ae-26612af518a0 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 podman[322657]: 2026-02-28 10:18:13.89430017 +0000 UTC m=+0.072435026 container remove a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.901 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a86db2a-4b46-4833-a433-3f8afd012dcb]: (4, ('Sat Feb 28 10:18:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f (a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3)\na66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3\nSat Feb 28 10:18:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f (a66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3)\na66e28a1c8833dfe3b0a8334dda895ded66019da0c327d2ece44fda318a2f3d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acec7be5-0684-4aca-b3b2-6db05aedba0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.904 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88dbe3c2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 kernel: tap88dbe3c2-50: left promiscuous mode
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 nova_compute[243452]: 2026-02-28 10:18:13.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.918 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6eb321d-c954-48d7-b6b6-5229f22bb53c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.934 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d37ae493-a2b8-4123-91b9-08ef15bd1109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63cd3274-ae08-4676-a43a-2b2db6c69a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.954 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[394c6b53-6b99-4dc4-9400-22eb29f4ad24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538155, 'reachable_time': 33583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322677, 'error': None, 'target': 'ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d88dbe3c2\x2d5a58\x2d4a5e\x2d93e1\x2d51b691c2901f.mount: Deactivated successfully.
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.957 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88dbe3c2-5a58-4a5e-93e1-51b691c2901f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.957 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e13e5b-258f-473c-abb0-f67a6491262c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.958 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.959 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.960 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea5885e-cc24-4d46-ae49-caaa13e08639]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:13.961 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:18:14 compute-0 NetworkManager[49805]: <info>  [1772273894.0247] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.028 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.032 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.069 243456 INFO nova.virt.libvirt.driver [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Deleting instance files /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86_del
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.070 243456 INFO nova.virt.libvirt.driver [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Deletion of /var/lib/nova/instances/5f8708bd-35d7-4952-ba18-0b6635872b86_del complete
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [NOTICE]   (321368) : haproxy version is 2.8.14-c23fe91
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [NOTICE]   (321368) : path to executable is /usr/sbin/haproxy
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [WARNING]  (321368) : Exiting Master process...
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [WARNING]  (321368) : Exiting Master process...
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [ALERT]    (321368) : Current worker (321370) exited with code 143 (Terminated)
Feb 28 10:18:14 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[321364]: [WARNING]  (321368) : All workers exited. Exiting... (0)
Feb 28 10:18:14 compute-0 systemd[1]: libpod-3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f.scope: Deactivated successfully.
Feb 28 10:18:14 compute-0 podman[322699]: 2026-02-28 10:18:14.095852234 +0000 UTC m=+0.044326884 container died 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3843ddc687e2bcaa38c72e0435ee3e9eeefc4775850644c85e94b2a3b4bb94b6-merged.mount: Deactivated successfully.
Feb 28 10:18:14 compute-0 podman[322699]: 2026-02-28 10:18:14.126388085 +0000 UTC m=+0.074862695 container cleanup 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.129 243456 INFO nova.compute.manager [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.130 243456 DEBUG oslo.service.loopingcall [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.130 243456 DEBUG nova.compute.manager [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.131 243456 DEBUG nova.network.neutron [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:18:14 compute-0 systemd[1]: libpod-conmon-3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f.scope: Deactivated successfully.
Feb 28 10:18:14 compute-0 podman[322729]: 2026-02-28 10:18:14.188514095 +0000 UTC m=+0.036642405 container remove 3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.192 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ca870d-73a3-461b-bb95-dc48f1f92144]: (4, ('Sat Feb 28 10:18:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f)\n3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f\nSat Feb 28 10:18:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f)\n3692037a852804603ec83fc6e8a8b73e4891a9d7e10979fc84c51e5433876a0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.193 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af5241e9-5de9-4b72-8b46-ff199f4f3451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:14 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78c4756c-bb8a-41d5-88e0-6eb90ec6da5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9a01c513-604f-417a-8b5a-979c4fe81ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.225 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0370a844-a36b-4ca1-8c93-eac0f861db8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.240 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e95009b-9dec-4174-a98f-68a2853e3939]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538064, 'reachable_time': 32156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322749, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.242 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:18:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:14.242 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[781678b2-1be6-4446-80c5-cf2091ee1608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 362 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.507 243456 INFO nova.virt.libvirt.driver [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance shutdown successfully after 13 seconds.
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.515 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.516 243456 DEBUG nova.objects.instance [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.533 243456 DEBUG nova.compute.manager [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:14 compute-0 nova_compute[243452]: 2026-02-28 10:18:14.579 243456 DEBUG oslo_concurrency.lockutils [None req-f3616c48-2fe9-41a0-a788-648cea1de3d0 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.110 243456 DEBUG nova.network.neutron [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.131 243456 INFO nova.compute.manager [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Took 1.00 seconds to deallocate network for instance.
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.193 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.195 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.310 243456 DEBUG oslo_concurrency.processutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:15 compute-0 sshd-session[322700]: Invalid user sol from 45.148.10.240 port 57932
Feb 28 10:18:15 compute-0 ceph-mon[76304]: pgmap v1576: 305 pgs: 305 active+clean; 362 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.427 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.430 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing instance network info cache due to event network-changed-9e45b488-45d1-4293-a1a6-7b01b726b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.432 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.432 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.433 243456 DEBUG nova.network.neutron [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Refreshing network info cache for port 9e45b488-45d1-4293-a1a6-7b01b726b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:18:15 compute-0 sshd-session[322700]: Connection closed by invalid user sol 45.148.10.240 port 57932 [preauth]
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.594 243456 DEBUG nova.network.neutron [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.681 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.682 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.683 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.683 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.683 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.685 243456 INFO nova.compute.manager [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Terminating instance
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.686 243456 DEBUG nova.compute.manager [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:18:15 compute-0 kernel: tap8298d9ed-da (unregistering): left promiscuous mode
Feb 28 10:18:15 compute-0 NetworkManager[49805]: <info>  [1772273895.7332] device (tap8298d9ed-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 ovn_controller[146846]: 2026-02-28T10:18:15Z|00895|binding|INFO|Releasing lport 8298d9ed-da74-4002-a6a1-7ad18f465609 from this chassis (sb_readonly=0)
Feb 28 10:18:15 compute-0 ovn_controller[146846]: 2026-02-28T10:18:15Z|00896|binding|INFO|Setting lport 8298d9ed-da74-4002-a6a1-7ad18f465609 down in Southbound
Feb 28 10:18:15 compute-0 ovn_controller[146846]: 2026-02-28T10:18:15Z|00897|binding|INFO|Removing iface tap8298d9ed-da ovn-installed in OVS
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:15.758 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:f2:ee 10.100.0.12'], port_security=['fa:16:3e:3d:f2:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a892841-d20f-4443-9652-37674cdb0878', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e8f77fb8fb324fd68b3aa3716dff1c40', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa3824b2-c81e-45b4-b804-91656f397d6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97fba5f6-13cc-48e6-ac38-00390081a0cd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8298d9ed-da74-4002-a6a1-7ad18f465609) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:15.760 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8298d9ed-da74-4002-a6a1-7ad18f465609 in datapath 6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 unbound from our chassis
Feb 28 10:18:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:15.762 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:18:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:15.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49e38530-b855-434b-a4f9-dc730efb9911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:15.765 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 namespace which is not needed anymore
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Feb 28 10:18:15 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005c.scope: Consumed 2.891s CPU time.
Feb 28 10:18:15 compute-0 systemd-machined[209480]: Machine qemu-111-instance-0000005c terminated.
Feb 28 10:18:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827773953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.888 243456 DEBUG oslo_concurrency.processutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.897 243456 DEBUG nova.compute.provider_tree [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:15 compute-0 NetworkManager[49805]: <info>  [1772273895.9139] manager: (tap8298d9ed-da): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.931 243456 DEBUG nova.scheduler.client.report [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.939 243456 INFO nova.virt.libvirt.driver [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Instance destroyed successfully.
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.940 243456 DEBUG nova.objects.instance [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lazy-loading 'resources' on Instance uuid 5a892841-d20f-4443-9652-37674cdb0878 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:15 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [NOTICE]   (322567) : haproxy version is 2.8.14-c23fe91
Feb 28 10:18:15 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [NOTICE]   (322567) : path to executable is /usr/sbin/haproxy
Feb 28 10:18:15 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [WARNING]  (322567) : Exiting Master process...
Feb 28 10:18:15 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [ALERT]    (322567) : Current worker (322569) exited with code 143 (Terminated)
Feb 28 10:18:15 compute-0 neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5[322561]: [WARNING]  (322567) : All workers exited. Exiting... (0)
Feb 28 10:18:15 compute-0 systemd[1]: libpod-f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50.scope: Deactivated successfully.
Feb 28 10:18:15 compute-0 podman[322794]: 2026-02-28 10:18:15.953633675 +0000 UTC m=+0.068626657 container died f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.951 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.955 243456 DEBUG nova.virt.libvirt.vif [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-355944121',display_name='tempest-ServerGroupTestJSON-server-355944121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-355944121',id=92,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e8f77fb8fb324fd68b3aa3716dff1c40',ramdisk_id='',reservation_id='r-phewxdff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1228589944',owner_user_name='tempest-ServerGroupTestJSON-1228589944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:13Z,user_data=None,user_id='39a5a5ba4d2d4c23a0ce8f1bd89eca09',uuid=5a892841-d20f-4443-9652-37674cdb0878,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.955 243456 DEBUG nova.network.os_vif_util [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converting VIF {"id": "8298d9ed-da74-4002-a6a1-7ad18f465609", "address": "fa:16:3e:3d:f2:ee", "network": {"id": "6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-2095706427-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e8f77fb8fb324fd68b3aa3716dff1c40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8298d9ed-da", "ovs_interfaceid": "8298d9ed-da74-4002-a6a1-7ad18f465609", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.957 243456 DEBUG nova.network.os_vif_util [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.957 243456 DEBUG os_vif [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.960 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8298d9ed-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.971 243456 INFO os_vif [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:f2:ee,bridge_name='br-int',has_traffic_filtering=True,id=8298d9ed-da74-4002-a6a1-7ad18f465609,network=Network(6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8298d9ed-da')
Feb 28 10:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50-userdata-shm.mount: Deactivated successfully.
Feb 28 10:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c4f4a97fd6cc6c4c929eb36d412e280a603953d9d48688e7c1d57d408af9209-merged.mount: Deactivated successfully.
Feb 28 10:18:15 compute-0 podman[322794]: 2026-02-28 10:18:15.996155907 +0000 UTC m=+0.111148879 container cleanup f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:18:15 compute-0 nova_compute[243452]: 2026-02-28 10:18:15.997 243456 INFO nova.scheduler.client.report [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance 5f8708bd-35d7-4952-ba18-0b6635872b86
Feb 28 10:18:16 compute-0 systemd[1]: libpod-conmon-f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50.scope: Deactivated successfully.
Feb 28 10:18:16 compute-0 podman[322854]: 2026-02-28 10:18:16.075044486 +0000 UTC m=+0.053459805 container remove f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.075 243456 DEBUG nova.network.neutron [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.080 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c02036-edb7-4bc0-8f5f-08c2ad7aebe1]: (4, ('Sat Feb 28 10:18:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 (f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50)\nf0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50\nSat Feb 28 10:18:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 (f0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50)\nf0d54328a4f162a02dbf06733616c5c0d1530e5189e6ef1f96a5d9ecbb60fa50\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.082 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[642b3fe1-099c-4ad9-bd63-db19126fe09d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.084 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fa3b1d7-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:16 compute-0 kernel: tap6fa3b1d7-20: left promiscuous mode
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.088 243456 DEBUG oslo_concurrency.lockutils [None req-975fa511-e0e4-4824-a405-5a98c81683a8 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a04367de-b582-4fae-95fb-6094b516f465]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.106 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5f8708bd-35d7-4952-ba18-0b6635872b86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.107 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.108 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.109 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.109 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.109 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] No waiting events found dispatching network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.109 243456 WARNING nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received unexpected event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 for instance with vm_state active and task_state None.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.110 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.110 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.110 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.110 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.110 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.111 243456 WARNING nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-unplugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state deleted and task_state None.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.111 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.111 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.111 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.111 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5f8708bd-35d7-4952-ba18-0b6635872b86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.112 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] No waiting events found dispatching network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.112 243456 WARNING nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received unexpected event network-vif-plugged-9e45b488-45d1-4293-a1a6-7b01b726b58b for instance with vm_state deleted and task_state None.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.112 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.112 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.112 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.113 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.113 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.113 243456 WARNING nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.113 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.114 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.114 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.114 243456 DEBUG oslo_concurrency.lockutils [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.114 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.114 243456 WARNING nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.116 243456 DEBUG nova.compute.manager [req-7c94d49e-52da-4d53-b74a-70a30a269763 req-1ba4e046-4943-4d20-9252-fcf5dea287b1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Received event network-vif-deleted-9e45b488-45d1-4293-a1a6-7b01b726b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.117 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db49d44f-bee6-42d5-9bcb-58a2318bc4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.119 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1043409-1fd9-4424-bd12-4db60bd56c79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.137 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c126e04-4099-4951-998a-ef650c2ee519]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540015, 'reachable_time': 38257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322876, 'error': None, 'target': 'ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.140 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fa3b1d7-2c3b-4eb9-a284-d82184dec4d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.140 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[56a3889a-ede3-4576-a0fc-d0d9ec38073e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d6fa3b1d7\x2d2c3b\x2d4eb9\x2da284\x2dd82184dec4d5.mount: Deactivated successfully.
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.285 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.286 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:16.287 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:18:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 293 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.299 243456 INFO nova.virt.libvirt.driver [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Deleting instance files /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878_del
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.300 243456 INFO nova.virt.libvirt.driver [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Deletion of /var/lib/nova/instances/5a892841-d20f-4443-9652-37674cdb0878_del complete
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.345 243456 INFO nova.compute.manager [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.346 243456 DEBUG oslo.service.loopingcall [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.347 243456 DEBUG nova.compute.manager [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.347 243456 DEBUG nova.network.neutron [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:18:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3827773953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.743 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.767 243456 DEBUG oslo_concurrency.lockutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.768 243456 DEBUG oslo_concurrency.lockutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.768 243456 DEBUG nova.network.neutron [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:18:16 compute-0 nova_compute[243452]: 2026-02-28 10:18:16.769 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.094 243456 DEBUG nova.network.neutron [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.108 243456 INFO nova.compute.manager [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Took 0.76 seconds to deallocate network for instance.
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.168 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.169 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:17.291 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.299 243456 DEBUG oslo_concurrency.processutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:17 compute-0 ceph-mon[76304]: pgmap v1577: 305 pgs: 305 active+clean; 293 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.521 243456 DEBUG nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-vif-unplugged-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.522 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.523 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.523 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.524 243456 DEBUG nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] No waiting events found dispatching network-vif-unplugged-8298d9ed-da74-4002-a6a1-7ad18f465609 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.524 243456 WARNING nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received unexpected event network-vif-unplugged-8298d9ed-da74-4002-a6a1-7ad18f465609 for instance with vm_state deleted and task_state None.
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.525 243456 DEBUG nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.525 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5a892841-d20f-4443-9652-37674cdb0878-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.525 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.526 243456 DEBUG oslo_concurrency.lockutils [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.526 243456 DEBUG nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] No waiting events found dispatching network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.527 243456 WARNING nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received unexpected event network-vif-plugged-8298d9ed-da74-4002-a6a1-7ad18f465609 for instance with vm_state deleted and task_state None.
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.527 243456 DEBUG nova.compute.manager [req-55ebecfb-f818-4b2f-9988-fb5ad28f1f3b req-e9129b8f-bff2-432b-8619-ead1ff899512 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Received event network-vif-deleted-8298d9ed-da74-4002-a6a1-7ad18f465609 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109441451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.934 243456 DEBUG oslo_concurrency.processutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.940 243456 DEBUG nova.compute.provider_tree [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:17 compute-0 nova_compute[243452]: 2026-02-28 10:18:17.956 243456 DEBUG nova.scheduler.client.report [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.010 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.090 243456 INFO nova.scheduler.client.report [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Deleted allocations for instance 5a892841-d20f-4443-9652-37674cdb0878
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.159 243456 DEBUG oslo_concurrency.lockutils [None req-53ee3a95-5fc7-4bb6-8cdb-7ae832c74d1e 39a5a5ba4d2d4c23a0ce8f1bd89eca09 e8f77fb8fb324fd68b3aa3716dff1c40 - - default default] Lock "5a892841-d20f-4443-9652-37674cdb0878" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 266 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Feb 28 10:18:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/109441451' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.442 243456 DEBUG nova.network.neutron [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.479 243456 DEBUG oslo_concurrency.lockutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.511 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.512 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.526 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.543 243456 DEBUG nova.virt.libvirt.vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.544 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.546 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.547 243456 DEBUG os_vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.550 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.561 243456 INFO os_vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.572 243456 DEBUG nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.576 243456 WARNING nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.581 243456 DEBUG nova.virt.libvirt.host [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.581 243456 DEBUG nova.virt.libvirt.host [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.585 243456 DEBUG nova.virt.libvirt.host [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.585 243456 DEBUG nova.virt.libvirt.host [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.586 243456 DEBUG nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.586 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.587 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.587 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.587 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.587 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.588 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.588 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.588 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.588 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.589 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.589 243456 DEBUG nova.virt.hardware [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.589 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.613 243456 DEBUG oslo_concurrency.processutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:18 compute-0 nova_compute[243452]: 2026-02-28 10:18:18.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974765483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.214 243456 DEBUG oslo_concurrency.processutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.243 243456 DEBUG oslo_concurrency.processutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:19 compute-0 ceph-mon[76304]: pgmap v1578: 305 pgs: 305 active+clean; 266 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Feb 28 10:18:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3974765483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832787225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.804 243456 DEBUG oslo_concurrency.processutils [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.807 243456 DEBUG nova.virt.libvirt.vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.808 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.810 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.812 243456 DEBUG nova.objects.instance [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.834 243456 DEBUG nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <name>instance-0000005a</name>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:18:18</nova:creationTime>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 10:18:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f6:05:21"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <target dev="taped25d1f8-c3"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:18:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:18:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:18:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:18:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:18:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.837 243456 DEBUG nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.838 243456 DEBUG nova.virt.libvirt.driver [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.839 243456 DEBUG nova.virt.libvirt.vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.840 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.841 243456 DEBUG nova.network.os_vif_util [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.843 243456 DEBUG os_vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.849 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.850 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 NetworkManager[49805]: <info>  [1772273899.8530] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.858 243456 INFO os_vif [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:18:19 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:18:19 compute-0 NetworkManager[49805]: <info>  [1772273899.9386] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Feb 28 10:18:19 compute-0 ovn_controller[146846]: 2026-02-28T10:18:19Z|00898|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:18:19 compute-0 ovn_controller[146846]: 2026-02-28T10:18:19Z|00899|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.944 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 nova_compute[243452]: 2026-02-28 10:18:19.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:19 compute-0 NetworkManager[49805]: <info>  [1772273899.9512] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Feb 28 10:18:19 compute-0 NetworkManager[49805]: <info>  [1772273899.9530] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.953 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '7', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.954 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.956 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.968 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d29dfbf-8be4-46c1-b789-bafab3c05f8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.969 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:18:19 compute-0 systemd-machined[209480]: New machine qemu-112-instance-0000005a.
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.972 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5647514-7746-41e0-80e5-fbbd7216a2c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.973 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d262f238-bb0f-4057-acc1-02463114c258]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:19.986 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cde2ca6f-33d6-4534-bb17-c3c5296e2750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005a.
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.007 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4da6c5-7aac-403a-a331-e6b129f58e71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 systemd-udevd[322980]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 NetworkManager[49805]: <info>  [1772273900.0228] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:18:20 compute-0 NetworkManager[49805]: <info>  [1772273900.0248] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 ovn_controller[146846]: 2026-02-28T10:18:20Z|00900|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:18:20 compute-0 ovn_controller[146846]: 2026-02-28T10:18:20Z|00901|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.037 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b35ebd65-1211-451b-ba3e-a0119c5d6343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 systemd-udevd[322984]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[add9bbe8-8136-4084-90d6-bac9e854381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 NetworkManager[49805]: <info>  [1772273900.0443] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.073 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8760e0ef-39d5-464d-b78c-2fe436172223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.075 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[704f68e0-8afa-4026-aa9a-86ce751e145a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 NetworkManager[49805]: <info>  [1772273900.0932] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.098 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[350c583a-e832-4a1d-ba3c-59767ea8d47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.115 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7afd98b0-7079-4f32-b542-bb52039bbfc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540737, 'reachable_time': 24741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323010, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.133 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[525f0527-8a71-48d2-a206-aeacfa8cc59f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540737, 'tstamp': 540737}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323011, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1dca7ea1-c1a6-4574-9383-e858b0cda510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540737, 'reachable_time': 24741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323012, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.186 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e09ae7d9-b0b9-4a53-992f-4ba67c7d454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0e2102-656b-4e57-8533-f55521b0c106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.237 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.238 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:20 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 NetworkManager[49805]: <info>  [1772273900.2427] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.244 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:20 compute-0 ovn_controller[146846]: 2026-02-28T10:18:20Z|00902|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.247 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e88e3f-835e-4242-b37e-2a05e10eaad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.249 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:18:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:20.249 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.255 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 248 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 153 op/s
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.318 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.318 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273900.3179843, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.319 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.321 243456 DEBUG nova.compute.manager [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.325 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance rebooted successfully.
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.326 243456 DEBUG nova.compute.manager [None req-5b3d2fd8-d915-4f1b-80c6-c6100dfec59d b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.373 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.377 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.409 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (powering-on). Skip.
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.409 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273900.3207643, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.410 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:18:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/832787225' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.451 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.454 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.503 243456 DEBUG nova.compute.manager [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.505 243456 DEBUG oslo_concurrency.lockutils [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.505 243456 DEBUG oslo_concurrency.lockutils [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.506 243456 DEBUG oslo_concurrency.lockutils [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.506 243456 DEBUG nova.compute.manager [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:20 compute-0 nova_compute[243452]: 2026-02-28 10:18:20.507 243456 WARNING nova.compute.manager [req-fafd5654-726f-4a03-a99c-74ba0a91af0d req-b1e69be3-c498-48dd-bdda-18f6d53140a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:18:20 compute-0 podman[323086]: 2026-02-28 10:18:20.607253056 +0000 UTC m=+0.050227503 container create 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:18:20 compute-0 systemd[1]: Started libpod-conmon-234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b.scope.
Feb 28 10:18:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:20 compute-0 podman[323086]: 2026-02-28 10:18:20.589552471 +0000 UTC m=+0.032526938 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:18:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ad5d6630a9c99b49b147c49c62d20f88de2afe5884d11538ffa43068afe063/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:20 compute-0 podman[323086]: 2026-02-28 10:18:20.7003617 +0000 UTC m=+0.143336167 container init 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:18:20 compute-0 podman[323086]: 2026-02-28 10:18:20.704167748 +0000 UTC m=+0.147142205 container start 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:18:20 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [NOTICE]   (323105) : New worker (323107) forked
Feb 28 10:18:20 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [NOTICE]   (323105) : Loading success.
Feb 28 10:18:21 compute-0 nova_compute[243452]: 2026-02-28 10:18:21.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:21 compute-0 ceph-mon[76304]: pgmap v1579: 305 pgs: 305 active+clean; 248 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 153 op/s
Feb 28 10:18:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 705 KiB/s wr, 136 op/s
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:22 compute-0 ovn_controller[146846]: 2026-02-28T10:18:22Z|00903|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:22 compute-0 ceph-mon[76304]: pgmap v1580: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 705 KiB/s wr, 136 op/s
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.627 243456 DEBUG nova.compute.manager [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.628 243456 DEBUG oslo_concurrency.lockutils [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.628 243456 DEBUG oslo_concurrency.lockutils [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.629 243456 DEBUG oslo_concurrency.lockutils [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.629 243456 DEBUG nova.compute.manager [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:22 compute-0 nova_compute[243452]: 2026-02-28 10:18:22.629 243456 WARNING nova.compute.manager [req-573ebd91-3136-4630-a218-64ee15e55275 req-fdc8410b-bf6b-4e38-95c5-5ec951fb9217 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:18:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:23 compute-0 nova_compute[243452]: 2026-02-28 10:18:23.879 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 50 KiB/s wr, 167 op/s
Feb 28 10:18:24 compute-0 nova_compute[243452]: 2026-02-28 10:18:24.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:24 compute-0 nova_compute[243452]: 2026-02-28 10:18:24.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:24 compute-0 nova_compute[243452]: 2026-02-28 10:18:24.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:18:25 compute-0 ceph-mon[76304]: pgmap v1581: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 50 KiB/s wr, 167 op/s
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.836 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.837 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.837 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:18:25 compute-0 nova_compute[243452]: 2026-02-28 10:18:25.838 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.216 243456 INFO nova.compute.manager [None req-88817112-ca0c-46aa-bea0-76edb2d1c003 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Pausing
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.217 243456 DEBUG nova.objects.instance [None req-88817112-ca0c-46aa-bea0-76edb2d1c003 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.243 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273906.2438192, 690896df-6307-469c-9685-325a61a62b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.244 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Paused (Lifecycle Event)
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.246 243456 DEBUG nova.compute.manager [None req-88817112-ca0c-46aa-bea0-76edb2d1c003 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.268 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.271 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 24 KiB/s wr, 200 op/s
Feb 28 10:18:26 compute-0 nova_compute[243452]: 2026-02-28 10:18:26.308 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 28 10:18:27 compute-0 ceph-mon[76304]: pgmap v1582: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 24 KiB/s wr, 200 op/s
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.889 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.911 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.912 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.912 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.952 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.953 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.953 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.953 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:18:27 compute-0 nova_compute[243452]: 2026-02-28 10:18:27.954 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.3 KiB/s wr, 129 op/s
Feb 28 10:18:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239342201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.528 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.621 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.621 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.789 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273893.596977, 5f8708bd-35d7-4952-ba18-0b6635872b86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.789 243456 INFO nova.compute.manager [-] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] VM Stopped (Lifecycle Event)
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.791 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.792 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.94198196474463GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.792 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.792 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.826 243456 DEBUG nova.compute.manager [None req-f71a539a-3e21-4ad6-b700-da2559a6e04d - - - - - -] [instance: 5f8708bd-35d7-4952-ba18-0b6635872b86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.884 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 690896df-6307-469c-9685-325a61a62b88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.884 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.884 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:18:28 compute-0 nova_compute[243452]: 2026-02-28 10:18:28.922 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:18:29
Feb 28 10:18:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:18:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:18:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 28 10:18:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:18:29 compute-0 ceph-mon[76304]: pgmap v1583: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.3 KiB/s wr, 129 op/s
Feb 28 10:18:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4239342201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.473 243456 INFO nova.compute.manager [None req-2ad846fb-86f0-4664-81cd-0ad8d13f2313 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Unpausing
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.474 243456 DEBUG nova.objects.instance [None req-2ad846fb-86f0-4664-81cd-0ad8d13f2313 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741678846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.495 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.504 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273909.5037575, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.504 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.507 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:29 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.510 243456 DEBUG nova.virt.libvirt.guest [None req-2ad846fb-86f0-4664-81cd-0ad8d13f2313 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.511 243456 DEBUG nova.compute.manager [None req-2ad846fb-86f0-4664-81cd-0ad8d13f2313 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.525 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.530 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.534 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.560 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.560 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.565 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.967 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:18:29 compute-0 nova_compute[243452]: 2026-02-28 10:18:29.968 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1023 B/s wr, 94 op/s
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:18:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3741678846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:18:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:18:30 compute-0 nova_compute[243452]: 2026-02-28 10:18:30.937 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273895.9358487, 5a892841-d20f-4443-9652-37674cdb0878 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:30 compute-0 nova_compute[243452]: 2026-02-28 10:18:30.937 243456 INFO nova.compute.manager [-] [instance: 5a892841-d20f-4443-9652-37674cdb0878] VM Stopped (Lifecycle Event)
Feb 28 10:18:30 compute-0 nova_compute[243452]: 2026-02-28 10:18:30.954 243456 DEBUG nova.compute.manager [None req-3831da1c-c840-429f-ad35-abd06922d61e - - - - - -] [instance: 5a892841-d20f-4443-9652-37674cdb0878] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:31 compute-0 nova_compute[243452]: 2026-02-28 10:18:31.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:31 compute-0 ceph-mon[76304]: pgmap v1584: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1023 B/s wr, 94 op/s
Feb 28 10:18:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 72 op/s
Feb 28 10:18:33 compute-0 ceph-mon[76304]: pgmap v1585: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 72 op/s
Feb 28 10:18:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:33 compute-0 nova_compute[243452]: 2026-02-28 10:18:33.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.166 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.167 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.187 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.259 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.260 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.270 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.270 243456 INFO nova.compute.claims [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:18:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.395 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:34 compute-0 ovn_controller[146846]: 2026-02-28T10:18:34Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.856 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704871245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.976 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:34 compute-0 nova_compute[243452]: 2026-02-28 10:18:34.983 243456 DEBUG nova.compute.provider_tree [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.005 243456 DEBUG nova.scheduler.client.report [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.034 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.036 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.096 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.097 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.118 243456 INFO nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.138 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.241 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.243 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.243 243456 INFO nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Creating image(s)
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.270 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.299 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.324 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.328 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.408 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.410 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.411 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.411 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:35 compute-0 ceph-mon[76304]: pgmap v1586: 305 pgs: 305 active+clean; 235 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Feb 28 10:18:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2704871245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.448 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.452 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.623 243456 DEBUG nova.policy [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99530c323188499c8d0e75b8edf1f77b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c568ca6a09a48c1a1197267be4d4583', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.687 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.753 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] resizing rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.832 243456 DEBUG nova.objects.instance [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'migration_context' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.848 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.848 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Ensure instance console log exists: /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.848 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.849 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:35 compute-0 nova_compute[243452]: 2026-02-28 10:18:35.849 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 252 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 481 KiB/s wr, 70 op/s
Feb 28 10:18:36 compute-0 nova_compute[243452]: 2026-02-28 10:18:36.954 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Successfully created port: 6912d1ef-9679-45b5-ae80-a91f63ecce55 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:18:37 compute-0 ceph-mon[76304]: pgmap v1587: 305 pgs: 305 active+clean; 252 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 481 KiB/s wr, 70 op/s
Feb 28 10:18:37 compute-0 nova_compute[243452]: 2026-02-28 10:18:37.878 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Successfully updated port: 6912d1ef-9679-45b5-ae80-a91f63ecce55 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:18:37 compute-0 nova_compute[243452]: 2026-02-28 10:18:37.898 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:37 compute-0 nova_compute[243452]: 2026-02-28 10:18:37.898 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:37 compute-0 nova_compute[243452]: 2026-02-28 10:18:37.899 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:18:38 compute-0 nova_compute[243452]: 2026-02-28 10:18:38.105 243456 DEBUG nova.compute.manager [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:38 compute-0 nova_compute[243452]: 2026-02-28 10:18:38.105 243456 DEBUG nova.compute.manager [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing instance network info cache due to event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:18:38 compute-0 nova_compute[243452]: 2026-02-28 10:18:38.106 243456 DEBUG oslo_concurrency.lockutils [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:38 compute-0 nova_compute[243452]: 2026-02-28 10:18:38.276 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:18:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 264 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 1.1 MiB/s wr, 59 op/s
Feb 28 10:18:38 compute-0 ceph-mon[76304]: pgmap v1588: 305 pgs: 305 active+clean; 264 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 1.1 MiB/s wr, 59 op/s
Feb 28 10:18:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:38 compute-0 nova_compute[243452]: 2026-02-28 10:18:38.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.823 243456 DEBUG nova.network.neutron [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.839 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.840 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance network_info: |[{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.840 243456 DEBUG oslo_concurrency.lockutils [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.841 243456 DEBUG nova.network.neutron [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.843 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start _get_guest_xml network_info=[{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.851 243456 WARNING nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.857 243456 DEBUG nova.virt.libvirt.host [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.858 243456 DEBUG nova.virt.libvirt.host [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.870 243456 DEBUG nova.virt.libvirt.host [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.872 243456 DEBUG nova.virt.libvirt.host [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.872 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.873 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.874 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.875 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.875 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.877 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.878 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.878 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.879 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.879 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.880 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.880 243456 DEBUG nova.virt.hardware [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.886 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.947 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.948 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:39 compute-0 nova_compute[243452]: 2026-02-28 10:18:39.975 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.055 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.055 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.062 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.062 243456 INFO nova.compute.claims [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.219 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 281 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Feb 28 10:18:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615211117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.487 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.510 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.515 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892462914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.845 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.853 243456 DEBUG nova.compute.provider_tree [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.871 243456 DEBUG nova.scheduler.client.report [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.897 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.898 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011177375818545554 of space, bias 1.0, pg target 0.3353212745563666 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024930965064395845 of space, bias 1.0, pg target 0.7479289519318754 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.72698515365232e-07 of space, bias 4.0, pg target 0.0009272382184382784 quantized to 16 (current 16)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:18:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.941 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.941 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.958 243456 INFO nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:18:40 compute-0 nova_compute[243452]: 2026-02-28 10:18:40.974 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:18:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382454260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.054 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.056 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.057 243456 INFO nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Creating image(s)
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.089 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.117 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.143 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.148 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.185 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.189 243456 DEBUG nova.virt.libvirt.vif [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:35Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.189 243456 DEBUG nova.network.os_vif_util [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.190 243456 DEBUG nova.network.os_vif_util [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.191 243456 DEBUG nova.objects.instance [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.213 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <uuid>e8c0bffa-2672-4f45-8646-3a41b8e780a8</uuid>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <name>instance-0000005d</name>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1132178903</nova:name>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:18:39</nova:creationTime>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:user uuid="99530c323188499c8d0e75b8edf1f77b">tempest-TestNetworkAdvancedServerOps-1987172309-project-member</nova:user>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:project uuid="4c568ca6a09a48c1a1197267be4d4583">tempest-TestNetworkAdvancedServerOps-1987172309</nova:project>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <nova:port uuid="6912d1ef-9679-45b5-ae80-a91f63ecce55">
Feb 28 10:18:41 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <system>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="serial">e8c0bffa-2672-4f45-8646-3a41b8e780a8</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="uuid">e8c0bffa-2672-4f45-8646-3a41b8e780a8</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </system>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <os>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </os>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <features>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </features>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk">
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config">
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:41 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b7:a0:a7"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <target dev="tap6912d1ef-96"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/console.log" append="off"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <video>
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </video>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:18:41 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:18:41 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:18:41 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:18:41 compute-0 nova_compute[243452]: </domain>
Feb 28 10:18:41 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.214 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Preparing to wait for external event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.214 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.214 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.214 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.215 243456 DEBUG nova.virt.libvirt.vif [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:35Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.216 243456 DEBUG nova.network.os_vif_util [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.216 243456 DEBUG nova.network.os_vif_util [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.217 243456 DEBUG os_vif [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.218 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.218 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.220 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.222 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.223 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.223 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.245 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.249 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 33627cb1-9db9-4b71-81a5-071a52daaba2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.287 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6912d1ef-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6912d1ef-96, col_values=(('external_ids', {'iface-id': '6912d1ef-9679-45b5-ae80-a91f63ecce55', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a0:a7', 'vm-uuid': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.290 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:41 compute-0 NetworkManager[49805]: <info>  [1772273921.2917] manager: (tap6912d1ef-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.300 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.301 243456 INFO os_vif [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96')
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.315 243456 DEBUG nova.policy [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '699bde3f63e74d6398856d2096d2cba8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e987a1d2da224f548b18032faa94aa1a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.360 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.360 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.360 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] No VIF found with MAC fa:16:3e:b7:a0:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.361 243456 INFO nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Using config drive
Feb 28 10:18:41 compute-0 ceph-mon[76304]: pgmap v1589: 305 pgs: 305 active+clean; 281 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Feb 28 10:18:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/615211117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3892462914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3382454260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.384 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.531 243456 DEBUG nova.network.neutron [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updated VIF entry in instance network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.532 243456 DEBUG nova.network.neutron [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.535 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 33627cb1-9db9-4b71-81a5-071a52daaba2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.572 243456 DEBUG oslo_concurrency.lockutils [req-7fcd71ef-08d3-4902-afac-632fbd326f13 req-577c1a61-4f5f-4ac7-a08f-92ae583c1876 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.616 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] resizing rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.726 243456 DEBUG nova.objects.instance [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lazy-loading 'migration_context' on Instance uuid 33627cb1-9db9-4b71-81a5-071a52daaba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.744 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.744 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Ensure instance console log exists: /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.745 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.746 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:41 compute-0 nova_compute[243452]: 2026-02-28 10:18:41.746 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:42 compute-0 podman[323620]: 2026-02-28 10:18:42.14490437 +0000 UTC m=+0.071959112 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 28 10:18:42 compute-0 podman[323619]: 2026-02-28 10:18:42.180978778 +0000 UTC m=+0.107560197 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:18:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.358 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Successfully created port: 193238a7-8ebc-4160-8a2a-edd1dcf804b2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.498 243456 INFO nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Creating config drive at /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.505 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5urg2gdq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.661 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5urg2gdq" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.684 243456 DEBUG nova.storage.rbd_utils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] rbd image e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.688 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.820 243456 DEBUG oslo_concurrency.processutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config e8c0bffa-2672-4f45-8646-3a41b8e780a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.821 243456 INFO nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deleting local config drive /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8/disk.config because it was imported into RBD.
Feb 28 10:18:42 compute-0 kernel: tap6912d1ef-96: entered promiscuous mode
Feb 28 10:18:42 compute-0 NetworkManager[49805]: <info>  [1772273922.8724] manager: (tap6912d1ef-96): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Feb 28 10:18:42 compute-0 ovn_controller[146846]: 2026-02-28T10:18:42Z|00904|binding|INFO|Claiming lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 for this chassis.
Feb 28 10:18:42 compute-0 ovn_controller[146846]: 2026-02-28T10:18:42Z|00905|binding|INFO|6912d1ef-9679-45b5-ae80-a91f63ecce55: Claiming fa:16:3e:b7:a0:a7 10.100.0.8
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.882 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.884 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a bound to our chassis
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:42 compute-0 ovn_controller[146846]: 2026-02-28T10:18:42Z|00906|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 ovn-installed in OVS
Feb 28 10:18:42 compute-0 ovn_controller[146846]: 2026-02-28T10:18:42Z|00907|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 up in Southbound
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.885 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.887 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:42 compute-0 nova_compute[243452]: 2026-02-28 10:18:42.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.900 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3e05b4-ec16-4ecb-945c-b9030a4e3681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.901 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99091813-11 in ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.908 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99091813-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.908 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7d09c6-9415-46ec-b8e0-ea212b602717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 systemd-udevd[323716]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d68d819-86c1-4cb1-886d-f4d036344ac8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 systemd-machined[209480]: New machine qemu-113-instance-0000005d.
Feb 28 10:18:42 compute-0 NetworkManager[49805]: <info>  [1772273922.9244] device (tap6912d1ef-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:18:42 compute-0 NetworkManager[49805]: <info>  [1772273922.9252] device (tap6912d1ef-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.923 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf458e4-570a-4972-8493-e1713ed50812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005d.
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.940 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd6805f-24f5-421c-8c1a-5d1e5b0c7f70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.980 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0a798385-117c-44f3-9634-3672805f1d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 systemd-udevd[323719]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:42.987 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92984996-8032-4fc2-930f-60bd40b62629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:42 compute-0 NetworkManager[49805]: <info>  [1772273922.9888] manager: (tap99091813-10): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0babd025-44db-42fc-82a1-506de1285e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.019 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[46a471a3-b4ee-46f2-86b0-c5c10dd8b274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 NetworkManager[49805]: <info>  [1772273923.0457] device (tap99091813-10): carrier: link connected
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.054 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7d117e5f-2393-45df-9075-09170a5314d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.071 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2d5ae8-4fc0-4d65-bce4-f2b1458eab2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543033, 'reachable_time': 42316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323748, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f050a9a5-cdf6-402d-bb0d-917bc5190c05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:8b20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543033, 'tstamp': 543033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323749, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.108 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d58a0e7-d278-475a-bfb8-7e75084270e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543033, 'reachable_time': 42316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323750, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.139 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[709ccc75-5e72-449b-a4aa-b53d4da468f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[920f80de-b672-460e-a9e7-f0e4a0889009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.208 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.208 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.209 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99091813-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:43 compute-0 NetworkManager[49805]: <info>  [1772273923.2118] manager: (tap99091813-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Feb 28 10:18:43 compute-0 kernel: tap99091813-10: entered promiscuous mode
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.216 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99091813-10, col_values=(('external_ids', {'iface-id': 'e6986f00-b070-4e36-95ae-3683483bf103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:43 compute-0 ovn_controller[146846]: 2026-02-28T10:18:43Z|00908|binding|INFO|Releasing lport e6986f00-b070-4e36-95ae-3683483bf103 from this chassis (sb_readonly=0)
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.218 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.219 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a567bd06-b7e7-4862-ac98-6c4cff074a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.220 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:18:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:43.221 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'env', 'PROCESS_TAG=haproxy-99091813-133c-46b0-a8d3-eeb21884f48a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99091813-133c-46b0-a8d3-eeb21884f48a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.222 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.223 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.223 243456 INFO nova.compute.manager [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Rebooting instance
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.240 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.241 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.241 243456 DEBUG nova.network.neutron [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:18:43 compute-0 ceph-mon[76304]: pgmap v1590: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.379 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273923.379318, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.380 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Started (Lifecycle Event)
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.397 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.403 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273923.379458, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.404 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Paused (Lifecycle Event)
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.426 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.431 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.450 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:43 compute-0 podman[323824]: 2026-02-28 10:18:43.594464456 +0000 UTC m=+0.051419247 container create 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:18:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:43 compute-0 systemd[1]: Started libpod-conmon-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597.scope.
Feb 28 10:18:43 compute-0 podman[323824]: 2026-02-28 10:18:43.565362607 +0000 UTC m=+0.022317398 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:18:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999094b081a957cbe1d5a0b39ac314b0f3f277acb666b4d99b76c83d7d5cdc56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:43 compute-0 podman[323824]: 2026-02-28 10:18:43.697613335 +0000 UTC m=+0.154568126 container init 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.700 243456 DEBUG nova.compute.manager [req-9f26b608-322e-43f9-8572-355b354adef3 req-8a30d06e-b086-4b58-be03-fc8471c3585c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.701 243456 DEBUG oslo_concurrency.lockutils [req-9f26b608-322e-43f9-8572-355b354adef3 req-8a30d06e-b086-4b58-be03-fc8471c3585c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.701 243456 DEBUG oslo_concurrency.lockutils [req-9f26b608-322e-43f9-8572-355b354adef3 req-8a30d06e-b086-4b58-be03-fc8471c3585c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.701 243456 DEBUG oslo_concurrency.lockutils [req-9f26b608-322e-43f9-8572-355b354adef3 req-8a30d06e-b086-4b58-be03-fc8471c3585c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.702 243456 DEBUG nova.compute.manager [req-9f26b608-322e-43f9-8572-355b354adef3 req-8a30d06e-b086-4b58-be03-fc8471c3585c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Processing event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.703 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:18:43 compute-0 podman[323824]: 2026-02-28 10:18:43.705729196 +0000 UTC m=+0.162683987 container start 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.708 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273923.70721, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.708 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Resumed (Lifecycle Event)
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.715 243456 INFO nova.virt.libvirt.driver [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance spawned successfully.
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.715 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.725 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.732 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.737 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Successfully updated port: 193238a7-8ebc-4160-8a2a-edd1dcf804b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:18:43 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : New worker (323845) forked
Feb 28 10:18:43 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : Loading success.
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.742 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.742 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.743 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.743 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.744 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.744 243456 DEBUG nova.virt.libvirt.driver [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.756 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.757 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquired lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.757 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.758 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.815 243456 INFO nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 8.57 seconds to spawn the instance on the hypervisor.
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.815 243456 DEBUG nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.889 243456 INFO nova.compute.manager [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 9.66 seconds to build instance.
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.906 243456 DEBUG oslo_concurrency.lockutils [None req-833d1837-abd4-4de3-b81f-10df97909a7c 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:43 compute-0 nova_compute[243452]: 2026-02-28 10:18:43.952 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:18:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 292 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.245 243456 DEBUG nova.network.neutron [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Updating instance_info_cache with network_info: [{"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.271 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Releasing lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.272 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Instance network_info: |[{"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.276 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Start _get_guest_xml network_info=[{"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.282 243456 WARNING nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.290 243456 DEBUG nova.virt.libvirt.host [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.291 243456 DEBUG nova.virt.libvirt.host [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.296 243456 DEBUG nova.virt.libvirt.host [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.297 243456 DEBUG nova.virt.libvirt.host [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.298 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.298 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.299 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.299 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.300 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.300 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.301 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.301 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.301 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.302 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.302 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.303 243456 DEBUG nova.virt.hardware [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.309 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.352 243456 DEBUG nova.network.neutron [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.380 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:45 compute-0 ceph-mon[76304]: pgmap v1591: 305 pgs: 305 active+clean; 292 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.383 243456 DEBUG nova.compute.manager [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:45 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:18:45 compute-0 NetworkManager[49805]: <info>  [1772273925.5616] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:18:45 compute-0 ovn_controller[146846]: 2026-02-28T10:18:45Z|00909|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:18:45 compute-0 ovn_controller[146846]: 2026-02-28T10:18:45Z|00910|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:18:45 compute-0 ovn_controller[146846]: 2026-02-28T10:18:45Z|00911|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.582 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '8', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.584 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.586 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.587 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb55e3-934c-4a2a-b2de-f9ee8d62d196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.588 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:18:45 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:18:45 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Consumed 12.403s CPU time.
Feb 28 10:18:45 compute-0 systemd-machined[209480]: Machine qemu-112-instance-0000005a terminated.
Feb 28 10:18:45 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [NOTICE]   (323105) : haproxy version is 2.8.14-c23fe91
Feb 28 10:18:45 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [NOTICE]   (323105) : path to executable is /usr/sbin/haproxy
Feb 28 10:18:45 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [WARNING]  (323105) : Exiting Master process...
Feb 28 10:18:45 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [ALERT]    (323105) : Current worker (323107) exited with code 143 (Terminated)
Feb 28 10:18:45 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[323101]: [WARNING]  (323105) : All workers exited. Exiting... (0)
Feb 28 10:18:45 compute-0 systemd[1]: libpod-234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b.scope: Deactivated successfully.
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.738 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 podman[323894]: 2026-02-28 10:18:45.743652283 +0000 UTC m=+0.045748275 container died 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.750 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.751 243456 DEBUG nova.objects.instance [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.771 243456 DEBUG nova.virt.libvirt.vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.772 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.773 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.774 243456 DEBUG os_vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.783 243456 INFO os_vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b-userdata-shm.mount: Deactivated successfully.
Feb 28 10:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5ad5d6630a9c99b49b147c49c62d20f88de2afe5884d11538ffa43068afe063-merged.mount: Deactivated successfully.
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.797 243456 DEBUG nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.802 243456 WARNING nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:18:45 compute-0 podman[323894]: 2026-02-28 10:18:45.803636042 +0000 UTC m=+0.105732024 container cleanup 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.810 243456 DEBUG nova.virt.libvirt.host [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.811 243456 DEBUG nova.virt.libvirt.host [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:18:45 compute-0 systemd[1]: libpod-conmon-234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b.scope: Deactivated successfully.
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.815 243456 DEBUG nova.virt.libvirt.host [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.815 243456 DEBUG nova.virt.libvirt.host [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.816 243456 DEBUG nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.816 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.817 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.817 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.818 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.818 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.818 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.819 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.819 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.819 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.819 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.821 243456 DEBUG nova.virt.hardware [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.821 243456 DEBUG nova.objects.instance [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.838 243456 DEBUG oslo_concurrency.processutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:45 compute-0 podman[323929]: 2026-02-28 10:18:45.879733001 +0000 UTC m=+0.047263218 container remove 234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.884 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5b2e5f-62f1-4f3c-aab4-5bb3af25140f]: (4, ('Sat Feb 28 10:18:45 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b)\n234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b\nSat Feb 28 10:18:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b)\n234b1a92a942bd88afa009ec5975d5d87427900451d32f3583c4b5deef11450b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28d6c577-e711-4fc2-a9b8-d78ac9ad62f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.886 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:45 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:18:45 compute-0 nova_compute[243452]: 2026-02-28 10:18:45.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.900 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae95065-9d36-4080-a83b-5307d222ff11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.916 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ac9585-accf-446d-8941-e7976c0a1150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.917 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[350dd88c-53c4-4958-99d9-335bcdeb85f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.933 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe6495a-483b-47ce-b256-207c3d52e318]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540732, 'reachable_time': 23355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323943, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.935 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:18:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:45.935 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[aea12cde-b402-43a6-93c7-a60250a9b0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842691740' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.007 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.031 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.038 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.067 243456 DEBUG nova.compute.manager [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.067 243456 DEBUG oslo_concurrency.lockutils [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.068 243456 DEBUG oslo_concurrency.lockutils [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.068 243456 DEBUG oslo_concurrency.lockutils [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.068 243456 DEBUG nova.compute.manager [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.068 243456 WARNING nova.compute.manager [req-85a8eafb-c8f5-47f5-9133-83d196d0025a req-526a493e-6ded-4254-b464-8b25f450b610 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state reboot_started_hard.
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.240 243456 DEBUG nova.compute.manager [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.240 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.241 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.241 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.241 243456 DEBUG nova.compute.manager [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.242 243456 WARNING nova.compute.manager [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state active and task_state None.
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.242 243456 DEBUG nova.compute.manager [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-changed-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.242 243456 DEBUG nova.compute.manager [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Refreshing instance network info cache due to event network-changed-193238a7-8ebc-4160-8a2a-edd1dcf804b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.243 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.243 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.243 243456 DEBUG nova.network.neutron [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Refreshing network info cache for port 193238a7-8ebc-4160-8a2a-edd1dcf804b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:18:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Feb 28 10:18:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/842691740' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513317432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.426 243456 DEBUG oslo_concurrency.processutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.471 243456 DEBUG oslo_concurrency.processutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500979561' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.551 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.555 243456 DEBUG nova.virt.libvirt.vif [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1504026090',display_name='tempest-ServerMetadataNegativeTestJSON-server-1504026090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1504026090',id=94,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e987a1d2da224f548b18032faa94aa1a',ramdisk_id='',reservation_id='r-5torgii3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1166348078',owner_user_
name='tempest-ServerMetadataNegativeTestJSON-1166348078-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:41Z,user_data=None,user_id='699bde3f63e74d6398856d2096d2cba8',uuid=33627cb1-9db9-4b71-81a5-071a52daaba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.555 243456 DEBUG nova.network.os_vif_util [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converting VIF {"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.557 243456 DEBUG nova.network.os_vif_util [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.559 243456 DEBUG nova.objects.instance [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lazy-loading 'pci_devices' on Instance uuid 33627cb1-9db9-4b71-81a5-071a52daaba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.578 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <uuid>33627cb1-9db9-4b71-81a5-071a52daaba2</uuid>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <name>instance-0000005e</name>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1504026090</nova:name>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:18:45</nova:creationTime>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:user uuid="699bde3f63e74d6398856d2096d2cba8">tempest-ServerMetadataNegativeTestJSON-1166348078-project-member</nova:user>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:project uuid="e987a1d2da224f548b18032faa94aa1a">tempest-ServerMetadataNegativeTestJSON-1166348078</nova:project>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <nova:port uuid="193238a7-8ebc-4160-8a2a-edd1dcf804b2">
Feb 28 10:18:46 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <system>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="serial">33627cb1-9db9-4b71-81a5-071a52daaba2</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="uuid">33627cb1-9db9-4b71-81a5-071a52daaba2</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </system>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <os>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </os>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <features>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </features>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/33627cb1-9db9-4b71-81a5-071a52daaba2_disk">
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config">
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:8b:bf:6e"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <target dev="tap193238a7-8e"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/console.log" append="off"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <video>
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </video>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:18:46 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:18:46 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:18:46 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:18:46 compute-0 nova_compute[243452]: </domain>
Feb 28 10:18:46 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.579 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Preparing to wait for external event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.580 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.580 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.581 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.583 243456 DEBUG nova.virt.libvirt.vif [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:18:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1504026090',display_name='tempest-ServerMetadataNegativeTestJSON-server-1504026090',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1504026090',id=94,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e987a1d2da224f548b18032faa94aa1a',ramdisk_id='',reservation_id='r-5torgii3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1166348078',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1166348078-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:18:41Z,user_data=None,user_id='699bde3f63e74d6398856d2096d2cba8',uuid=33627cb1-9db9-4b71-81a5-071a52daaba2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.583 243456 DEBUG nova.network.os_vif_util [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converting VIF {"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.584 243456 DEBUG nova.network.os_vif_util [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.585 243456 DEBUG os_vif [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.587 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.588 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.589 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.594 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap193238a7-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.594 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap193238a7-8e, col_values=(('external_ids', {'iface-id': '193238a7-8ebc-4160-8a2a-edd1dcf804b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:bf:6e', 'vm-uuid': '33627cb1-9db9-4b71-81a5-071a52daaba2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.596 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:46 compute-0 NetworkManager[49805]: <info>  [1772273926.5972] manager: (tap193238a7-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.601 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.603 243456 INFO os_vif [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e')
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.648 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.649 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.659 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] No VIF found with MAC fa:16:3e:8b:bf:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.660 243456 INFO nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Using config drive
Feb 28 10:18:46 compute-0 nova_compute[243452]: 2026-02-28 10:18:46.698 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:18:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817705257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.023 243456 DEBUG oslo_concurrency.processutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.025 243456 DEBUG nova.virt.libvirt.vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.025 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.026 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.027 243456 DEBUG nova.objects.instance [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.071 243456 DEBUG nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <name>instance-0000005a</name>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:18:45</nova:creationTime>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 10:18:47 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <system>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </system>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <os>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </os>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <features>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </features>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:18:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f6:05:21"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <target dev="taped25d1f8-c3"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <video>
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </video>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:18:47 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:18:47 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:18:47 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:18:47 compute-0 nova_compute[243452]: </domain>
Feb 28 10:18:47 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.073 243456 DEBUG nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.073 243456 DEBUG nova.virt.libvirt.driver [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.074 243456 DEBUG nova.virt.libvirt.vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.075 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.076 243456 DEBUG nova.network.os_vif_util [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.076 243456 DEBUG os_vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.078 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.078 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.081 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.082 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.0853] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.085 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.092 243456 INFO os_vif [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:18:47 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:18:47 compute-0 systemd-udevd[323944]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.2101] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00912|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00913|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.2236] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.2252] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.226 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '9', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.227 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.228 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00914|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00915|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.247 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad68076-075c-4f31-bc32-0828d437e947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.248 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.250 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.250 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cab2b3b-ae60-406c-8bab-23f37b30b8e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e738a1-fbf8-4fdf-a4f3-0988b300305f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 systemd-machined[209480]: New machine qemu-114-instance-0000005a.
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.265 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[661203bb-e122-4d64-908c-1c7536fa668e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005a.
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.282 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c85d19-12c8-4c10-b7e9-ed1dab443843]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.307 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c022f9e1-3301-43a4-afc6-da3c489811a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7caac3c1-33a1-4b4a-bb4b-78b9ea48e760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.3133] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/400)
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.347 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c3e623-8b0b-4395-a7d2-2e024ec457b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.351 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc80915-36e1-4293-893b-1988d53b3592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.3766] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.381 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94ab8e3b-bd33-49d5-b74e-0cdb23e202d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7342bc-b706-4b1d-a241-551f3f35bda1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324122, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ceph-mon[76304]: pgmap v1592: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Feb 28 10:18:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1513317432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2500979561' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1817705257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.414 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d37ac2e0-3628-42fc-a8e1-848752a1bde6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543466, 'tstamp': 543466}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324123, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[834bed0a-c544-454a-afd4-0041208d32a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324124, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.472 243456 INFO nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Creating config drive at /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.478 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3iv9_b6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.490 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[867f9355-0934-49ff-bdb2-71621f021783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.560 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c1d16b-1f2e-48cb-919a-ecd5bd17f12e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.564 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.5698] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.576 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00916|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.592 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.593 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.594 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72dc46e8-771b-443e-aa24-7d5b1db35139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.595 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.596 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.624 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3iv9_b6b" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.661 243456 DEBUG nova.storage.rbd_utils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] rbd image 33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.666 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config 33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.826 243456 DEBUG oslo_concurrency.processutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config 33627cb1-9db9-4b71-81a5-071a52daaba2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.828 243456 INFO nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Deleting local config drive /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2/disk.config because it was imported into RBD.
Feb 28 10:18:47 compute-0 kernel: tap193238a7-8e: entered promiscuous mode
Feb 28 10:18:47 compute-0 systemd-udevd[324103]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.8708] manager: (tap193238a7-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00917|binding|INFO|Claiming lport 193238a7-8ebc-4160-8a2a-edd1dcf804b2 for this chassis.
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00918|binding|INFO|193238a7-8ebc-4160-8a2a-edd1dcf804b2: Claiming fa:16:3e:8b:bf:6e 10.100.0.7
Feb 28 10:18:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:47.880 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:bf:6e 10.100.0.7'], port_security=['fa:16:3e:8b:bf:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33627cb1-9db9-4b71-81a5-071a52daaba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e987a1d2da224f548b18032faa94aa1a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f16fbb1c-e97b-40da-a3d5-aaacf4092def', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac6f3c63-5a24-43dc-8752-40f51c86e902, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=193238a7-8ebc-4160-8a2a-edd1dcf804b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.8829] device (tap193238a7-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00919|binding|INFO|Setting lport 193238a7-8ebc-4160-8a2a-edd1dcf804b2 ovn-installed in OVS
Feb 28 10:18:47 compute-0 ovn_controller[146846]: 2026-02-28T10:18:47Z|00920|binding|INFO|Setting lport 193238a7-8ebc-4160-8a2a-edd1dcf804b2 up in Southbound
Feb 28 10:18:47 compute-0 NetworkManager[49805]: <info>  [1772273927.8840] device (tap193238a7-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:47 compute-0 systemd-machined[209480]: New machine qemu-115-instance-0000005e.
Feb 28 10:18:47 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005e.
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.978 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.979 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273927.9783666, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.979 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.982 243456 DEBUG nova.compute.manager [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.986 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance rebooted successfully.
Feb 28 10:18:47 compute-0 nova_compute[243452]: 2026-02-28 10:18:47.987 243456 DEBUG nova.compute.manager [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:48 compute-0 podman[324256]: 2026-02-28 10:18:48.013827688 +0000 UTC m=+0.056091450 container create fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.027 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.030 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:48 compute-0 systemd[1]: Started libpod-conmon-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3.scope.
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.055 243456 DEBUG oslo_concurrency.lockutils [None req-a40d2059-d7d4-48bc-9ca9-90299d54d694 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.057 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273927.9819255, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.057 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.074 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.078 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:48 compute-0 podman[324256]: 2026-02-28 10:18:47.990301397 +0000 UTC m=+0.032565179 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503d7322e1f253bb2450dad4bde36cc78f849c077ea3c594879c4d79193de54d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:48 compute-0 podman[324256]: 2026-02-28 10:18:48.105659225 +0000 UTC m=+0.147923017 container init fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:18:48 compute-0 podman[324256]: 2026-02-28 10:18:48.112399687 +0000 UTC m=+0.154663449 container start fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:18:48 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : New worker (324278) forked
Feb 28 10:18:48 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : Loading success.
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.191 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 193238a7-8ebc-4160-8a2a-edd1dcf804b2 in datapath 88f7c930-dd22-4b33-a9a8-559a9a62fb79 unbound from our chassis
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.193 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88f7c930-dd22-4b33-a9a8-559a9a62fb79
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae545751-9c7f-4eda-8d66-530d12bf6e10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.216 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88f7c930-d1 in ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.216 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.217 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.217 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.218 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.218 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.219 243456 WARNING nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.219 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.219 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.220 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.220 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.220 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.220 243456 WARNING nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.221 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.221 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.221 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.222 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.221 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88f7c930-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.222 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.222 243456 WARNING nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.221 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0057c95e-075e-4aab-81ae-6433d5b4450c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.223 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.223 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.223 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.223 243456 DEBUG oslo_concurrency.lockutils [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.224 243456 DEBUG nova.compute.manager [req-299c9d19-d6ef-4723-ad61-075458ae9310 req-474a65e7-71e2-4e80-a4ef-58fc105e60df 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Processing event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06e8ac54-cb73-4653-a6cb-ebe2cf6d91a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.241 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfb78d1-c1cf-4fb0-b682-ad4025fe22f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbfcaff-471a-4177-9c5b-6f34e19f8e1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.296 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f717b01c-1b0e-4544-baff-d0ac49f15d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 132 op/s
Feb 28 10:18:48 compute-0 NetworkManager[49805]: <info>  [1772273928.3088] manager: (tap88f7c930-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.308 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c48a04-f8a4-4270-bd59-1faf2ef4c1ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5c9438-73df-45c2-a3a6-70ec900fabb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.353 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3afd5-7045-4bc8-aacc-d3848d5c612c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 NetworkManager[49805]: <info>  [1772273928.3752] device (tap88f7c930-d0): carrier: link connected
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.380 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[660e1af4-287e-4b58-98fb-6e6e9b917840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.399 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5dadbe8-cebd-4526-8cc1-0804e8e296f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88f7c930-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:20:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543566, 'reachable_time': 40039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324297, 'error': None, 'target': 'ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d00fa30-9e26-4a89-86ae-06621be4b21f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:20b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543566, 'tstamp': 543566}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324298, 'error': None, 'target': 'ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.446 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd69b5c-0784-416a-adad-567d79005565]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88f7c930-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:20:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543566, 'reachable_time': 40039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324299, 'error': None, 'target': 'ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.489 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a16e69-1594-40d6-bde9-8bd425fed1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.551 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e23bbf3-0824-41ac-ab48-8c3efc581742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.553 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88f7c930-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.554 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.554 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88f7c930-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:48 compute-0 kernel: tap88f7c930-d0: entered promiscuous mode
Feb 28 10:18:48 compute-0 NetworkManager[49805]: <info>  [1772273928.5583] manager: (tap88f7c930-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.559 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88f7c930-d0, col_values=(('external_ids', {'iface-id': '325dc531-6d86-45b6-ac2e-8bb6d606d4f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:48 compute-0 ovn_controller[146846]: 2026-02-28T10:18:48Z|00921|binding|INFO|Releasing lport 325dc531-6d86-45b6-ac2e-8bb6d606d4f9 from this chassis (sb_readonly=0)
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.561 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.567 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88f7c930-dd22-4b33-a9a8-559a9a62fb79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88f7c930-dd22-4b33-a9a8-559a9a62fb79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.568 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28171da8-e08b-4450-b870-4a9329387bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.568 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-88f7c930-dd22-4b33-a9a8-559a9a62fb79
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/88f7c930-dd22-4b33-a9a8-559a9a62fb79.pid.haproxy
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 88f7c930-dd22-4b33-a9a8-559a9a62fb79
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:18:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:48.569 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'env', 'PROCESS_TAG=haproxy-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88f7c930-dd22-4b33-a9a8-559a9a62fb79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:18:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.743 243456 DEBUG nova.network.neutron [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Updated VIF entry in instance network info cache for port 193238a7-8ebc-4160-8a2a-edd1dcf804b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.745 243456 DEBUG nova.network.neutron [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Updating instance_info_cache with network_info: [{"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.768 243456 DEBUG oslo_concurrency.lockutils [req-18dc4649-5b80-4fb0-abc9-1a4c84136296 req-764cc3ef-6fce-450e-a3b5-c9f1c1f10faa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-33627cb1-9db9-4b71-81a5-071a52daaba2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:48 compute-0 podman[324366]: 2026-02-28 10:18:48.95032374 +0000 UTC m=+0.044480949 container create ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.971 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.972 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273928.9716554, 33627cb1-9db9-4b71-81a5-071a52daaba2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.973 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] VM Started (Lifecycle Event)
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.981 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.985 243456 INFO nova.virt.libvirt.driver [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Instance spawned successfully.
Feb 28 10:18:48 compute-0 nova_compute[243452]: 2026-02-28 10:18:48.985 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:18:48 compute-0 systemd[1]: Started libpod-conmon-ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26.scope.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.006 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c68bb6829da51f0b0354982108de224485656f76fbdf44111a273d7294a796a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.014 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.015 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.015 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.016 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.016 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.016 243456 DEBUG nova.virt.libvirt.driver [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.020 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:49 compute-0 podman[324366]: 2026-02-28 10:18:49.023004322 +0000 UTC m=+0.117161541 container init ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:18:49 compute-0 podman[324366]: 2026-02-28 10:18:48.928258321 +0000 UTC m=+0.022415590 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:18:49 compute-0 podman[324366]: 2026-02-28 10:18:49.027643394 +0000 UTC m=+0.121800603 container start ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:18:49 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [NOTICE]   (324390) : New worker (324392) forked
Feb 28 10:18:49 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [NOTICE]   (324390) : Loading success.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.074 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.075 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273928.9717703, 33627cb1-9db9-4b71-81a5-071a52daaba2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.075 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] VM Paused (Lifecycle Event)
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.097 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.102 243456 INFO nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Took 8.05 seconds to spawn the instance on the hypervisor.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.102 243456 DEBUG nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.104 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273928.9810946, 33627cb1-9db9-4b71-81a5-071a52daaba2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.104 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] VM Resumed (Lifecycle Event)
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.129 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.132 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.154 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.168 243456 INFO nova.compute.manager [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Took 9.14 seconds to build instance.
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.184 243456 DEBUG oslo_concurrency.lockutils [None req-dc2eb488-eb82-4053-855a-6e2b5edd3981 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:49 compute-0 ceph-mon[76304]: pgmap v1593: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 132 op/s
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.490 243456 DEBUG nova.compute.manager [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.491 243456 DEBUG nova.compute.manager [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing instance network info cache due to event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.491 243456 DEBUG oslo_concurrency.lockutils [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.492 243456 DEBUG oslo_concurrency.lockutils [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:18:49 compute-0 nova_compute[243452]: 2026-02-28 10:18:49.492 243456 DEBUG nova.network.neutron [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.285 243456 DEBUG nova.compute.manager [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.286 243456 DEBUG oslo_concurrency.lockutils [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.286 243456 DEBUG oslo_concurrency.lockutils [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.286 243456 DEBUG oslo_concurrency.lockutils [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.287 243456 DEBUG nova.compute.manager [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] No waiting events found dispatching network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.288 243456 WARNING nova.compute.manager [req-8555bdd7-333e-4fde-b6a5-b1c37f10c8be req-71953494-b207-48e4-991c-27e52b36101f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received unexpected event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 for instance with vm_state active and task_state None.
Feb 28 10:18:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.712 243456 DEBUG nova.network.neutron [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updated VIF entry in instance network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.739 243456 DEBUG nova.network.neutron [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:50 compute-0 nova_compute[243452]: 2026-02-28 10:18:50.765 243456 DEBUG oslo_concurrency.lockutils [req-27a9ba56-b863-49ba-9916-78c965255abc req-0e3db155-04b8-4328-b85a-61787faf7d10 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:18:51 compute-0 ceph-mon[76304]: pgmap v1594: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Feb 28 10:18:52 compute-0 nova_compute[243452]: 2026-02-28 10:18:52.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Feb 28 10:18:53 compute-0 ceph-mon[76304]: pgmap v1595: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Feb 28 10:18:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:53 compute-0 nova_compute[243452]: 2026-02-28 10:18:53.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.568 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.569 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.570 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.570 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.571 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.572 243456 INFO nova.compute.manager [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Terminating instance
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.574 243456 DEBUG nova.compute.manager [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:18:54 compute-0 kernel: tap193238a7-8e (unregistering): left promiscuous mode
Feb 28 10:18:54 compute-0 NetworkManager[49805]: <info>  [1772273934.6088] device (tap193238a7-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:18:54 compute-0 ovn_controller[146846]: 2026-02-28T10:18:54Z|00922|binding|INFO|Releasing lport 193238a7-8ebc-4160-8a2a-edd1dcf804b2 from this chassis (sb_readonly=0)
Feb 28 10:18:54 compute-0 ovn_controller[146846]: 2026-02-28T10:18:54Z|00923|binding|INFO|Setting lport 193238a7-8ebc-4160-8a2a-edd1dcf804b2 down in Southbound
Feb 28 10:18:54 compute-0 ovn_controller[146846]: 2026-02-28T10:18:54Z|00924|binding|INFO|Removing iface tap193238a7-8e ovn-installed in OVS
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.627 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:bf:6e 10.100.0.7'], port_security=['fa:16:3e:8b:bf:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33627cb1-9db9-4b71-81a5-071a52daaba2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e987a1d2da224f548b18032faa94aa1a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f16fbb1c-e97b-40da-a3d5-aaacf4092def', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac6f3c63-5a24-43dc-8752-40f51c86e902, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=193238a7-8ebc-4160-8a2a-edd1dcf804b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.630 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 193238a7-8ebc-4160-8a2a-edd1dcf804b2 in datapath 88f7c930-dd22-4b33-a9a8-559a9a62fb79 unbound from our chassis
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.631 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88f7c930-dd22-4b33-a9a8-559a9a62fb79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.633 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54fbe9fe-af9f-4716-86de-061092cf62cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.634 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79 namespace which is not needed anymore
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Feb 28 10:18:54 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Consumed 6.624s CPU time.
Feb 28 10:18:54 compute-0 systemd-machined[209480]: Machine qemu-115-instance-0000005e terminated.
Feb 28 10:18:54 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [NOTICE]   (324390) : haproxy version is 2.8.14-c23fe91
Feb 28 10:18:54 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [NOTICE]   (324390) : path to executable is /usr/sbin/haproxy
Feb 28 10:18:54 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [WARNING]  (324390) : Exiting Master process...
Feb 28 10:18:54 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [ALERT]    (324390) : Current worker (324392) exited with code 143 (Terminated)
Feb 28 10:18:54 compute-0 neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79[324386]: [WARNING]  (324390) : All workers exited. Exiting... (0)
Feb 28 10:18:54 compute-0 systemd[1]: libpod-ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26.scope: Deactivated successfully.
Feb 28 10:18:54 compute-0 podman[324424]: 2026-02-28 10:18:54.773025311 +0000 UTC m=+0.044860909 container died ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26-userdata-shm.mount: Deactivated successfully.
Feb 28 10:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c68bb6829da51f0b0354982108de224485656f76fbdf44111a273d7294a796a-merged.mount: Deactivated successfully.
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.816 243456 INFO nova.virt.libvirt.driver [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Instance destroyed successfully.
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.816 243456 DEBUG nova.objects.instance [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lazy-loading 'resources' on Instance uuid 33627cb1-9db9-4b71-81a5-071a52daaba2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:18:54 compute-0 podman[324424]: 2026-02-28 10:18:54.825297101 +0000 UTC m=+0.097132689 container cleanup ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.839 243456 DEBUG nova.virt.libvirt.vif [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1504026090',display_name='tempest-ServerMetadataNegativeTestJSON-server-1504026090',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1504026090',id=94,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e987a1d2da224f548b18032faa94aa1a',ramdisk_id='',reservation_id='r-5torgii3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1166348078',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1166348078-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:18:49Z,user_data=None,user_id='699bde3f63e74d6398856d2096d2cba8',uuid=33627cb1-9db9-4b71-81a5-071a52daaba2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.841 243456 DEBUG nova.network.os_vif_util [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converting VIF {"id": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "address": "fa:16:3e:8b:bf:6e", "network": {"id": "88f7c930-dd22-4b33-a9a8-559a9a62fb79", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-628307613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e987a1d2da224f548b18032faa94aa1a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap193238a7-8e", "ovs_interfaceid": "193238a7-8ebc-4160-8a2a-edd1dcf804b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.841 243456 DEBUG nova.network.os_vif_util [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.842 243456 DEBUG os_vif [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:18:54 compute-0 systemd[1]: libpod-conmon-ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26.scope: Deactivated successfully.
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap193238a7-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.851 243456 INFO os_vif [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:bf:6e,bridge_name='br-int',has_traffic_filtering=True,id=193238a7-8ebc-4160-8a2a-edd1dcf804b2,network=Network(88f7c930-dd22-4b33-a9a8-559a9a62fb79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap193238a7-8e')
Feb 28 10:18:54 compute-0 podman[324465]: 2026-02-28 10:18:54.913375982 +0000 UTC m=+0.056049019 container remove ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.923 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40cac5ea-88da-4211-9565-62f3745b5394]: (4, ('Sat Feb 28 10:18:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79 (ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26)\nef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26\nSat Feb 28 10:18:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79 (ef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26)\nef855e2e93401eb6f1644d7a76bea5e34cc41da07a1def98525c78bfeb144b26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.926 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce4cfd4-c57e-4902-bf40-a00dcdec0a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.927 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88f7c930-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 kernel: tap88f7c930-d0: left promiscuous mode
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.935 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[844ec4e6-0940-44bb-8c6e-1cc23f406fd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.950 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[063eef55-08b9-42ce-b305-128ef53fa517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.952 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5788b2ce-e0b0-44de-9a1f-b4c105997b27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.964 243456 DEBUG nova.compute.manager [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-unplugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.965 243456 DEBUG oslo_concurrency.lockutils [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.966 243456 DEBUG oslo_concurrency.lockutils [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.966 243456 DEBUG oslo_concurrency.lockutils [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.966 243456 DEBUG nova.compute.manager [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] No waiting events found dispatching network-vif-unplugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:54 compute-0 nova_compute[243452]: 2026-02-28 10:18:54.967 243456 DEBUG nova.compute.manager [req-639cca23-f1b0-46f2-abc0-94c23d14302e req-47960f4a-a089-4eb1-a767-637cae88c677 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-unplugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.976 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec6b89f-0884-4cdf-9ee2-760598a2419e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543557, 'reachable_time': 40295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324497, 'error': None, 'target': 'ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.982 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88f7c930-dd22-4b33-a9a8-559a9a62fb79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:18:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:54.982 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[df0b3cd8-7271-4fc4-ba17-52e4f92c6de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:18:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d88f7c930\x2ddd22\x2d4b33\x2da9a8\x2d559a9a62fb79.mount: Deactivated successfully.
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.200 243456 INFO nova.virt.libvirt.driver [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Deleting instance files /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2_del
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.201 243456 INFO nova.virt.libvirt.driver [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Deletion of /var/lib/nova/instances/33627cb1-9db9-4b71-81a5-071a52daaba2_del complete
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.253 243456 INFO nova.compute.manager [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.254 243456 DEBUG oslo.service.loopingcall [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.254 243456 DEBUG nova.compute.manager [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:18:55 compute-0 nova_compute[243452]: 2026-02-28 10:18:55.255 243456 DEBUG nova.network.neutron [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:18:55 compute-0 ovn_controller[146846]: 2026-02-28T10:18:55Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:a0:a7 10.100.0.8
Feb 28 10:18:55 compute-0 ovn_controller[146846]: 2026-02-28T10:18:55Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:a0:a7 10.100.0.8
Feb 28 10:18:55 compute-0 ceph-mon[76304]: pgmap v1596: 305 pgs: 305 active+clean; 328 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.061 243456 DEBUG nova.network.neutron [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.090 243456 INFO nova.compute.manager [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Took 0.84 seconds to deallocate network for instance.
Feb 28 10:18:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 333 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 283 op/s
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.320 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.321 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.416 243456 DEBUG oslo_concurrency.processutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:18:56 compute-0 ceph-mon[76304]: pgmap v1597: 305 pgs: 305 active+clean; 333 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 283 op/s
Feb 28 10:18:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:18:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2949975947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.981 243456 DEBUG oslo_concurrency.processutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:18:56 compute-0 nova_compute[243452]: 2026-02-28 10:18:56.991 243456 DEBUG nova.compute.provider_tree [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.009 243456 DEBUG nova.scheduler.client.report [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.042 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.076 243456 INFO nova.scheduler.client.report [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Deleted allocations for instance 33627cb1-9db9-4b71-81a5-071a52daaba2
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.147 243456 DEBUG oslo_concurrency.lockutils [None req-3a4b762d-6d71-4c29-a7d1-e645eb39fe71 699bde3f63e74d6398856d2096d2cba8 e987a1d2da224f548b18032faa94aa1a - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.189 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.189 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.190 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.190 243456 DEBUG oslo_concurrency.lockutils [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "33627cb1-9db9-4b71-81a5-071a52daaba2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.191 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] No waiting events found dispatching network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.191 243456 WARNING nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received unexpected event network-vif-plugged-193238a7-8ebc-4160-8a2a-edd1dcf804b2 for instance with vm_state deleted and task_state None.
Feb 28 10:18:57 compute-0 nova_compute[243452]: 2026-02-28 10:18:57.192 243456 DEBUG nova.compute.manager [req-df10cdb5-06d6-4459-9a17-262b0d6a1355 req-9356ddab-2492-4811-adb0-1669714cf892 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Received event network-vif-deleted-193238a7-8ebc-4160-8a2a-edd1dcf804b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:18:57 compute-0 rsyslogd[1017]: imjournal: 10210 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 10:18:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2949975947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:18:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.857 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:18:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.858 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:18:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:18:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:18:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 335 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Feb 28 10:18:58 compute-0 ceph-mon[76304]: pgmap v1598: 305 pgs: 305 active+clean; 335 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Feb 28 10:18:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:18:58 compute-0 nova_compute[243452]: 2026-02-28 10:18:58.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:59 compute-0 nova_compute[243452]: 2026-02-28 10:18:59.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:18:59 compute-0 ovn_controller[146846]: 2026-02-28T10:18:59Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 255 op/s
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:00 compute-0 ovn_controller[146846]: 2026-02-28T10:19:00Z|00925|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:19:00 compute-0 ovn_controller[146846]: 2026-02-28T10:19:00Z|00926|binding|INFO|Releasing lport e6986f00-b070-4e36-95ae-3683483bf103 from this chassis (sb_readonly=0)
Feb 28 10:19:00 compute-0 nova_compute[243452]: 2026-02-28 10:19:00.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:01 compute-0 ceph-mon[76304]: pgmap v1599: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 255 op/s
Feb 28 10:19:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Feb 28 10:19:02 compute-0 nova_compute[243452]: 2026-02-28 10:19:02.374 243456 INFO nova.compute.manager [None req-673e44f5-1768-438a-8ca1-302ac37ee9a3 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 10:19:02 compute-0 nova_compute[243452]: 2026-02-28 10:19:02.382 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:19:02 compute-0 nova_compute[243452]: 2026-02-28 10:19:02.900 243456 DEBUG nova.objects.instance [None req-4cf04ff2-8823-4afa-bf8c-381cd12ad8f4 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.091 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273943.0915222, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Paused (Lifecycle Event)
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.224 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.251 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:19:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:03 compute-0 ceph-mon[76304]: pgmap v1600: 305 pgs: 305 active+clean; 314 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Feb 28 10:19:03 compute-0 kernel: tap6912d1ef-96 (unregistering): left promiscuous mode
Feb 28 10:19:03 compute-0 NetworkManager[49805]: <info>  [1772273943.7204] device (tap6912d1ef-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:19:03 compute-0 ovn_controller[146846]: 2026-02-28T10:19:03Z|00927|binding|INFO|Releasing lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 from this chassis (sb_readonly=0)
Feb 28 10:19:03 compute-0 ovn_controller[146846]: 2026-02-28T10:19:03Z|00928|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 down in Southbound
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:03 compute-0 ovn_controller[146846]: 2026-02-28T10:19:03Z|00929|binding|INFO|Removing iface tap6912d1ef-96 ovn-installed in OVS
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.746 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.748 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a unbound from our chassis
Feb 28 10:19:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.750 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99091813-133c-46b0-a8d3-eeb21884f48a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:19:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[da6a5874-1b13-4672-ba86-ddf1ba9ba5a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:03.751 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace which is not needed anymore
Feb 28 10:19:03 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 28 10:19:03 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Consumed 12.105s CPU time.
Feb 28 10:19:03 compute-0 systemd-machined[209480]: Machine qemu-113-instance-0000005d terminated.
Feb 28 10:19:03 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : haproxy version is 2.8.14-c23fe91
Feb 28 10:19:03 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [NOTICE]   (323843) : path to executable is /usr/sbin/haproxy
Feb 28 10:19:03 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [WARNING]  (323843) : Exiting Master process...
Feb 28 10:19:03 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [ALERT]    (323843) : Current worker (323845) exited with code 143 (Terminated)
Feb 28 10:19:03 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[323839]: [WARNING]  (323843) : All workers exited. Exiting... (0)
Feb 28 10:19:03 compute-0 systemd[1]: libpod-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597.scope: Deactivated successfully.
Feb 28 10:19:03 compute-0 podman[324547]: 2026-02-28 10:19:03.878301895 +0000 UTC m=+0.045037415 container died 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:03 compute-0 nova_compute[243452]: 2026-02-28 10:19:03.905 243456 DEBUG nova.compute.manager [None req-4cf04ff2-8823-4afa-bf8c-381cd12ad8f4 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597-userdata-shm.mount: Deactivated successfully.
Feb 28 10:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-999094b081a957cbe1d5a0b39ac314b0f3f277acb666b4d99b76c83d7d5cdc56-merged.mount: Deactivated successfully.
Feb 28 10:19:03 compute-0 podman[324547]: 2026-02-28 10:19:03.929859275 +0000 UTC m=+0.096594825 container cleanup 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:03 compute-0 systemd[1]: libpod-conmon-09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597.scope: Deactivated successfully.
Feb 28 10:19:03 compute-0 podman[324585]: 2026-02-28 10:19:03.996296418 +0000 UTC m=+0.047610128 container remove 09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.001 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[708752ff-492f-4dfd-a7f3-6fa34df953a0]: (4, ('Sat Feb 28 10:19:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597)\n09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597\nSat Feb 28 10:19:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597)\n09e8a579b9da90957a01738e490b8ff5b5e0010d980edb1ad884c0355f6f9597\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42fd5da8-adee-4cdc-97bc-daafc4690cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.005 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:04 compute-0 kernel: tap99091813-10: left promiscuous mode
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.026 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3dbbf6-35e5-45b1-b3aa-d84ac2f4cfbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.040 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b281331-35a7-4ec4-92cc-1c687c8de965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.041 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9e4505-e18f-474a-9be7-16a8c0917de0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.056 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e5bc84-a0cf-46c2-bf0a-88ccf69ed542]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543026, 'reachable_time': 17803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324604, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.059 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:19:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:04.059 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[408be78f-f05a-436f-add2-36467841b6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d99091813\x2d133c\x2d46b0\x2da8d3\x2deeb21884f48a.mount: Deactivated successfully.
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.187 243456 DEBUG nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG oslo_concurrency.lockutils [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.188 243456 DEBUG nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.189 243456 WARNING nova.compute.manager [req-b29e7060-ab2b-4128-bc1a-e657b8858e8d req-9bae07ef-16d4-472d-8f9d-201022c1d2cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state None.
Feb 28 10:19:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 314 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Feb 28 10:19:04 compute-0 ceph-mon[76304]: pgmap v1601: 305 pgs: 305 active+clean; 314 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Feb 28 10:19:04 compute-0 nova_compute[243452]: 2026-02-28 10:19:04.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 314 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.435 243456 DEBUG nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.436 243456 DEBUG oslo_concurrency.lockutils [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.437 243456 DEBUG nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.437 243456 WARNING nova.compute.manager [req-70d6f3aa-5e9b-4a60-8855-dbbc31175c2e req-e1e09b01-6e16-41c7-8160-ce146029616e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state None.
Feb 28 10:19:06 compute-0 nova_compute[243452]: 2026-02-28 10:19:06.561 243456 INFO nova.compute.manager [None req-a0c728fe-9098-4d74-a8ae-3809fc1b5edf 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 10:19:07 compute-0 nova_compute[243452]: 2026-02-28 10:19:07.097 243456 INFO nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Resuming
Feb 28 10:19:07 compute-0 nova_compute[243452]: 2026-02-28 10:19:07.098 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'flavor' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:07 compute-0 nova_compute[243452]: 2026-02-28 10:19:07.197 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:07 compute-0 nova_compute[243452]: 2026-02-28 10:19:07.198 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:07 compute-0 nova_compute[243452]: 2026-02-28 10:19:07.199 243456 DEBUG nova.network.neutron [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:19:07 compute-0 ceph-mon[76304]: pgmap v1602: 305 pgs: 305 active+clean; 314 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 28 10:19:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 851 KiB/s wr, 83 op/s
Feb 28 10:19:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:08 compute-0 nova_compute[243452]: 2026-02-28 10:19:08.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:09 compute-0 ceph-mon[76304]: pgmap v1603: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 611 KiB/s rd, 851 KiB/s wr, 83 op/s
Feb 28 10:19:09 compute-0 nova_compute[243452]: 2026-02-28 10:19:09.809 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273934.8085463, 33627cb1-9db9-4b71-81a5-071a52daaba2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:09 compute-0 nova_compute[243452]: 2026-02-28 10:19:09.810 243456 INFO nova.compute.manager [-] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] VM Stopped (Lifecycle Event)
Feb 28 10:19:09 compute-0 nova_compute[243452]: 2026-02-28 10:19:09.854 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:09 compute-0 nova_compute[243452]: 2026-02-28 10:19:09.861 243456 DEBUG nova.compute.manager [None req-dc743a2d-d3b7-4f0d-b74b-712860c53c26 - - - - - -] [instance: 33627cb1-9db9-4b71-81a5-071a52daaba2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:10 compute-0 sudo[324605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:19:10 compute-0 sudo[324605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:10 compute-0 sudo[324605]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:10 compute-0 sudo[324630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 10:19:10 compute-0 sudo[324630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 102 KiB/s wr, 57 op/s
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.483 243456 DEBUG nova.network.neutron [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.584 243456 DEBUG oslo_concurrency.lockutils [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.594 243456 DEBUG nova.virt.libvirt.vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:03Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.595 243456 DEBUG nova.network.os_vif_util [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.598 243456 DEBUG nova.network.os_vif_util [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.598 243456 DEBUG os_vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.601 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.607 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6912d1ef-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.607 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6912d1ef-96, col_values=(('external_ids', {'iface-id': '6912d1ef-9679-45b5-ae80-a91f63ecce55', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:a0:a7', 'vm-uuid': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.609 243456 INFO os_vif [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96')
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.637 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'numa_topology' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:10 compute-0 NetworkManager[49805]: <info>  [1772273950.8360] manager: (tap6912d1ef-96): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Feb 28 10:19:10 compute-0 kernel: tap6912d1ef-96: entered promiscuous mode
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:10 compute-0 ovn_controller[146846]: 2026-02-28T10:19:10Z|00930|binding|INFO|Claiming lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 for this chassis.
Feb 28 10:19:10 compute-0 ovn_controller[146846]: 2026-02-28T10:19:10Z|00931|binding|INFO|6912d1ef-9679-45b5-ae80-a91f63ecce55: Claiming fa:16:3e:b7:a0:a7 10.100.0.8
Feb 28 10:19:10 compute-0 podman[324701]: 2026-02-28 10:19:10.854197356 +0000 UTC m=+0.089058370 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:19:10 compute-0 ovn_controller[146846]: 2026-02-28T10:19:10Z|00932|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 ovn-installed in OVS
Feb 28 10:19:10 compute-0 nova_compute[243452]: 2026-02-28 10:19:10.858 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:10 compute-0 systemd-udevd[324732]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.870 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:10 compute-0 ovn_controller[146846]: 2026-02-28T10:19:10Z|00933|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 up in Southbound
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.871 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a bound to our chassis
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.872 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:19:10 compute-0 NetworkManager[49805]: <info>  [1772273950.8843] device (tap6912d1ef-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:10 compute-0 NetworkManager[49805]: <info>  [1772273950.8849] device (tap6912d1ef-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5da22c9-b030-4720-8dbf-cfc51e1bd3fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.887 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99091813-11 in ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.888 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99091813-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5158b230-5d27-42da-a7d9-e7f41ed10a4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 systemd-machined[209480]: New machine qemu-116-instance-0000005d.
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b29ae0-76de-4239-8734-b9d8f018690c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.903 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[881870f9-64ac-4370-a800-ee7474fcfa99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005d.
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.929 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b455c3-4813-4289-a3a1-1355c69d19fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.960 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b4ea8d-9cb3-4f92-b9ca-b0c61eb88e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 NetworkManager[49805]: <info>  [1772273950.9712] manager: (tap99091813-10): new Veth device (/org/freedesktop/NetworkManager/Devices/406)
Feb 28 10:19:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:10.970 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f739bdd-fb39-4e4a-b8de-30c99a74748b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:10 compute-0 podman[324701]: 2026-02-28 10:19:10.975182714 +0000 UTC m=+0.210043768 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.007 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c361f775-a99f-4f7f-a6a1-a67880b252f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.011 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f2ec4b-9613-49e2-8b1c-7e95e0ede06d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 NetworkManager[49805]: <info>  [1772273951.0368] device (tap99091813-10): carrier: link connected
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.043 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f601c71a-d3d6-44c5-85ff-5fb2141c8c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91aca993-e7d6-43bb-a0df-731826cc5aaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545832, 'reachable_time': 42666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324794, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.073 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5db8e84-23a7-4fa8-8e64-338531f86de9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:8b20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545832, 'tstamp': 545832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324797, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.091 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d653f63b-9dc9-49ab-bd41-99148ac1f87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99091813-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:8b:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 286], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545832, 'reachable_time': 42666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324803, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c43f7f-e10b-448f-ad33-416f317d5b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.185 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b71a56d-8188-44a6-b318-64286843236b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.187 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99091813-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:11 compute-0 NetworkManager[49805]: <info>  [1772273951.1921] manager: (tap99091813-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Feb 28 10:19:11 compute-0 kernel: tap99091813-10: entered promiscuous mode
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.196 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99091813-10, col_values=(('external_ids', {'iface-id': 'e6986f00-b070-4e36-95ae-3683483bf103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:11 compute-0 ovn_controller[146846]: 2026-02-28T10:19:11Z|00934|binding|INFO|Releasing lport e6986f00-b070-4e36-95ae-3683483bf103 from this chassis (sb_readonly=0)
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.206 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f247434f-b3c0-4053-9537-97b7c322bfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.208 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/99091813-133c-46b0-a8d3-eeb21884f48a.pid.haproxy
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 99091813-133c-46b0-a8d3-eeb21884f48a
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:19:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:11.209 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'env', 'PROCESS_TAG=haproxy-99091813-133c-46b0-a8d3-eeb21884f48a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99091813-133c-46b0-a8d3-eeb21884f48a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:19:11 compute-0 ceph-mon[76304]: pgmap v1604: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 102 KiB/s wr, 57 op/s
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.403 243456 DEBUG nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG oslo_concurrency.lockutils [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 DEBUG nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.405 243456 WARNING nova.compute.manager [req-66f2d065-649a-40c6-a2c7-3680f255320b req-cfd71f30-08c4-4c30-9dd8-2db08ab4e78f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state suspended and task_state resuming.
Feb 28 10:19:11 compute-0 podman[324928]: 2026-02-28 10:19:11.561011332 +0000 UTC m=+0.056566473 container create 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:19:11 compute-0 systemd[1]: Started libpod-conmon-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope.
Feb 28 10:19:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:11 compute-0 podman[324928]: 2026-02-28 10:19:11.530361998 +0000 UTC m=+0.025917179 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/020af8fd3586118fc9aa7bfe4f2379c7e7600d6ebd5072313956ddeff5e61ed4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:11 compute-0 podman[324928]: 2026-02-28 10:19:11.641878707 +0000 UTC m=+0.137433848 container init 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:19:11 compute-0 podman[324928]: 2026-02-28 10:19:11.649174475 +0000 UTC m=+0.144729616 container start 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:19:11 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : New worker (325011) forked
Feb 28 10:19:11 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : Loading success.
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.770 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for e8c0bffa-2672-4f45-8646-3a41b8e780a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.770 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273951.7696404, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.771 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Started (Lifecycle Event)
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.796 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.802 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.803 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.804 243456 DEBUG nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.804 243456 DEBUG nova.objects.instance [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.807 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:11 compute-0 sudo[324630]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:19:11 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:19:11 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.844 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.845 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273951.7727218, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.845 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Resumed (Lifecycle Event)
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.847 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.851 243456 INFO nova.virt.libvirt.driver [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance running successfully.
Feb 28 10:19:11 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.855 243456 DEBUG nova.virt.libvirt.guest [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.855 243456 DEBUG nova.compute.manager [None req-7e4cba79-bada-4a4f-977e-65e38f6b5293 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:11 compute-0 sudo[325039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:19:11 compute-0 sudo[325039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:11 compute-0 sudo[325039]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.899 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.903 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:19:11 compute-0 sudo[325064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:19:11 compute-0 sudo[325064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.981 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.982 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.988 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:19:11 compute-0 nova_compute[243452]: 2026-02-28 10:19:11.988 243456 INFO nova.compute.claims [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.128 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 25 KiB/s wr, 34 op/s
Feb 28 10:19:12 compute-0 sudo[325064]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:19:12 compute-0 sudo[325140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:19:12 compute-0 sudo[325140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:12 compute-0 sudo[325140]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:12 compute-0 sudo[325177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:19:12 compute-0 podman[325165]: 2026-02-28 10:19:12.686521651 +0000 UTC m=+0.051703635 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:19:12 compute-0 sudo[325177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:12 compute-0 podman[325164]: 2026-02-28 10:19:12.711615486 +0000 UTC m=+0.079426345 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915906322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.735 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.741 243456 DEBUG nova.compute.provider_tree [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.760 243456 DEBUG nova.scheduler.client.report [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.781 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.782 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.824 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.824 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:12 compute-0 ceph-mon[76304]: pgmap v1605: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 25 KiB/s wr, 34 op/s
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:19:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3915906322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.848 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.869 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:19:12 compute-0 podman[325247]: 2026-02-28 10:19:12.946877552 +0000 UTC m=+0.038841798 container create c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:12 compute-0 systemd[1]: Started libpod-conmon-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope.
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.985 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.987 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:19:12 compute-0 nova_compute[243452]: 2026-02-28 10:19:12.987 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating image(s)
Feb 28 10:19:13 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.013 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:13.022190178 +0000 UTC m=+0.114154464 container init c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:12.929090155 +0000 UTC m=+0.021054421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:13.033853411 +0000 UTC m=+0.125817687 container start c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:13.037839064 +0000 UTC m=+0.129803390 container attach c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:19:13 compute-0 hungry_easley[325263]: 167 167
Feb 28 10:19:13 compute-0 systemd[1]: libpod-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope: Deactivated successfully.
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:13.044271348 +0000 UTC m=+0.136235614 container died c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.043 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f5a82e747f61196b2d14291f7e3015f76b11995ed97980aea8152825c1503e-merged.mount: Deactivated successfully.
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.071 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.078 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:13 compute-0 podman[325247]: 2026-02-28 10:19:13.083581788 +0000 UTC m=+0.175546034 container remove c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:19:13 compute-0 systemd[1]: libpod-conmon-c561191b29f1ac74ac3440633be08107ef8c48553e3448e6dbfb67d2d3309b85.scope: Deactivated successfully.
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.157 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.158 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.159 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.159 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.185 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.191 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.235410776 +0000 UTC m=+0.046129906 container create 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:19:13 compute-0 systemd[1]: Started libpod-conmon-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope.
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.213899293 +0000 UTC m=+0.024618213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:13 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.355 243456 DEBUG nova.policy [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b9006c7543a244aa948b78020335223a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6952e00efd364e1491714983e2425e93', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.404642739 +0000 UTC m=+0.215361699 container init 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.411649349 +0000 UTC m=+0.222368279 container start 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.430050603 +0000 UTC m=+0.240769533 container attach 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.510 243456 DEBUG nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.511 243456 DEBUG oslo_concurrency.lockutils [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.512 243456 DEBUG nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.512 243456 WARNING nova.compute.manager [req-c887070b-1b8f-4785-be98-444c1337869e req-ba96c42f-b235-41a8-a5a1-f3d344750b6b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state active and task_state None.
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.592 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.689 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] resizing rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.848 243456 DEBUG nova.objects.instance [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.875 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.875 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ensure instance console log exists: /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.876 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:13 compute-0 serene_satoshi[325398]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:19:13 compute-0 serene_satoshi[325398]: --> All data devices are unavailable
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.915 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Successfully created port: c058dd2c-3349-4364-8659-31bb8b2509bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:19:13 compute-0 systemd[1]: libpod-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope: Deactivated successfully.
Feb 28 10:19:13 compute-0 podman[325358]: 2026-02-28 10:19:13.921138371 +0000 UTC m=+0.731857311 container died 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.931 243456 INFO nova.compute.manager [None req-7219141d-d05f-47cc-a17c-bfc69603df46 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Get console output
Feb 28 10:19:13 compute-0 nova_compute[243452]: 2026-02-28 10:19:13.938 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f6d4049678991f2fd404633fbf3775c66c5d97a5c3ee57c50561b6324f2a472-merged.mount: Deactivated successfully.
Feb 28 10:19:14 compute-0 podman[325358]: 2026-02-28 10:19:14.032181796 +0000 UTC m=+0.842900716 container remove 0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:19:14 compute-0 systemd[1]: libpod-conmon-0d09091b6126a384307ff3d6c3df063e652ed4b17901063ba928313d7cb1a489.scope: Deactivated successfully.
Feb 28 10:19:14 compute-0 sudo[325177]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:14 compute-0 sudo[325503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:19:14 compute-0 sudo[325503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:14 compute-0 sudo[325503]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:14 compute-0 sudo[325528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:19:14 compute-0 sudo[325528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 26 KiB/s wr, 16 op/s
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.50492138 +0000 UTC m=+0.076352097 container create ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb 28 10:19:14 compute-0 systemd[1]: Started libpod-conmon-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope.
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.451142917 +0000 UTC m=+0.022573654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.622941904 +0000 UTC m=+0.194372621 container init ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.630396197 +0000 UTC m=+0.201826914 container start ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.63366409 +0000 UTC m=+0.205094827 container attach ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:19:14 compute-0 hardcore_gates[325580]: 167 167
Feb 28 10:19:14 compute-0 systemd[1]: libpod-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope: Deactivated successfully.
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.636730237 +0000 UTC m=+0.208160954 container died ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d43070ebddbe33f5ce735f70ecc427bbe7d395cd4af10614efac6e4a14882420-merged.mount: Deactivated successfully.
Feb 28 10:19:14 compute-0 podman[325564]: 2026-02-28 10:19:14.669790929 +0000 UTC m=+0.241221696 container remove ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:19:14 compute-0 systemd[1]: libpod-conmon-ac2fd85f2b02b8a85bde835184ba65f3b5e658e3a371cccc3ddd2df128b82db0.scope: Deactivated successfully.
Feb 28 10:19:14 compute-0 podman[325605]: 2026-02-28 10:19:14.815612716 +0000 UTC m=+0.040867686 container create 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:14 compute-0 nova_compute[243452]: 2026-02-28 10:19:14.856 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:14 compute-0 systemd[1]: Started libpod-conmon-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope.
Feb 28 10:19:14 compute-0 podman[325605]: 2026-02-28 10:19:14.799154207 +0000 UTC m=+0.024409187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:14 compute-0 podman[325605]: 2026-02-28 10:19:14.927874535 +0000 UTC m=+0.153129545 container init 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:19:14 compute-0 podman[325605]: 2026-02-28 10:19:14.934447413 +0000 UTC m=+0.159702383 container start 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:19:14 compute-0 podman[325605]: 2026-02-28 10:19:14.93891794 +0000 UTC m=+0.164172930 container attach 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.134 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.136 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.137 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.138 243456 INFO nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Terminating instance
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.139 243456 DEBUG nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:19:15 compute-0 kernel: tap6912d1ef-96 (unregistering): left promiscuous mode
Feb 28 10:19:15 compute-0 NetworkManager[49805]: <info>  [1772273955.1795] device (tap6912d1ef-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:19:15 compute-0 ovn_controller[146846]: 2026-02-28T10:19:15Z|00935|binding|INFO|Releasing lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 from this chassis (sb_readonly=0)
Feb 28 10:19:15 compute-0 ovn_controller[146846]: 2026-02-28T10:19:15Z|00936|binding|INFO|Setting lport 6912d1ef-9679-45b5-ae80-a91f63ecce55 down in Southbound
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 ovn_controller[146846]: 2026-02-28T10:19:15Z|00937|binding|INFO|Removing iface tap6912d1ef-96 ovn-installed in OVS
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 strange_pascal[325621]: {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     "0": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "devices": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "/dev/loop3"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             ],
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_name": "ceph_lv0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_size": "21470642176",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "name": "ceph_lv0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "tags": {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_name": "ceph",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.crush_device_class": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.encrypted": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.objectstore": "bluestore",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_id": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.vdo": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.with_tpm": "0"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             },
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "vg_name": "ceph_vg0"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         }
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     ],
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     "1": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "devices": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "/dev/loop4"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             ],
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_name": "ceph_lv1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_size": "21470642176",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "name": "ceph_lv1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "tags": {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_name": "ceph",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.crush_device_class": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.encrypted": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.objectstore": "bluestore",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_id": "1",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.vdo": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.with_tpm": "0"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             },
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "vg_name": "ceph_vg1"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         }
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     ],
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     "2": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "devices": [
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "/dev/loop5"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             ],
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_name": "ceph_lv2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_size": "21470642176",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "name": "ceph_lv2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "tags": {
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.cluster_name": "ceph",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.crush_device_class": "",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.encrypted": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.objectstore": "bluestore",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osd_id": "2",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.vdo": "0",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:                 "ceph.with_tpm": "0"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             },
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "type": "block",
Feb 28 10:19:15 compute-0 strange_pascal[325621]:             "vg_name": "ceph_vg2"
Feb 28 10:19:15 compute-0 strange_pascal[325621]:         }
Feb 28 10:19:15 compute-0 strange_pascal[325621]:     ]
Feb 28 10:19:15 compute-0 strange_pascal[325621]: }
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.204 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:a0:a7 10.100.0.8'], port_security=['fa:16:3e:b7:a0:a7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e8c0bffa-2672-4f45-8646-3a41b8e780a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99091813-133c-46b0-a8d3-eeb21884f48a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c568ca6a09a48c1a1197267be4d4583', 'neutron:revision_number': '6', 'neutron:security_group_ids': '11fcf035-f78f-4ebd-8bac-b3303ff31262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd6ca39e-5439-4056-b277-bd0e3b6ca6ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6912d1ef-9679-45b5-ae80-a91f63ecce55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.206 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6912d1ef-9679-45b5-ae80-a91f63ecce55 in datapath 99091813-133c-46b0-a8d3-eeb21884f48a unbound from our chassis
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.208 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99091813-133c-46b0-a8d3-eeb21884f48a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.211 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de1d243d-a798-4b1a-9b8e-45a0e59e1b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.212 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a namespace which is not needed anymore
Feb 28 10:19:15 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.224 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Successfully updated port: c058dd2c-3349-4364-8659-31bb8b2509bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:19:15 compute-0 systemd-machined[209480]: Machine qemu-116-instance-0000005d terminated.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.240 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.240 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.241 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:19:15 compute-0 systemd[1]: libpod-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 podman[325605]: 2026-02-28 10:19:15.245047416 +0000 UTC m=+0.470302396 container died 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0135a93154ca7ad5b56a41e694081cf646ef9400ac4ea9eed10ac9747f1dd919-merged.mount: Deactivated successfully.
Feb 28 10:19:15 compute-0 podman[325605]: 2026-02-28 10:19:15.308092853 +0000 UTC m=+0.533347813 container remove 06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:19:15 compute-0 systemd[1]: libpod-conmon-06a0766103473cc56263279b03562a849807cbc56d7ae256db79d744be01d87a.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : haproxy version is 2.8.14-c23fe91
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [NOTICE]   (325004) : path to executable is /usr/sbin/haproxy
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : Exiting Master process...
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : Exiting Master process...
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [ALERT]    (325004) : Current worker (325011) exited with code 143 (Terminated)
Feb 28 10:19:15 compute-0 neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a[324989]: [WARNING]  (325004) : All workers exited. Exiting... (0)
Feb 28 10:19:15 compute-0 systemd[1]: libpod-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 podman[325666]: 2026-02-28 10:19:15.362956346 +0000 UTC m=+0.046470715 container died 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.374 243456 INFO nova.virt.libvirt.driver [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Instance destroyed successfully.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.375 243456 DEBUG nova.objects.instance [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lazy-loading 'resources' on Instance uuid e8c0bffa-2672-4f45-8646-3a41b8e780a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:15 compute-0 sudo[325528]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-020af8fd3586118fc9aa7bfe4f2379c7e7600d6ebd5072313956ddeff5e61ed4-merged.mount: Deactivated successfully.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.392 243456 DEBUG nova.virt.libvirt.vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:18:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1132178903',display_name='tempest-TestNetworkAdvancedServerOps-server-1132178903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1132178903',id=93,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAaE+RUFgVeYlRX30KMGxnwEz+7reAn227i7sOjhZIZoASG2YdNYnV+Hsj1MiXJ39Nt5WWX427Y5pNZnlCuOzU5m4otlC0FTaPuRwmOXLsqunpWkEbv7T/dSbfrRT0dp7Q==',key_name='tempest-TestNetworkAdvancedServerOps-156508609',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c568ca6a09a48c1a1197267be4d4583',ramdisk_id='',reservation_id='r-g07k9xeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1987172309',owner_user_name='tempest-TestNetworkAdvancedServerOps-1987172309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:11Z,user_data=None,user_id='99530c323188499c8d0e75b8edf1f77b',uuid=e8c0bffa-2672-4f45-8646-3a41b8e780a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.393 243456 DEBUG nova.network.os_vif_util [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converting VIF {"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.394 243456 DEBUG nova.network.os_vif_util [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.394 243456 DEBUG os_vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:19:15 compute-0 ceph-mon[76304]: pgmap v1606: 305 pgs: 305 active+clean; 314 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 26 KiB/s wr, 16 op/s
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.399 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.400 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6912d1ef-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.401 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.405 243456 INFO os_vif [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:a0:a7,bridge_name='br-int',has_traffic_filtering=True,id=6912d1ef-9679-45b5-ae80-a91f63ecce55,network=Network(99091813-133c-46b0-a8d3-eeb21884f48a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6912d1ef-96')
Feb 28 10:19:15 compute-0 podman[325666]: 2026-02-28 10:19:15.407909948 +0000 UTC m=+0.091424317 container cleanup 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:15 compute-0 systemd[1]: libpod-conmon-2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 sudo[325701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:19:15 compute-0 sudo[325701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:15 compute-0 sudo[325701]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:15 compute-0 podman[325734]: 2026-02-28 10:19:15.484381317 +0000 UTC m=+0.050024716 container remove 2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.489 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[be8db31d-e317-4d4b-a265-760ecb31aabc]: (4, ('Sat Feb 28 10:19:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f)\n2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f\nSat Feb 28 10:19:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a (2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f)\n2fb8fe8b3eaee557eed31d5dd6fc5a88b0b111b686e3e08f904eeba6b009399f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.492 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76fed827-00b8-43b6-b570-169b92d29580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.495 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99091813-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 kernel: tap99091813-10: left promiscuous mode
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.507 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e065786-3306-4f64-ae11-4c1210662957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 sudo[325756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:19:15 compute-0 sudo[325756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.519 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bea1e8-35a5-4b0a-a44d-6dcf0cf02bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.521 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58522183-3d51-4a7a-a6d4-b5493e552620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1374249-0f0c-4601-8873-3c62c338dcca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545823, 'reachable_time': 41715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325783, 'error': None, 'target': 'ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.545 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99091813-133c-46b0-a8d3-eeb21884f48a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:19:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:15.545 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9079921d-ebd7-4f7e-8984-626cff19ca29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d99091813\x2d133c\x2d46b0\x2da8d3\x2deeb21884f48a.mount: Deactivated successfully.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.675 243456 INFO nova.virt.libvirt.driver [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deleting instance files /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8_del
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.678 243456 INFO nova.virt.libvirt.driver [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deletion of /var/lib/nova/instances/e8c0bffa-2672-4f45-8646-3a41b8e780a8_del complete
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 INFO nova.compute.manager [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 DEBUG oslo.service.loopingcall [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.739 243456 DEBUG nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:19:15 compute-0 nova_compute[243452]: 2026-02-28 10:19:15.740 243456 DEBUG nova.network.neutron [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.790666747 +0000 UTC m=+0.050173251 container create 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:19:15 compute-0 systemd[1]: Started libpod-conmon-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope.
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.76725839 +0000 UTC m=+0.026764894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.89886514 +0000 UTC m=+0.158371624 container init 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.907091805 +0000 UTC m=+0.166598309 container start 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.911400138 +0000 UTC m=+0.170906622 container attach 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:19:15 compute-0 confident_robinson[325812]: 167 167
Feb 28 10:19:15 compute-0 systemd[1]: libpod-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope: Deactivated successfully.
Feb 28 10:19:15 compute-0 conmon[325812]: conmon 6691f41e3c1e128c7b40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope/container/memory.events
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.916558735 +0000 UTC m=+0.176065239 container died 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:19:15 compute-0 podman[325796]: 2026-02-28 10:19:15.962136704 +0000 UTC m=+0.221643208 container remove 6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_robinson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:19:15 compute-0 systemd[1]: libpod-conmon-6691f41e3c1e128c7b40397f618e2ddba736dab90e803bc7a60f716e5ff0e756.scope: Deactivated successfully.
Feb 28 10:19:16 compute-0 podman[325838]: 2026-02-28 10:19:16.163279907 +0000 UTC m=+0.057122499 container create 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:19:16 compute-0 systemd[1]: Started libpod-conmon-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope.
Feb 28 10:19:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:16 compute-0 podman[325838]: 2026-02-28 10:19:16.142245817 +0000 UTC m=+0.036088409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:16 compute-0 podman[325838]: 2026-02-28 10:19:16.272373946 +0000 UTC m=+0.166216618 container init 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:19:16 compute-0 podman[325838]: 2026-02-28 10:19:16.278355367 +0000 UTC m=+0.172197969 container start 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:19:16 compute-0 podman[325838]: 2026-02-28 10:19:16.282973018 +0000 UTC m=+0.176815630 container attach 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ffa0670bcd910fc33d82608cba6b5395e6dfb25fb41af8bafd134b6043389a6-merged.mount: Deactivated successfully.
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.313 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:19:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 313 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.371 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing instance network info cache due to event network-changed-6912d1ef-9679-45b5-ae80-a91f63ecce55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:16 compute-0 nova_compute[243452]: 2026-02-28 10:19:16.372 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Refreshing network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:19:16 compute-0 lvm[325930]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:19:16 compute-0 lvm[325930]: VG ceph_vg0 finished
Feb 28 10:19:16 compute-0 lvm[325932]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:19:16 compute-0 lvm[325932]: VG ceph_vg1 finished
Feb 28 10:19:17 compute-0 lvm[325933]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:19:17 compute-0 lvm[325933]: VG ceph_vg2 finished
Feb 28 10:19:17 compute-0 funny_swirles[325855]: {}
Feb 28 10:19:17 compute-0 systemd[1]: libpod-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Deactivated successfully.
Feb 28 10:19:17 compute-0 systemd[1]: libpod-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Consumed 1.250s CPU time.
Feb 28 10:19:17 compute-0 podman[325838]: 2026-02-28 10:19:17.17170787 +0000 UTC m=+1.065550472 container died 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9435238adb36c0e8b230e4e4f8f69f203c54790df78ed580d420abf4aff516a0-merged.mount: Deactivated successfully.
Feb 28 10:19:17 compute-0 podman[325838]: 2026-02-28 10:19:17.22610421 +0000 UTC m=+1.119946802 container remove 2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:19:17 compute-0 systemd[1]: libpod-conmon-2ddae9da1512a1d74e7a45e5eb1a47eab9044b3dbed407d0273e44d222f05eac.scope: Deactivated successfully.
Feb 28 10:19:17 compute-0 sudo[325756]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:19:17 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:19:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:17.310 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:17.312 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:19:17 compute-0 nova_compute[243452]: 2026-02-28 10:19:17.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:17 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:17 compute-0 sudo[325947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:19:17 compute-0 sudo[325947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:19:17 compute-0 sudo[325947]: pam_unix(sudo:session): session closed for user root
Feb 28 10:19:17 compute-0 ceph-mon[76304]: pgmap v1607: 305 pgs: 305 active+clean; 313 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Feb 28 10:19:17 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:17 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.011 243456 DEBUG nova.network.neutron [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.035 243456 INFO nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Took 2.30 seconds to deallocate network for instance.
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.084 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.085 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.117 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.140 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.141 243456 DEBUG nova.compute.provider_tree [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.166 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.201 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.221 243456 DEBUG nova.compute.manager [req-0cad93cb-b060-437b-8132-a1288770327f req-0469494b-d714-4be4-a1da-b9445a967275 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-deleted-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.282 243456 DEBUG oslo_concurrency.processutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 309 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.480 243456 DEBUG nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.482 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG oslo_concurrency.lockutils [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.483 243456 DEBUG nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.484 243456 WARNING nova.compute.manager [req-d8e5620c-a70d-4090-ba60-d74635a792ed req-7268491e-3e03-4213-bf66-7177c4894390 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received unexpected event network-vif-plugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with vm_state deleted and task_state None.
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.485 243456 DEBUG nova.network.neutron [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.503 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.504 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance network_info: |[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.508 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start _get_guest_xml network_info=[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.515 243456 WARNING nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.520 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.521 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.525 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.527 243456 DEBUG nova.virt.libvirt.host [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.527 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.528 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.529 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.530 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.532 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.533 243456 DEBUG nova.virt.hardware [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.538 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452138698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.814 243456 DEBUG oslo_concurrency.processutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.820 243456 DEBUG nova.compute.provider_tree [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.839 243456 DEBUG nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.872 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.895 243456 INFO nova.scheduler.client.report [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Deleted allocations for instance e8c0bffa-2672-4f45-8646-3a41b8e780a8
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:18 compute-0 nova_compute[243452]: 2026-02-28 10:19:18.963 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.053 243456 DEBUG oslo_concurrency.lockutils [None req-0eb5f906-bb19-42fc-bf95-fd43fbb40bdb 99530c323188499c8d0e75b8edf1f77b 4c568ca6a09a48c1a1197267be4d4583 - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671386092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.166 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.195 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.201 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.291 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updated VIF entry in instance network info cache for port 6912d1ef-9679-45b5-ae80-a91f63ecce55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.292 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Updating instance_info_cache with network_info: [{"id": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "address": "fa:16:3e:b7:a0:a7", "network": {"id": "99091813-133c-46b0-a8d3-eeb21884f48a", "bridge": "br-int", "label": "tempest-network-smoke--923256228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c568ca6a09a48c1a1197267be4d4583", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6912d1ef-96", "ovs_interfaceid": "6912d1ef-9679-45b5-ae80-a91f63ecce55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.309 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e8c0bffa-2672-4f45-8646-3a41b8e780a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.310 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-changed-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.310 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Refreshing instance network info cache due to event network-changed-c058dd2c-3349-4364-8659-31bb8b2509bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.311 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Refreshing network info cache for port c058dd2c-3349-4364-8659-31bb8b2509bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:19:19 compute-0 ceph-mon[76304]: pgmap v1608: 305 pgs: 305 active+clean; 309 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:19:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/452138698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3671386092' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007217156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.766 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.768 243456 DEBUG nova.virt.libvirt.vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-tempest.common.compute-instance-303074087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTest
JSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:12Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.769 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.771 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.773 243456 DEBUG nova.objects.instance [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.790 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <uuid>6a659835-f144-4e34-87ec-3b37ff81b0d1</uuid>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <name>instance-0000005f</name>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:name>tempest-tempest.common.compute-instance-303074087</nova:name>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:18</nova:creationTime>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <nova:port uuid="c058dd2c-3349-4364-8659-31bb8b2509bb">
Feb 28 10:19:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="serial">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="uuid">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk">
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config">
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:5a:09:29"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <target dev="tapc058dd2c-33"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log" append="off"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:19:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:19:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:19:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:19:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:19:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.791 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Preparing to wait for external event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.791 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.792 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.792 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.794 243456 DEBUG nova.virt.libvirt.vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-tempest.common.compute-instance-303074087',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerA
ctionsTestJSON-152155156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:12Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.794 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.796 243456 DEBUG nova.network.os_vif_util [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.796 243456 DEBUG os_vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.798 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.799 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.805 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc058dd2c-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.805 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc058dd2c-33, col_values=(('external_ids', {'iface-id': 'c058dd2c-3349-4364-8659-31bb8b2509bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:09:29', 'vm-uuid': '6a659835-f144-4e34-87ec-3b37ff81b0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:19 compute-0 NetworkManager[49805]: <info>  [1772273959.8096] manager: (tapc058dd2c-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:19 compute-0 nova_compute[243452]: 2026-02-28 10:19:19.820 243456 INFO os_vif [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.050 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.050 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.051 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No VIF found with MAC fa:16:3e:5a:09:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.051 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Using config drive
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.080 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 281 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.564 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating config drive at /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.568 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp651650hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3007217156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.711 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp651650hv" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.751 243456 DEBUG nova.storage.rbd_utils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:20 compute-0 nova_compute[243452]: 2026-02-28 10:19:20.756 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.294 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updated VIF entry in instance network info cache for port c058dd2c-3349-4364-8659-31bb8b2509bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.295 243456 DEBUG nova.network.neutron [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.314 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.323 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6a659835-f144-4e34-87ec-3b37ff81b0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.324 243456 DEBUG oslo_concurrency.lockutils [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e8c0bffa-2672-4f45-8646-3a41b8e780a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.325 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] No waiting events found dispatching network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.325 243456 DEBUG nova.compute.manager [req-7fb04b5f-f9f9-43ed-9268-c0245a16c951 req-3546e76a-c148-45ce-83a3-b2f3e9a1b200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Received event network-vif-unplugged-6912d1ef-9679-45b5-ae80-a91f63ecce55 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.503 243456 DEBUG oslo_concurrency.processutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.503 243456 INFO nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting local config drive /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config because it was imported into RBD.
Feb 28 10:19:21 compute-0 kernel: tapc058dd2c-33: entered promiscuous mode
Feb 28 10:19:21 compute-0 NetworkManager[49805]: <info>  [1772273961.5541] manager: (tapc058dd2c-33): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Feb 28 10:19:21 compute-0 ovn_controller[146846]: 2026-02-28T10:19:21Z|00938|binding|INFO|Claiming lport c058dd2c-3349-4364-8659-31bb8b2509bb for this chassis.
Feb 28 10:19:21 compute-0 ovn_controller[146846]: 2026-02-28T10:19:21Z|00939|binding|INFO|c058dd2c-3349-4364-8659-31bb8b2509bb: Claiming fa:16:3e:5a:09:29 10.100.0.13
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:21 compute-0 ovn_controller[146846]: 2026-02-28T10:19:21Z|00940|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb up in Southbound
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.565 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.568 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:19:21 compute-0 ovn_controller[146846]: 2026-02-28T10:19:21Z|00941|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb ovn-installed in OVS
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.571 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:19:21 compute-0 systemd-machined[209480]: New machine qemu-117-instance-0000005f.
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.592 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4bdad-560f-49e2-b884-580eae1e5d10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005f.
Feb 28 10:19:21 compute-0 systemd-udevd[326132]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:19:21 compute-0 NetworkManager[49805]: <info>  [1772273961.6142] device (tapc058dd2c-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:21 compute-0 NetworkManager[49805]: <info>  [1772273961.6146] device (tapc058dd2c-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.634 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9fed39-16e9-4c4a-acb2-70eede857271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.637 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[32368a87-16cf-4d9b-8dc4-3d9dc2265fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.667 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf744da-6f10-48e0-a4f6-c7fa2ff3034e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.684 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2ba85d-0606-4682-9900-576ccfd77bec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326144, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 ceph-mon[76304]: pgmap v1609: 305 pgs: 305 active+clean; 281 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.699 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2648d28-6b90-4d8a-80c8-bbe0c2ecdebb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.701 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.703 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.705 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.705 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.706 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:21.707 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:21 compute-0 ovn_controller[146846]: 2026-02-28T10:19:21Z|00942|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:19:21 compute-0 nova_compute[243452]: 2026-02-28 10:19:21.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.034 243456 DEBUG nova.compute.manager [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG oslo_concurrency.lockutils [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.035 243456 DEBUG nova.compute.manager [req-e4880c6f-f07e-4f3c-87ad-c02def4f74d9 req-326e4468-f9f6-42ed-89cc-58b5657350a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Processing event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:19:22 compute-0 ceph-mon[76304]: pgmap v1610: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.872 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.873 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.8731813, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.873 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Started (Lifecycle Event)
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.876 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.879 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance spawned successfully.
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.880 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.903 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.904 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.905 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.905 243456 DEBUG nova.virt.libvirt.driver [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.909 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.873347, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.945 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Paused (Lifecycle Event)
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.970 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.975 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273962.8768837, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Resumed (Lifecycle Event)
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.978 243456 INFO nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 9.99 seconds to spawn the instance on the hypervisor.
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.979 243456 DEBUG nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:22 compute-0 nova_compute[243452]: 2026-02-28 10:19:22.999 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.000 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.016 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.019 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.035 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.060 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.079 243456 INFO nova.compute.manager [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 11.12 seconds to build instance.
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.103 243456 DEBUG oslo_concurrency.lockutils [None req-ab5cbddd-7594-4fac-a9b7-709f76b10747 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.117 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.118 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.124 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.125 243456 INFO nova.compute.claims [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.288 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493329366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.891 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.897 243456 DEBUG nova.compute.provider_tree [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2493329366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.928 243456 DEBUG nova.scheduler.client.report [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.961 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:23 compute-0 nova_compute[243452]: 2026-02-28 10:19:23.962 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.015 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.015 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.036 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.057 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.158 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.161 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.161 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating image(s)
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.200 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.238 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.270 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.275 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.314 243456 DEBUG nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.315 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.316 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.316 243456 DEBUG oslo_concurrency.lockutils [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.317 243456 DEBUG nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.317 243456 WARNING nova.compute.manager [req-54fcaabe-560f-4aa2-bd17-d4ec92647533 req-aadea5ae-6da8-4b19-93cc-02c7fb3620c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state None.
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.357 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.358 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.358 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.359 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.384 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.388 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.419 243456 DEBUG nova.policy [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d03850a765742908401b28b9f983e96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3882eded03594958a2e5d10832a6c3a9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.646 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.727 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] resizing rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.817 243456 DEBUG nova.objects.instance [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.835 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.835 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Ensure instance console log exists: /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.836 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.836 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:24 compute-0 nova_compute[243452]: 2026-02-28 10:19:24.837 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:24 compute-0 ceph-mon[76304]: pgmap v1611: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Feb 28 10:19:25 compute-0 nova_compute[243452]: 2026-02-28 10:19:25.290 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Successfully created port: 2f9562b0-54ce-4c24-9341-33a674532bf0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 310 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.336 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.337 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.378 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Successfully updated port: 2f9562b0-54ce-4c24-9341-33a674532bf0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.393 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.394 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.395 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.566 243456 DEBUG nova.compute.manager [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-changed-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.566 243456 DEBUG nova.compute.manager [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Refreshing instance network info cache due to event network-changed-2f9562b0-54ce-4c24-9341-33a674532bf0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.567 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.660 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:19:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504765288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.872 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.936 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.937 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.941 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:19:26 compute-0 nova_compute[243452]: 2026-02-28 10:19:26.941 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.059 243456 INFO nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Rebuilding instance
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.155 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.156 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3373MB free_disk=59.91017228830606GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.156 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.157 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 690896df-6307-469c-9685-325a61a62b88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 6a659835-f144-4e34-87ec-3b37ff81b0d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.237 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ea5efc55-0a5e-435e-9805-9a9726c17eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.238 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.238 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.328 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:27 compute-0 ceph-mon[76304]: pgmap v1612: 305 pgs: 305 active+clean; 310 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Feb 28 10:19:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2504765288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614799489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.852 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.856 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.876 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.890 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.912 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.914 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.914 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.977 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:27 compute-0 nova_compute[243452]: 2026-02-28 10:19:27.991 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.020 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.041 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.062 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.067 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.311 243456 DEBUG nova.network.neutron [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 327 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance network_info: |[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.346 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.347 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Refreshing network info cache for port 2f9562b0-54ce-4c24-9341-33a674532bf0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.349 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start _get_guest_xml network_info=[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/614799489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.488 243456 WARNING nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.493 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.493 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.497 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.497 243456 DEBUG nova.virt.libvirt.host [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.498 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.499 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.500 243456 DEBUG nova.virt.hardware [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.503 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.915 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.916 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:19:28 compute-0 nova_compute[243452]: 2026-02-28 10:19:28.935 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:19:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289625373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.077 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:19:29
Feb 28 10:19:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:19:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:19:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Feb 28 10:19:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.106 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.110 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.143 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.144 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.144 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:29 compute-0 ceph-mon[76304]: pgmap v1613: 305 pgs: 305 active+clean; 327 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Feb 28 10:19:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1289625373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843012645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.671 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.673 243456 DEBUG nova.virt.libvirt.vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-21019
36935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:24Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.674 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.675 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.677 243456 DEBUG nova.objects.instance [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.697 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <uuid>ea5efc55-0a5e-435e-9805-9a9726c17eda</uuid>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <name>instance-00000060</name>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSON-server-1998552864</nova:name>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:28</nova:creationTime>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <nova:port uuid="2f9562b0-54ce-4c24-9341-33a674532bf0">
Feb 28 10:19:29 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <system>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="serial">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="uuid">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </system>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <os>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </os>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <features>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </features>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk">
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config">
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:db:07:b7"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <target dev="tap2f9562b0-54"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log" append="off"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <video>
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </video>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:19:29 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:19:29 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:19:29 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:19:29 compute-0 nova_compute[243452]: </domain>
Feb 28 10:19:29 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.699 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Preparing to wait for external event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.700 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.700 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.701 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.702 243456 DEBUG nova.virt.libvirt.vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:24Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.703 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.705 243456 DEBUG nova.network.os_vif_util [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.706 243456 DEBUG os_vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.708 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.709 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f9562b0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f9562b0-54, col_values=(('external_ids', {'iface-id': '2f9562b0-54ce-4c24-9341-33a674532bf0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:07:b7', 'vm-uuid': 'ea5efc55-0a5e-435e-9805-9a9726c17eda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:29 compute-0 NetworkManager[49805]: <info>  [1772273969.7183] manager: (tap2f9562b0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.724 243456 INFO os_vif [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54')
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.782 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.783 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.783 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:db:07:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.784 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Using config drive
Feb 28 10:19:29 compute-0 nova_compute[243452]: 2026-02-28 10:19:29.815 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.210 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating config drive at /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.218 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5t475r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.362 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpay5t475r" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.396 243456 DEBUG nova.storage.rbd_utils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.402 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2843012645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.433 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273955.3706656, e8c0bffa-2672-4f45-8646-3a41b8e780a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.434 243456 INFO nova.compute.manager [-] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] VM Stopped (Lifecycle Event)
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.457 243456 DEBUG nova.compute.manager [None req-1a644ef3-9a80-4edd-a691-d9691dd7f74a - - - - - -] [instance: e8c0bffa-2672-4f45-8646-3a41b8e780a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.529 243456 DEBUG oslo_concurrency.processutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.530 243456 INFO nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting local config drive /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config because it was imported into RBD.
Feb 28 10:19:30 compute-0 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 10:19:30 compute-0 NetworkManager[49805]: <info>  [1772273970.5836] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:30 compute-0 ovn_controller[146846]: 2026-02-28T10:19:30Z|00943|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 10:19:30 compute-0 ovn_controller[146846]: 2026-02-28T10:19:30Z|00944|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 10:19:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.593 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:19:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.594 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:19:30 compute-0 ovn_controller[146846]: 2026-02-28T10:19:30Z|00945|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 10:19:30 compute-0 ovn_controller[146846]: 2026-02-28T10:19:30Z|00946|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 up in Southbound
Feb 28 10:19:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:30.595 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91b1988a-c2c9-4820-b660-1ac1b19f8852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.595 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.601 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:30 compute-0 systemd-udevd[326554]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:19:30 compute-0 systemd-machined[209480]: New machine qemu-118-instance-00000060.
Feb 28 10:19:30 compute-0 NetworkManager[49805]: <info>  [1772273970.6278] device (tap2f9562b0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:30 compute-0 NetworkManager[49805]: <info>  [1772273970.6289] device (tap2f9562b0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:30 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:19:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.810 243456 DEBUG nova.compute.manager [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.812 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.812 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.813 243456 DEBUG oslo_concurrency.lockutils [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.813 243456 DEBUG nova.compute.manager [req-453d831c-aa45-400f-9c55-c4fea7840c5f req-c14c9960-453d-41dd-bd8b-736e55963435 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Processing event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.819 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updated VIF entry in instance network info cache for port 2f9562b0-54ce-4c24-9341-33a674532bf0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.820 243456 DEBUG nova.network.neutron [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:30 compute-0 nova_compute[243452]: 2026-02-28 10:19:30.840 243456 DEBUG oslo_concurrency.lockutils [req-16fea14e-b7ec-4299-825f-fb31fbf15512 req-8d69dc01-aa90-4642-b6f5-438cfcac7678 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.070 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.071 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0691988, ea5efc55-0a5e-435e-9805-9a9726c17eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.072 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Started (Lifecycle Event)
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.075 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.079 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance spawned successfully.
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.079 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.094 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.101 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.105 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.106 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.106 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.107 243456 DEBUG nova.virt.libvirt.driver [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.133 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0695293, ea5efc55-0a5e-435e-9805-9a9726c17eda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.134 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Paused (Lifecycle Event)
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.169 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.172 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273971.0746179, ea5efc55-0a5e-435e-9805-9a9726c17eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.173 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Resumed (Lifecycle Event)
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.199 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.201 243456 INFO nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 7.04 seconds to spawn the instance on the hypervisor.
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.202 243456 DEBUG nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.204 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.244 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.265 243456 INFO nova.compute.manager [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 8.16 seconds to build instance.
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.280 243456 DEBUG oslo_concurrency.lockutils [None req-16762382-ba66-4265-8a02-029caa73bf20 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.379 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.402 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.403 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.404 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.405 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:19:31 compute-0 ceph-mon[76304]: pgmap v1614: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 28 10:19:31 compute-0 nova_compute[243452]: 2026-02-28 10:19:31.800 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.184 243456 INFO nova.compute.manager [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Rescuing
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.185 243456 DEBUG nova.network.neutron [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:19:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.899 243456 DEBUG nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.900 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.900 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.901 243456 DEBUG oslo_concurrency.lockutils [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.901 243456 DEBUG nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:32 compute-0 nova_compute[243452]: 2026-02-28 10:19:32.902 243456 WARNING nova.compute.manager [req-5a329331-ed6a-400e-b41e-4c779a2e8e1f req-d1250509-66f1-450b-bb08-88100ebde577 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.
Feb 28 10:19:33 compute-0 nova_compute[243452]: 2026-02-28 10:19:33.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:33 compute-0 ceph-mon[76304]: pgmap v1615: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:19:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:33 compute-0 nova_compute[243452]: 2026-02-28 10:19:33.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:34 compute-0 nova_compute[243452]: 2026-02-28 10:19:34.309 243456 DEBUG nova.network.neutron [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Feb 28 10:19:34 compute-0 nova_compute[243452]: 2026-02-28 10:19:34.333 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-ea5efc55-0a5e-435e-9805-9a9726c17eda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:34 compute-0 nova_compute[243452]: 2026-02-28 10:19:34.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:34 compute-0 nova_compute[243452]: 2026-02-28 10:19:34.849 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:19:35 compute-0 ceph-mon[76304]: pgmap v1616: 305 pgs: 305 active+clean; 327 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Feb 28 10:19:35 compute-0 ovn_controller[146846]: 2026-02-28T10:19:35Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:09:29 10.100.0.13
Feb 28 10:19:35 compute-0 ovn_controller[146846]: 2026-02-28T10:19:35Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:09:29 10.100.0.13
Feb 28 10:19:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 350 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Feb 28 10:19:37 compute-0 ceph-mon[76304]: pgmap v1617: 305 pgs: 305 active+clean; 350 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Feb 28 10:19:38 compute-0 nova_compute[243452]: 2026-02-28 10:19:38.193 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:19:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 171 op/s
Feb 28 10:19:38 compute-0 ceph-mon[76304]: pgmap v1618: 305 pgs: 305 active+clean; 358 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 171 op/s
Feb 28 10:19:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:38 compute-0 nova_compute[243452]: 2026-02-28 10:19:38.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:39 compute-0 nova_compute[243452]: 2026-02-28 10:19:39.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:39 compute-0 nova_compute[243452]: 2026-02-28 10:19:39.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 28 10:19:40 compute-0 kernel: tapc058dd2c-33 (unregistering): left promiscuous mode
Feb 28 10:19:40 compute-0 NetworkManager[49805]: <info>  [1772273980.4779] device (tapc058dd2c-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 ovn_controller[146846]: 2026-02-28T10:19:40Z|00947|binding|INFO|Releasing lport c058dd2c-3349-4364-8659-31bb8b2509bb from this chassis (sb_readonly=0)
Feb 28 10:19:40 compute-0 ovn_controller[146846]: 2026-02-28T10:19:40Z|00948|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb down in Southbound
Feb 28 10:19:40 compute-0 ovn_controller[146846]: 2026-02-28T10:19:40Z|00949|binding|INFO|Removing iface tapc058dd2c-33 ovn-installed in OVS
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.493 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.494 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.496 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.515 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1672a2de-9976-4617-ab69-6e978f310f03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 28 10:19:40 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Consumed 13.156s CPU time.
Feb 28 10:19:40 compute-0 systemd-machined[209480]: Machine qemu-117-instance-0000005f terminated.
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.560 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8aab9deb-58f3-47e8-9eb6-7d83516018cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.565 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6560c7-e342-447d-8e8b-51d74766c103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.598 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a97ddc9-d17b-41e5-ad23-436b5964d314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.618 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26998d13-7d3e-43f6-97a0-0199e3e54cb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326617, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fff6fde7-0039-497d-911b-2261e32b0693]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326618, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326618, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.634 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.639 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.640 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:40.640 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.707 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 nova_compute[243452]: 2026-02-28 10:19:40.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018791045311676103 of space, bias 1.0, pg target 0.5637313593502831 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024930647110856744 of space, bias 1.0, pg target 0.7479194133257023 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.713944090525061e-07 of space, bias 4.0, pg target 0.0009256732908630073 quantized to 16 (current 16)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:19:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.207 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance shutdown successfully after 13 seconds.
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.213 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.218 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.220 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:26Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.221 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.222 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.223 243456 DEBUG os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc058dd2c-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.232 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.234 243456 INFO os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')
Feb 28 10:19:41 compute-0 ceph-mon[76304]: pgmap v1619: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.507 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting instance files /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.509 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deletion of /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del complete
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.686 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.687 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating image(s)
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.735 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.776 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.807 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.814 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.907 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.907 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.908 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.908 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.931 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:41 compute-0 nova_compute[243452]: 2026-02-28 10:19:41.935 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.142 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.212 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] resizing rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.301 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.303 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Ensure instance console log exists: /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.303 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.304 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.304 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.307 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start _get_guest_xml network_info=[{"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.311 243456 WARNING nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.320 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.321 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.324 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.libvirt.host [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.325 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.326 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.327 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.328 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.329 243456 DEBUG nova.virt.hardware [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.330 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.347 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469063387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.905 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.946 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:42 compute-0 nova_compute[243452]: 2026-02-28 10:19:42.953 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:43 compute-0 podman[326858]: 2026-02-28 10:19:43.112816953 +0000 UTC m=+0.054049951 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:19:43 compute-0 podman[326857]: 2026-02-28 10:19:43.166199105 +0000 UTC m=+0.107360701 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
Feb 28 10:19:43 compute-0 ceph-mon[76304]: pgmap v1620: 305 pgs: 305 active+clean; 360 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Feb 28 10:19:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3469063387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470665309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.506 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.510 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:41Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.511 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.512 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.518 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <uuid>6a659835-f144-4e34-87ec-3b37ff81b0d1</uuid>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <name>instance-0000005f</name>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-1668637263</nova:name>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:42</nova:creationTime>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <nova:port uuid="c058dd2c-3349-4364-8659-31bb8b2509bb">
Feb 28 10:19:43 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <system>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="serial">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="uuid">6a659835-f144-4e34-87ec-3b37ff81b0d1</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </system>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <os>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </os>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <features>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </features>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk">
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config">
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:5a:09:29"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <target dev="tapc058dd2c-33"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/console.log" append="off"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <video>
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </video>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:19:43 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:19:43 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:19:43 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:19:43 compute-0 nova_compute[243452]: </domain>
Feb 28 10:19:43 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.519 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Preparing to wait for external event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.520 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.521 243456 DEBUG nova.virt.libvirt.vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='
tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:41Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.521 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.522 243456 DEBUG nova.network.os_vif_util [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.522 243456 DEBUG os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.523 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.524 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc058dd2c-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc058dd2c-33, col_values=(('external_ids', {'iface-id': 'c058dd2c-3349-4364-8659-31bb8b2509bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:09:29', 'vm-uuid': '6a659835-f144-4e34-87ec-3b37ff81b0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:43 compute-0 NetworkManager[49805]: <info>  [1772273983.5294] manager: (tapc058dd2c-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.535 243456 INFO os_vif [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.598 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.599 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.599 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] No VIF found with MAC fa:16:3e:5a:09:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.600 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Using config drive
Feb 28 10:19:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.634 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.665 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.705 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'keypairs' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:43 compute-0 nova_compute[243452]: 2026-02-28 10:19:43.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.082 243456 DEBUG nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.083 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.084 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.084 243456 DEBUG oslo_concurrency.lockutils [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.085 243456 DEBUG nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No event matching network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb in dict_keys([('network-vif-plugged', 'c058dd2c-3349-4364-8659-31bb8b2509bb')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.085 243456 WARNING nova.compute.manager [req-c6291eac-0e3c-41f6-8ac1-4a6ed6cfba67 req-f81d53d4-97bc-4b6e-871a-19239680adc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.190 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Creating config drive at /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.197 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp05kbo8jn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.273 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.274 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.293 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:19:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 362 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 173 op/s
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.339 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp05kbo8jn" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.376 243456 DEBUG nova.storage.rbd_utils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] rbd image 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.380 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/470665309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.464 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.464 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.473 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.473 243456 INFO nova.compute.claims [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.548 243456 DEBUG oslo_concurrency.processutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config 6a659835-f144-4e34-87ec-3b37ff81b0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.549 243456 INFO nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting local config drive /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1/disk.config because it was imported into RBD.
Feb 28 10:19:44 compute-0 NetworkManager[49805]: <info>  [1772273984.6063] manager: (tapc058dd2c-33): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Feb 28 10:19:44 compute-0 kernel: tapc058dd2c-33: entered promiscuous mode
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:44 compute-0 ovn_controller[146846]: 2026-02-28T10:19:44Z|00950|binding|INFO|Claiming lport c058dd2c-3349-4364-8659-31bb8b2509bb for this chassis.
Feb 28 10:19:44 compute-0 ovn_controller[146846]: 2026-02-28T10:19:44Z|00951|binding|INFO|c058dd2c-3349-4364-8659-31bb8b2509bb: Claiming fa:16:3e:5a:09:29 10.100.0.13
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.620 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:44 compute-0 ovn_controller[146846]: 2026-02-28T10:19:44Z|00952|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb ovn-installed in OVS
Feb 28 10:19:44 compute-0 ovn_controller[146846]: 2026-02-28T10:19:44Z|00953|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb up in Southbound
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.622 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.624 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.643 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b080cb8a-0e4a-4740-9172-b81b00c7f862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 systemd-udevd[326995]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:19:44 compute-0 systemd-machined[209480]: New machine qemu-119-instance-0000005f.
Feb 28 10:19:44 compute-0 NetworkManager[49805]: <info>  [1772273984.6575] device (tapc058dd2c-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:44 compute-0 NetworkManager[49805]: <info>  [1772273984.6588] device (tapc058dd2c-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:44 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-0000005f.
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.666 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ed16d963-be18-46a8-a77f-c288066dafbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.670 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[99eb3c29-fe7c-4fa0-8005-82d170685143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.670 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.696 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb9ee8-ba22-4cef-befc-8fdce6173e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49e0d96d-3ff7-46b4-88f4-60f099557746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327006, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[127f7c9a-12e4-41de-954e-e6281e7680de]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327009, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327009, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.727 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:44.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:44 compute-0 nova_compute[243452]: 2026-02-28 10:19:44.901 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:19:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864028965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.236 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.243 243456 DEBUG nova.compute.provider_tree [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.265 243456 DEBUG nova.scheduler.client.report [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.291 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.292 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.327 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 6a659835-f144-4e34-87ec-3b37ff81b0d1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.328 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273985.3272188, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.328 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Started (Lifecycle Event)
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.335 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.336 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.343 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.347 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273985.3283725, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.347 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Paused (Lifecycle Event)
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.351 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.368 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.372 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.375 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.398 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:19:45 compute-0 ceph-mon[76304]: pgmap v1621: 305 pgs: 305 active+clean; 362 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 173 op/s
Feb 28 10:19:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1864028965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.459 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.460 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.461 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating image(s)
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.482 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:19:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:19:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:19:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.514 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.540 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.544 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.596 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.597 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.598 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.598 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.621 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.624 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e4349bd8-727a-4533-9edd-b2d54353a617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.845 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 e4349bd8-727a-4533-9edd-b2d54353a617_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:45 compute-0 nova_compute[243452]: 2026-02-28 10:19:45.922 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.027 243456 DEBUG nova.objects.instance [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.085 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.085 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Ensure instance console log exists: /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.086 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.217 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Processing event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.218 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.219 243456 WARNING nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG oslo_concurrency.lockutils [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.220 243456 DEBUG nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.221 243456 WARNING nova.compute.manager [req-5bea1814-e82f-4fa3-8228-b2a3502c9a57 req-a472a130-83d2-429a-b8b5-e42bdcd1f4ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state active and task_state rebuild_spawning.
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.222 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.225 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273986.2254226, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Resumed (Lifecycle Event)
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.227 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.231 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance spawned successfully.
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.231 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.236 243456 DEBUG nova.policy [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:19:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 357 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.349 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.355 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.355 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.356 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.356 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.357 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.357 243456 DEBUG nova.virt.libvirt.driver [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.362 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:19:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3343616085' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.482 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.498 243456 DEBUG nova.compute.manager [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.798 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.799 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:46 compute-0 nova_compute[243452]: 2026-02-28 10:19:46.800 243456 DEBUG nova.objects.instance [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.085 243456 DEBUG oslo_concurrency.lockutils [None req-38fddbc9-0180-424c-acfe-468d02171fcb b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:47 compute-0 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 10:19:47 compute-0 NetworkManager[49805]: <info>  [1772273987.1574] device (tap2f9562b0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:47 compute-0 ovn_controller[146846]: 2026-02-28T10:19:47Z|00954|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 10:19:47 compute-0 ovn_controller[146846]: 2026-02-28T10:19:47Z|00955|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down in Southbound
Feb 28 10:19:47 compute-0 ovn_controller[146846]: 2026-02-28T10:19:47Z|00956|binding|INFO|Removing iface tap2f9562b0-54 ovn-installed in OVS
Feb 28 10:19:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.189 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.192 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:19:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.194 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:19:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:47.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6ee70-dc44-4bc3-970a-46da387d6fa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:47 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 28 10:19:47 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 12.230s CPU time.
Feb 28 10:19:47 compute-0 systemd-machined[209480]: Machine qemu-118-instance-00000060 terminated.
Feb 28 10:19:47 compute-0 ceph-mon[76304]: pgmap v1622: 305 pgs: 305 active+clean; 357 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.0 MiB/s wr, 224 op/s
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.498 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Successfully created port: 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.971 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance shutdown successfully after 13 seconds.
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.977 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.
Feb 28 10:19:47 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.978 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:47.999 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Attempting rescue
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.000 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.006 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.006 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating image(s)
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.031 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.037 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.076 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.104 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.109 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.198 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.199 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.200 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.200 243456 DEBUG oslo_concurrency.lockutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.232 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.239 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 369 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 200 op/s
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.382 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.383 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.383 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 WARNING nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.384 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG oslo_concurrency.lockutils [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 DEBUG nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.385 243456 WARNING nova.compute.manager [req-2efeae38-389a-4da0-ba10-8ea1fc0deff7 req-4343f810-ba4e-4c59-bafb-9d20fb102884 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state active and task_state rescuing.
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.394 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Successfully updated port: 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.412 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.413 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.413 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.474 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.475 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.487 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.488 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start _get_guest_xml network_info=[{"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.488 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:48 compute-0 ceph-mon[76304]: pgmap v1623: 305 pgs: 305 active+clean; 369 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 200 op/s
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.508 243456 WARNING nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.513 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.514 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.517 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.517 243456 DEBUG nova.virt.libvirt.host [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.518 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.519 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.520 243456 DEBUG nova.virt.hardware [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.520 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.538 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.598 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:19:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.879 243456 DEBUG nova.compute.manager [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.879 243456 DEBUG nova.compute.manager [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.880 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:19:48 compute-0 nova_compute[243452]: 2026-02-28 10:19:48.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045192296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.186 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.187 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3045192296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.525 243456 DEBUG nova.network.neutron [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.543 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.544 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance network_info: |[{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.546 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.546 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.552 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start _get_guest_xml network_info=[{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.560 243456 WARNING nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.570 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.571 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.581 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.582 243456 DEBUG nova.virt.libvirt.host [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.583 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.583 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.584 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.585 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.585 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.586 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.587 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.588 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.588 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.589 243456 DEBUG nova.virt.hardware [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.594 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249347176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.749 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:49 compute-0 nova_compute[243452]: 2026-02-28 10:19:49.751 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1889414919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.182 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.218 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.225 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615206555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.307 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.310 243456 DEBUG nova.virt.libvirt.vif [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:31Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.311 243456 DEBUG nova.network.os_vif_util [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:db:07:b7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.313 243456 DEBUG nova.network.os_vif_util [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.315 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 408 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.5 MiB/s wr, 205 op/s
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.341 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <uuid>ea5efc55-0a5e-435e-9805-9a9726c17eda</uuid>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <name>instance-00000060</name>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSON-server-1998552864</nova:name>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:48</nova:creationTime>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:port uuid="2f9562b0-54ce-4c24-9341-33a674532bf0">
Feb 28 10:19:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <system>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="serial">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="uuid">ea5efc55-0a5e-435e-9805-9a9726c17eda</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </system>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <os>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </os>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <features>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </features>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.rescue">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="vdb" bus="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:db:07:b7"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="tap2f9562b0-54"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/console.log" append="off"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <video>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </video>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:19:50 compute-0 nova_compute[243452]: </domain>
Feb 28 10:19:50 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.353 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.354 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.354 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.355 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.356 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.358 243456 INFO nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Terminating instance
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.360 243456 DEBUG nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.373 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.
Feb 28 10:19:50 compute-0 kernel: tapc058dd2c-33 (unregistering): left promiscuous mode
Feb 28 10:19:50 compute-0 NetworkManager[49805]: <info>  [1772273990.4146] device (tapc058dd2c-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:19:50 compute-0 ovn_controller[146846]: 2026-02-28T10:19:50Z|00957|binding|INFO|Releasing lport c058dd2c-3349-4364-8659-31bb8b2509bb from this chassis (sb_readonly=0)
Feb 28 10:19:50 compute-0 ovn_controller[146846]: 2026-02-28T10:19:50Z|00958|binding|INFO|Setting lport c058dd2c-3349-4364-8659-31bb8b2509bb down in Southbound
Feb 28 10:19:50 compute-0 ovn_controller[146846]: 2026-02-28T10:19:50Z|00959|binding|INFO|Removing iface tapc058dd2c-33 ovn-installed in OVS
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.430 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.433 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:09:29 10.100.0.13'], port_security=['fa:16:3e:5a:09:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6a659835-f144-4e34-87ec-3b37ff81b0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5bc636a0-4322-44ce-8e09-d7c1b49250ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c058dd2c-3349-4364-8659-31bb8b2509bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.434 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c058dd2c-3349-4364-8659-31bb8b2509bb in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.436 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.446 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.447 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.447 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.448 243456 DEBUG nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:db:07:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.448 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Using config drive
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca69a150-e91e-4851-afd6-8485f246ef30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 28 10:19:50 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005f.scope: Consumed 4.940s CPU time.
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.465 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c2a57e-10f5-4f53-8da0-29a6177633e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 systemd-machined[209480]: Machine qemu-119-instance-0000005f terminated.
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.469 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[57e079bb-5b2d-42e1-9789-e0d404526d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.477 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.485 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d2943871-fe5e-498a-a087-e11bedc6049c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.497 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8956db9c-0d98-4735-9f42-fa46478c9f75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543466, 'reachable_time': 27884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327508, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.502 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3249347176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1889414919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1615206555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 ceph-mon[76304]: pgmap v1624: 305 pgs: 305 active+clean; 408 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.5 MiB/s wr, 205 op/s
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeeb413-cf09-48b2-af17-5eae5e552048]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543481, 'tstamp': 543481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327509, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8082b9e7-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543483, 'tstamp': 543483}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327509, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.516 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.525 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.526 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.526 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:50.527 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.529 243456 DEBUG nova.objects.instance [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'keypairs' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.603 243456 INFO nova.virt.libvirt.driver [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Instance destroyed successfully.
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.603 243456 DEBUG nova.objects.instance [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 6a659835-f144-4e34-87ec-3b37ff81b0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.616 243456 DEBUG nova.virt.libvirt.vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:19:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-303074087',display_name='tempest-ServerActionsTestJSON-server-1668637263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-303074087',id=95,image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-ufo6e5u0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='88971623-4808-4102-a4a7-34a287d8b7fe',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio'
,image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:46Z,user_data=None,user_id='b9006c7543a244aa948b78020335223a',uuid=6a659835-f144-4e34-87ec-3b37ff81b0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.616 243456 DEBUG nova.network.os_vif_util [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "c058dd2c-3349-4364-8659-31bb8b2509bb", "address": "fa:16:3e:5a:09:29", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc058dd2c-33", "ovs_interfaceid": "c058dd2c-3349-4364-8659-31bb8b2509bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.617 243456 DEBUG nova.network.os_vif_util [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.618 243456 DEBUG os_vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.619 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc058dd2c-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.623 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.627 243456 INFO os_vif [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:09:29,bridge_name='br-int',has_traffic_filtering=True,id=c058dd2c-3349-4364-8659-31bb8b2509bb,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc058dd2c-33')
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.656 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.657 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.657 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG oslo_concurrency.lockutils [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.658 243456 DEBUG nova.compute.manager [req-9235aef7-8290-41ce-b652-6718c8a127eb req-9dcf121b-8aa6-47ab-a151-a8f5e12a4725 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-unplugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.757 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.758 243456 DEBUG nova.network.neutron [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.778 243456 DEBUG oslo_concurrency.lockutils [req-6f0681d2-d204-4f1e-adae-6df8d2963c8c req-03242dab-70bc-4754-b82a-838012c4d51b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:19:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:19:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072605686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.809 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.810 243456 DEBUG nova.virt.libvirt.vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:45Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.811 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.812 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.813 243456 DEBUG nova.objects.instance [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.825 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <uuid>e4349bd8-727a-4533-9edd-b2d54353a617</uuid>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <name>instance-00000061</name>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454</nova:name>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:49</nova:creationTime>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <nova:port uuid="07b4c83e-2fe2-42c9-a758-c50ddf0919fb">
Feb 28 10:19:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <system>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="serial">e4349bd8-727a-4533-9edd-b2d54353a617</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="uuid">e4349bd8-727a-4533-9edd-b2d54353a617</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </system>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <os>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </os>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <features>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </features>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e4349bd8-727a-4533-9edd-b2d54353a617_disk">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/e4349bd8-727a-4533-9edd-b2d54353a617_disk.config">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:19:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7d:73:58"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <target dev="tap07b4c83e-2f"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/console.log" append="off"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <video>
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </video>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:19:50 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:19:50 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:19:50 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:19:50 compute-0 nova_compute[243452]: </domain>
Feb 28 10:19:50 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.827 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Preparing to wait for external event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.828 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.829 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.830 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.831 243456 DEBUG nova.virt.libvirt.vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:45Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.832 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.833 243456 DEBUG nova.network.os_vif_util [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.835 243456 DEBUG os_vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.837 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.838 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.844 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07b4c83e-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.845 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07b4c83e-2f, col_values=(('external_ids', {'iface-id': '07b4c83e-2fe2-42c9-a758-c50ddf0919fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:73:58', 'vm-uuid': 'e4349bd8-727a-4533-9edd-b2d54353a617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.846 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 NetworkManager[49805]: <info>  [1772273990.8477] manager: (tap07b4c83e-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.856 243456 INFO nova.virt.libvirt.driver [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deleting instance files /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.857 243456 INFO nova.virt.libvirt.driver [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deletion of /var/lib/nova/instances/6a659835-f144-4e34-87ec-3b37ff81b0d1_del complete
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.861 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.863 243456 INFO os_vif [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f')
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.874 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Creating config drive at /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.880 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdf309q3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.934 243456 INFO nova.compute.manager [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.935 243456 DEBUG oslo.service.loopingcall [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.936 243456 DEBUG nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.936 243456 DEBUG nova.network.neutron [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.968 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.969 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.970 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:7d:73:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:19:50 compute-0 nova_compute[243452]: 2026-02-28 10:19:50.971 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Using config drive
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.003 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.034 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdf309q3j" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.065 243456 DEBUG nova.storage.rbd_utils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.070 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.221 243456 DEBUG oslo_concurrency.processutils [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue ea5efc55-0a5e-435e-9805-9a9726c17eda_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.223 243456 INFO nova.virt.libvirt.driver [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting local config drive /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda/disk.config.rescue because it was imported into RBD.
Feb 28 10:19:51 compute-0 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 10:19:51 compute-0 systemd-udevd[327482]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.2673] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00960|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00961|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00962|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.2781] device (tap2f9562b0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.2791] device (tap2f9562b0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00963|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 up in Southbound
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.278 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.280 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.280 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48272dae-5b11-4bd3-9ca3-5e38aa6c6a34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 systemd-machined[209480]: New machine qemu-120-instance-00000060.
Feb 28 10:19:51 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000060.
Feb 28 10:19:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2072605686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.561 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Creating config drive at /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.565 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3nbjb7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.683 243456 DEBUG nova.network.neutron [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.706 243456 INFO nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Took 0.77 seconds to deallocate network for instance.
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.710 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp3nbjb7e" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.736 243456 DEBUG nova.storage.rbd_utils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image e4349bd8-727a-4533-9edd-b2d54353a617_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.742 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config e4349bd8-727a-4533-9edd-b2d54353a617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.784 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.785 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.785 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ea5efc55-0a5e-435e-9805-9a9726c17eda due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.786 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273991.739541, ea5efc55-0a5e-435e-9805-9a9726c17eda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.786 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Resumed (Lifecycle Event)
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.789 243456 DEBUG nova.compute.manager [None req-79a3be52-f32f-4f7b-ab4c-bccc920c4c80 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.794 243456 DEBUG nova.compute.manager [req-70abde67-282d-449b-8523-0065eeafb132 req-899f704f-fbb4-45ba-aeb7-d974df9cc90f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-deleted-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.808 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.813 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.840 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.841 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273991.7396615, ea5efc55-0a5e-435e-9805-9a9726c17eda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Started (Lifecycle Event)
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.866 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.870 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.875 243456 DEBUG oslo_concurrency.processutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config e4349bd8-727a-4533-9edd-b2d54353a617_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.876 243456 INFO nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deleting local config drive /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617/disk.config because it was imported into RBD.
Feb 28 10:19:51 compute-0 kernel: tap07b4c83e-2f: entered promiscuous mode
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.9143] manager: (tap07b4c83e-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00964|binding|INFO|Claiming lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb for this chassis.
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00965|binding|INFO|07b4c83e-2fe2-42c9-a758-c50ddf0919fb: Claiming fa:16:3e:7d:73:58 10.100.0.9
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00966|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb ovn-installed in OVS
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.9252] device (tap07b4c83e-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:19:51 compute-0 NetworkManager[49805]: <info>  [1772273991.9257] device (tap07b4c83e-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:19:51 compute-0 ovn_controller[146846]: 2026-02-28T10:19:51Z|00967|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb up in Southbound
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.926 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.930 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb bound to our chassis
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.931 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df5b5e58-da82-40fd-b4b8-660edea3cecb
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.941 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[722f2d40-8d3a-40ca-ae3c-c22c571f9736]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.943 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf5b5e58-d1 in ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.945 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf5b5e58-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.945 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34480ff1-0a2b-4c16-ac34-fddf3517fc45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.947 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f2c5e1-498f-4683-b587-97965c96a8a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:51 compute-0 systemd-machined[209480]: New machine qemu-121-instance-00000061.
Feb 28 10:19:51 compute-0 nova_compute[243452]: 2026-02-28 10:19:51.950 243456 DEBUG oslo_concurrency.processutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.959 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8dac98-8adc-4fca-89a0-d17e7ccd8915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:51 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000061.
Feb 28 10:19:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:51.977 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f784c2c3-4131-4883-a0e8-4b6f806e5257]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[153ccfe0-4628-4de6-b77e-de01e8e4b8bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 NetworkManager[49805]: <info>  [1772273992.0145] manager: (tapdf5b5e58-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.012 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5833434-51b3-49df-abda-3d2ca292973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.042 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90e0f93f-889d-4ec4-b4bf-c05aa4a83070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[255042b2-5a21-4cbc-99c6-c84b0b170295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 NetworkManager[49805]: <info>  [1772273992.0735] device (tapdf5b5e58-d0): carrier: link connected
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.079 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbb5193-6827-46e7-9014-926131718d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b4b5db-5b9a-4cfa-a8ed-3613092f9adc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf5b5e58-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:ff:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549935, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327786, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.123 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c24dc65-5f79-430e-abbb-6725e328e0e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:ff15'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549935, 'tstamp': 549935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327796, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.139 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e58fbf87-bcf8-44a3-a5c9-0d9666c4be0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf5b5e58-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:ff:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549935, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327797, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.177 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[691a0f9a-23d5-4228-af60-b0dab22faeb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.250 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[411cc363-2b95-42de-940c-95825e5bee48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf5b5e58-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.253 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf5b5e58-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:52 compute-0 kernel: tapdf5b5e58-d0: entered promiscuous mode
Feb 28 10:19:52 compute-0 NetworkManager[49805]: <info>  [1772273992.2560] manager: (tapdf5b5e58-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.260 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf5b5e58-d0, col_values=(('external_ids', {'iface-id': 'ec441ae8-7dea-4a06-ba6a-57dcbc67001f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:19:52 compute-0 ovn_controller[146846]: 2026-02-28T10:19:52Z|00968|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.263 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.264 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e47c5790-31a0-4065-b594-c5dece26d48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.265 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-df5b5e58-da82-40fd-b4b8-660edea3cecb
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/df5b5e58-da82-40fd-b4b8-660edea3cecb.pid.haproxy
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID df5b5e58-da82-40fd-b4b8-660edea3cecb
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:19:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:52.266 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'env', 'PROCESS_TAG=haproxy-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df5b5e58-da82-40fd-b4b8-660edea3cecb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:19:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 439 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.9 MiB/s wr, 228 op/s
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.255 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:52 compute-0 ceph-mon[76304]: pgmap v1625: 305 pgs: 305 active+clean; 439 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.9 MiB/s wr, 228 op/s
Feb 28 10:19:52 compute-0 podman[327830]: 2026-02-28 10:19:52.614286848 +0000 UTC m=+0.050565303 container create 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:19:52 compute-0 systemd[1]: Started libpod-conmon-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope.
Feb 28 10:19:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230983670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:19:52 compute-0 podman[327830]: 2026-02-28 10:19:52.584918501 +0000 UTC m=+0.021196976 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835d604bdbc864696e0b1086b090dae849a757f07b6c5ac638b1186ab31417bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:19:52 compute-0 podman[327830]: 2026-02-28 10:19:52.696641625 +0000 UTC m=+0.132920110 container init 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.696 243456 DEBUG oslo_concurrency.processutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.746s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:52 compute-0 podman[327830]: 2026-02-28 10:19:52.702886963 +0000 UTC m=+0.139165408 container start 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.706 243456 DEBUG nova.compute.provider_tree [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:52 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : New worker (327854) forked
Feb 28 10:19:52 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : Loading success.
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.726 243456 DEBUG nova.scheduler.client.report [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.762 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.792 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.793 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.793 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] No waiting events found dispatching network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.794 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Received unexpected event network-vif-plugged-c058dd2c-3349-4364-8659-31bb8b2509bb for instance with vm_state deleted and task_state None.
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.795 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.796 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state rescued and task_state None.
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.797 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state rescued and task_state None.
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.798 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.799 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Processing event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.800 243456 DEBUG oslo_concurrency.lockutils [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.801 243456 DEBUG nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.801 243456 WARNING nova.compute.manager [req-110655de-222d-4117-a660-9cc5a2144a74 req-495b218f-1430-40c9-8a35-1d6330751a7b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state building and task_state spawning.
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.803 243456 INFO nova.scheduler.client.report [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Deleted allocations for instance 6a659835-f144-4e34-87ec-3b37ff81b0d1
Feb 28 10:19:52 compute-0 nova_compute[243452]: 2026-02-28 10:19:52.879 243456 DEBUG oslo_concurrency.lockutils [None req-5785b580-c5c0-4cc3-8d8e-2ba19119287b b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "6a659835-f144-4e34-87ec-3b37ff81b0d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.143 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.1424856, e4349bd8-727a-4533-9edd-b2d54353a617 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.143 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Started (Lifecycle Event)
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.145 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.148 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.151 243456 INFO nova.virt.libvirt.driver [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance spawned successfully.
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.151 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.165 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.170 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.174 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.176 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.177 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.177 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.178 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.178 243456 DEBUG nova.virt.libvirt.driver [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.186 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.187 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.142695, e4349bd8-727a-4533-9edd-b2d54353a617 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.187 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Paused (Lifecycle Event)
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.218 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772273993.147844, e4349bd8-727a-4533-9edd-b2d54353a617 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.218 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Resumed (Lifecycle Event)
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.242 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.246 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.255 243456 INFO nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 7.80 seconds to spawn the instance on the hypervisor.
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.256 243456 DEBUG nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.267 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.327 243456 INFO nova.compute.manager [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 8.90 seconds to build instance.
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.348 243456 DEBUG oslo_concurrency.lockutils [None req-e9c58ffa-df8f-4fe9-976e-abb9016d5031 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb 28 10:19:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1230983670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb 28 10:19:53 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb 28 10:19:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:53 compute-0 nova_compute[243452]: 2026-02-28 10:19:53.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 437 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Feb 28 10:19:54 compute-0 ceph-mon[76304]: osdmap e244: 3 total, 3 up, 3 in
Feb 28 10:19:54 compute-0 ceph-mon[76304]: pgmap v1627: 305 pgs: 305 active+clean; 437 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Feb 28 10:19:55 compute-0 nova_compute[243452]: 2026-02-28 10:19:55.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.4 MiB/s wr, 327 op/s
Feb 28 10:19:57 compute-0 ceph-mon[76304]: pgmap v1628: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.4 MiB/s wr, 327 op/s
Feb 28 10:19:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.858 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:19:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:57 compute-0 nova_compute[243452]: 2026-02-28 10:19:57.928 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:57 compute-0 nova_compute[243452]: 2026-02-28 10:19:57.929 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:57 compute-0 nova_compute[243452]: 2026-02-28 10:19:57.948 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.013 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.014 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.020 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.021 243456 INFO nova.compute.claims [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.239 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.302 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.302 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.314 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.314 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.315 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.316 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.324 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.325 243456 DEBUG nova.objects.instance [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.0 MiB/s wr, 310 op/s
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.361 243456 DEBUG nova.virt.libvirt.driver [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.414 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:19:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228876417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.810 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.815 243456 DEBUG nova.compute.provider_tree [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.835 243456 DEBUG nova.scheduler.client.report [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.882 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.883 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.890 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.899 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.901 243456 INFO nova.compute.claims [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.942 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.942 243456 DEBUG nova.network.neutron [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.964 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:19:58 compute-0 nova_compute[243452]: 2026-02-28 10:19:58.982 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.071 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.073 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.074 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating image(s)
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.110 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.132 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.155 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.159 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.159 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.179 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.280 243456 DEBUG nova.network.neutron [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.281 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:19:59 compute-0 ceph-mon[76304]: pgmap v1629: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.0 MiB/s wr, 310 op/s
Feb 28 10:19:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/228876417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.437 243456 DEBUG nova.virt.libvirt.imagebackend [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.499 243456 DEBUG nova.virt.libvirt.imagebackend [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/1583781d-ec8c-4060-a2ad-53d52445d23e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.500 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] cloning images/1583781d-ec8c-4060-a2ad-53d52445d23e@snap to None/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.590 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "e8bb49cc096e9adbbb6156bde35e9bc9f9bdd7c5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:19:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239131865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.729 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.742 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] resizing rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.793 243456 DEBUG nova.compute.provider_tree [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.816 243456 DEBUG nova.objects.instance [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'migration_context' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.821 243456 DEBUG nova.scheduler.client.report [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.839 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.840 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Ensure instance console log exists: /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.841 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.841 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.842 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.843 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='8dbf5d7f5ae126eb3d2e8c6f3c6631aa',container_format='bare',created_at=2026-02-28T10:19:52Z,direct_url=<?>,disk_format='raw',id=1583781d-ec8c-4060-a2ad-53d52445d23e,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1415239692',owner='f743c06bc2ae45fda68427b8418baef8',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-28T10:19:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '1583781d-ec8c-4060-a2ad-53d52445d23e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.848 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.849 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.853 243456 WARNING nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.860 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.861 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.864 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.864 243456 DEBUG nova.virt.libvirt.host [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='8dbf5d7f5ae126eb3d2e8c6f3c6631aa',container_format='bare',created_at=2026-02-28T10:19:52Z,direct_url=<?>,disk_format='raw',id=1583781d-ec8c-4060-a2ad-53d52445d23e,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1415239692',owner='f743c06bc2ae45fda68427b8418baef8',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-28T10:19:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.865 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.866 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.867 243456 DEBUG nova.virt.hardware [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.870 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.905 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.905 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.929 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:19:59 compute-0 nova_compute[243452]: 2026-02-28 10:19:59.946 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.058 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.059 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.060 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating image(s)
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.082 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.105 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.127 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.132 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG nova.compute.manager [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG nova.compute.manager [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.198 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.199 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.209 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.209 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.210 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.210 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.236 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.241 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.4 MiB/s wr, 271 op/s
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.359 243456 DEBUG nova.policy [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d03850a765742908401b28b9f983e96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3882eded03594958a2e5d10832a6c3a9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:20:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3239131865' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833492996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.489 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.522 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.547 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.552 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:00 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:20:00 compute-0 NetworkManager[49805]: <info>  [1772274000.6143] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:00 compute-0 ovn_controller[146846]: 2026-02-28T10:20:00Z|00969|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:20:00 compute-0 ovn_controller[146846]: 2026-02-28T10:20:00Z|00970|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:20:00 compute-0 ovn_controller[146846]: 2026-02-28T10:20:00Z|00971|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.642 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '10', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.643 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.644 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.649 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f171366-8fee-4b8d-9de7-81a416a88499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.650 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:00 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:20:00 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005a.scope: Consumed 15.014s CPU time.
Feb 28 10:20:00 compute-0 systemd-machined[209480]: Machine qemu-114-instance-0000005a terminated.
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.676 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] resizing rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:20:00 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : haproxy version is 2.8.14-c23fe91
Feb 28 10:20:00 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [NOTICE]   (324276) : path to executable is /usr/sbin/haproxy
Feb 28 10:20:00 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [WARNING]  (324276) : Exiting Master process...
Feb 28 10:20:00 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [ALERT]    (324276) : Current worker (324278) exited with code 143 (Terminated)
Feb 28 10:20:00 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[324272]: [WARNING]  (324276) : All workers exited. Exiting... (0)
Feb 28 10:20:00 compute-0 systemd[1]: libpod-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3.scope: Deactivated successfully.
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.791 243456 DEBUG nova.objects.instance [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:00 compute-0 podman[328375]: 2026-02-28 10:20:00.796970334 +0000 UTC m=+0.050199222 container died fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.807 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.808 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Ensure instance console log exists: /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.808 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.809 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.809 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-503d7322e1f253bb2450dad4bde36cc78f849c077ea3c594879c4d79193de54d-merged.mount: Deactivated successfully.
Feb 28 10:20:00 compute-0 podman[328375]: 2026-02-28 10:20:00.837264233 +0000 UTC m=+0.090493101 container cleanup fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:20:00 compute-0 systemd[1]: libpod-conmon-fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3.scope: Deactivated successfully.
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:00 compute-0 podman[328423]: 2026-02-28 10:20:00.913058583 +0000 UTC m=+0.045483348 container remove fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.919 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aac1a205-c2c0-4da6-b7ba-0e7cdb0d8f1b]: (4, ('Sat Feb 28 10:20:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3)\nfe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3\nSat Feb 28 10:20:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (fe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3)\nfe3292f6733adde4b2a70c1020c2faa32967541523abd80f3702d6230a2572d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.921 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fc671fa2-6a8d-4be6-ad4b-cc07dab37af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.922 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:00 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:00 compute-0 nova_compute[243452]: 2026-02-28 10:20:00.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.940 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f82c41a1-c7be-4c0b-9e70-eca8b99a2018]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14c757ba-fb79-4768-872d-6d9e96d9f9bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.957 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba88b635-e15c-4e02-a451-bd614af43523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[875fbe60-d3f9-4c2e-8bc7-9f67ad220b00]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543458, 'reachable_time': 38409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328449, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.974 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:20:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:00.974 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdba2b7-9ab7-4337-98f1-2b3c79954671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:20:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439465713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.157 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.159 243456 DEBUG nova.objects.instance [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.177 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <uuid>98504b0a-8c47-4488-b870-9fb9ebfa3e59</uuid>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <name>instance-00000062</name>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:name>instance-depend-image</nova:name>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:19:59</nova:creationTime>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:user uuid="05780ead76294473ae8c8fc112f7610d">tempest-ImageDependencyTests-2065005612-project-member</nova:user>
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <nova:project uuid="f743c06bc2ae45fda68427b8418baef8">tempest-ImageDependencyTests-2065005612</nova:project>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="1583781d-ec8c-4060-a2ad-53d52445d23e"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="serial">98504b0a-8c47-4488-b870-9fb9ebfa3e59</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="uuid">98504b0a-8c47-4488-b870-9fb9ebfa3e59</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk">
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config">
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/console.log" append="off"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:01 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:01 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.225 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.227 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.228 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Using config drive
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.250 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:01 compute-0 ceph-mon[76304]: pgmap v1630: 305 pgs: 305 active+clean; 407 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.4 MiB/s wr, 271 op/s
Feb 28 10:20:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2833492996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/439465713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.435 243456 INFO nova.virt.libvirt.driver [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance shutdown successfully after 3 seconds.
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.443 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.444 243456 DEBUG nova.objects.instance [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.466 243456 DEBUG nova.compute.manager [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.535 243456 DEBUG oslo_concurrency.lockutils [None req-a3f3fcad-a332-4cbc-8a36-3371252b885a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.567 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Creating config drive at /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.575 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg50prgut execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.726 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg50prgut" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.759 243456 DEBUG nova.storage.rbd_utils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] rbd image 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.764 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:01 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.942 243456 DEBUG oslo_concurrency.processutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config 98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:01 compute-0 nova_compute[243452]: 2026-02-28 10:20:01.943 243456 INFO nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deleting local config drive /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59/disk.config because it was imported into RBD.
Feb 28 10:20:02 compute-0 systemd-machined[209480]: New machine qemu-122-instance-00000062.
Feb 28 10:20:02 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000062.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.079 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Successfully created port: a98f753e-a6d6-4d97-b307-f08d35a37f1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.277 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.278 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.279 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 WARNING nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.280 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG oslo_concurrency.lockutils [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.281 243456 DEBUG nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.282 243456 WARNING nova.compute.manager [req-8d785c43-4d5f-4f1b-ba6c-268f8ddcccec req-7cacf95e-d902-4a5e-aa37-d9d7d9fa0a7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state stopped and task_state None.
Feb 28 10:20:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 408 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 793 KiB/s wr, 265 op/s
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.710 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.710 243456 DEBUG nova.network.neutron [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.739 243456 DEBUG oslo_concurrency.lockutils [req-09abe010-ca2c-41d7-96d9-026dca2c48b1 req-9b9a6adb-28a4-4f45-bb3c-9606f70a8da6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.814 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274002.8143718, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.815 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Resumed (Lifecycle Event)
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.819 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.820 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.825 243456 INFO nova.virt.libvirt.driver [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance spawned successfully.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.826 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.853 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.863 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.872 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.873 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.874 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.875 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.876 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.878 243456 DEBUG nova.virt.libvirt.driver [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.888 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.889 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274002.8154237, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.890 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Started (Lifecycle Event)
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.921 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.926 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.953 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.960 243456 INFO nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 3.89 seconds to spawn the instance on the hypervisor.
Feb 28 10:20:02 compute-0 nova_compute[243452]: 2026-02-28 10:20:02.961 243456 DEBUG nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.047 243456 INFO nova.compute.manager [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 5.06 seconds to build instance.
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.051 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Successfully updated port: a98f753e-a6d6-4d97-b307-f08d35a37f1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.070 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.071 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.071 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.074 243456 DEBUG oslo_concurrency.lockutils [None req-6851870f-6d90-4e8e-afd5-f419275cfa9d 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:03 compute-0 ceph-mon[76304]: pgmap v1631: 305 pgs: 305 active+clean; 408 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 793 KiB/s wr, 265 op/s
Feb 28 10:20:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:03 compute-0 nova_compute[243452]: 2026-02-28 10:20:03.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:04 compute-0 nova_compute[243452]: 2026-02-28 10:20:04.296 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:20:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 422 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 526 KiB/s wr, 246 op/s
Feb 28 10:20:04 compute-0 ovn_controller[146846]: 2026-02-28T10:20:04Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:73:58 10.100.0.9
Feb 28 10:20:04 compute-0 ovn_controller[146846]: 2026-02-28T10:20:04Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:73:58 10.100.0.9
Feb 28 10:20:05 compute-0 ceph-mon[76304]: pgmap v1632: 305 pgs: 305 active+clean; 422 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 526 KiB/s wr, 246 op/s
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG nova.compute.manager [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-changed-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG nova.compute.manager [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Refreshing instance network info cache due to event network-changed-a98f753e-a6d6-4d97-b307-f08d35a37f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.464 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.601 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772273990.6005676, 6a659835-f144-4e34-87ec-3b37ff81b0d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.602 243456 INFO nova.compute.manager [-] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] VM Stopped (Lifecycle Event)
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.622 243456 DEBUG nova.compute.manager [None req-d945d424-f03c-490c-9313-edf621b89489 - - - - - -] [instance: 6a659835-f144-4e34-87ec-3b37ff81b0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:05 compute-0 nova_compute[243452]: 2026-02-28 10:20:05.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 471 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 335 op/s
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.397 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.421 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.422 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.422 243456 DEBUG nova.network.neutron [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.423 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'info_cache' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:06 compute-0 ceph-mon[76304]: pgmap v1633: 305 pgs: 305 active+clean; 471 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 335 op/s
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.820 243456 DEBUG nova.network.neutron [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.845 243456 DEBUG nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.846 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.846 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance network_info: |[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.847 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.848 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Refreshing network info cache for port a98f753e-a6d6-4d97-b307-f08d35a37f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.851 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start _get_guest_xml network_info=[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.861 243456 WARNING nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.867 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.867 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.873 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.libvirt.host [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.874 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.875 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.875 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.876 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.877 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.878 243456 DEBUG nova.virt.hardware [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.881 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:06 compute-0 nova_compute[243452]: 2026-02-28 10:20:06.925 243456 INFO nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] instance snapshotting
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.109 243456 INFO nova.virt.libvirt.driver [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Beginning live snapshot process
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.294 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] creating snapshot(61830609917343a3bcc672a9878daa2d) on rbd image(98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:20:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635222755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.436 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.454 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.458 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb 28 10:20:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb 28 10:20:07 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb 28 10:20:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3635222755' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.551 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] cloning vms/98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk@61830609917343a3bcc672a9878daa2d to images/818dfec7-2d43-4696-a4bb-91240db544c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.662 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] flattening images/818dfec7-2d43-4696-a4bb-91240db544c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:20:07 compute-0 nova_compute[243452]: 2026-02-28 10:20:07.796 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] removing snapshot(61830609917343a3bcc672a9878daa2d) on rbd image(98504b0a-8c47-4488-b870-9fb9ebfa3e59_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:20:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085660845' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.029 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.031 243456 DEBUG nova.virt.libvirt.vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:59Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.032 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.032 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.034 243456 DEBUG nova.objects.instance [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.057 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <uuid>080f8608-f57f-4ffa-a966-ae62df8f6f9b</uuid>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <name>instance-00000063</name>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSON-server-1593211213</nova:name>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:20:06</nova:creationTime>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <nova:port uuid="a98f753e-a6d6-4d97-b307-f08d35a37f1f">
Feb 28 10:20:08 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="serial">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="uuid">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk">
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config">
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:08 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:71:c9:51"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <target dev="tapa98f753e-a6"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log" append="off"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:08 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:08 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:08 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:08 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:08 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.059 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Preparing to wait for external event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.059 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.060 243456 DEBUG nova.virt.libvirt.vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:19:59Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.061 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.061 243456 DEBUG nova.network.os_vif_util [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.062 243456 DEBUG os_vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.067 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.068 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa98f753e-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.068 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa98f753e-a6, col_values=(('external_ids', {'iface-id': 'a98f753e-a6d6-4d97-b307-f08d35a37f1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:c9:51', 'vm-uuid': '080f8608-f57f-4ffa-a966-ae62df8f6f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:08 compute-0 NetworkManager[49805]: <info>  [1772274008.0723] manager: (tapa98f753e-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.079 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.080 243456 INFO os_vif [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6')
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.149 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.150 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.150 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:71:c9:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.151 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Using config drive
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.173 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Feb 28 10:20:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb 28 10:20:08 compute-0 ceph-mon[76304]: osdmap e245: 3 total, 3 up, 3 in
Feb 28 10:20:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4085660845' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:08 compute-0 ceph-mon[76304]: pgmap v1635: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Feb 28 10:20:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb 28 10:20:08 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.567 243456 DEBUG nova.storage.rbd_utils [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] creating snapshot(snap) on rbd image(818dfec7-2d43-4696-a4bb-91240db544c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:20:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:08 compute-0 nova_compute[243452]: 2026-02-28 10:20:08.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.342 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating config drive at /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.350 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiou81cvo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.502 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiou81cvo" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb 28 10:20:09 compute-0 ceph-mon[76304]: osdmap e246: 3 total, 3 up, 3 in
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.541 243456 DEBUG nova.storage.rbd_utils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.545 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:09 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.699 243456 DEBUG oslo_concurrency.processutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.700 243456 INFO nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting local config drive /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config because it was imported into RBD.
Feb 28 10:20:09 compute-0 NetworkManager[49805]: <info>  [1772274009.7462] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Feb 28 10:20:09 compute-0 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 10:20:09 compute-0 ovn_controller[146846]: 2026-02-28T10:20:09Z|00972|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 10:20:09 compute-0 ovn_controller[146846]: 2026-02-28T10:20:09Z|00973|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:09 compute-0 ovn_controller[146846]: 2026-02-28T10:20:09Z|00974|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 10:20:09 compute-0 ovn_controller[146846]: 2026-02-28T10:20:09Z|00975|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.766 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:09 compute-0 nova_compute[243452]: 2026-02-28 10:20:09.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.768 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:20:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.770 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:09.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77bd0630-b31d-45df-af23-fe505a3dc9d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:09 compute-0 systemd-udevd[328853]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:09 compute-0 systemd-machined[209480]: New machine qemu-123-instance-00000063.
Feb 28 10:20:09 compute-0 NetworkManager[49805]: <info>  [1772274009.7894] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:09 compute-0 NetworkManager[49805]: <info>  [1772274009.7899] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:09 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000063.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.159 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updated VIF entry in instance network info cache for port a98f753e-a6d6-4d97-b307-f08d35a37f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.160 243456 DEBUG nova.network.neutron [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.164 243456 DEBUG nova.network.neutron [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.183 243456 DEBUG oslo_concurrency.lockutils [req-aede0ce5-c386-4b4c-9914-fe0e92de4571 req-b1a45c56-2647-43e1-b765-4b3d373d6519 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.184 243456 DEBUG oslo_concurrency.lockutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.217 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.218 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.234 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.233177, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.234 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.239 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.254 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.255 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.256 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.256 243456 DEBUG os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.260 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.262 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.267 243456 INFO os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.268 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.233379, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.269 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Paused (Lifecycle Event)
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.277 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start _get_guest_xml network_info=[{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.280 243456 WARNING nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.285 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.286 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.286 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.291 243456 DEBUG nova.compute.manager [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.292 243456 DEBUG oslo_concurrency.lockutils [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.293 243456 DEBUG nova.compute.manager [req-2c9c60f3-c06a-4bfc-aeec-65618ca5ea77 req-89e33606-b52c-4a21-9888-02ec0f6bb7fb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Processing event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.294 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.295 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.296 243456 DEBUG nova.virt.libvirt.host [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.297 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.297 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.298 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.298 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.299 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.300 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.301 243456 DEBUG nova.virt.hardware [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.301 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.303 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.306 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.308 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance spawned successfully.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.308 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.327 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.0 MiB/s wr, 336 op/s
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.375 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.376 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274010.297266, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.376 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.387 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.388 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.389 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.389 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.390 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.390 243456 DEBUG nova.virt.libvirt.driver [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.428 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.435 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.459 243456 INFO nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 10.40 seconds to spawn the instance on the hypervisor.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.459 243456 DEBUG nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.460 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.518 243456 INFO nova.compute.manager [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 12.14 seconds to build instance.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.535 243456 DEBUG oslo_concurrency.lockutils [None req-d637974f-9a81-4b68-afd8-3500b0b6f0fd 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:10 compute-0 ceph-mon[76304]: osdmap e247: 3 total, 3 up, 3 in
Feb 28 10:20:10 compute-0 ceph-mon[76304]: pgmap v1638: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.0 MiB/s wr, 336 op/s
Feb 28 10:20:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539464663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.940 243456 INFO nova.virt.libvirt.driver [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Snapshot image upload complete
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.941 243456 INFO nova.compute.manager [None req-00240748-d009-4dc9-9cd8-d63ffc57e8b9 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 4.01 seconds to snapshot the instance on the hypervisor.
Feb 28 10:20:10 compute-0 nova_compute[243452]: 2026-02-28 10:20:10.954 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.006 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2539464663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016280227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.638 243456 DEBUG oslo_concurrency.processutils [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.639 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.639 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.640 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.641 243456 DEBUG nova.objects.instance [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.656 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <uuid>690896df-6307-469c-9685-325a61a62b88</uuid>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <name>instance-0000005a</name>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerActionsTestJSON-server-49030969</nova:name>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:20:10</nova:creationTime>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:user uuid="b9006c7543a244aa948b78020335223a">tempest-ServerActionsTestJSON-152155156-project-member</nova:user>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:project uuid="6952e00efd364e1491714983e2425e93">tempest-ServerActionsTestJSON-152155156</nova:project>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <nova:port uuid="ed25d1f8-c3a0-43d4-b57e-12b647a48b3c">
Feb 28 10:20:11 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="serial">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="uuid">690896df-6307-469c-9685-325a61a62b88</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk">
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/690896df-6307-469c-9685-325a61a62b88_disk.config">
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:11 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:f6:05:21"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <target dev="taped25d1f8-c3"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88/console.log" append="off"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:11 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:11 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:11 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:11 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:11 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.658 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.658 243456 DEBUG nova.virt.libvirt.driver [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.659 243456 DEBUG nova.virt.libvirt.vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.659 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.660 243456 DEBUG nova.network.os_vif_util [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.660 243456 DEBUG os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.661 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.662 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.664 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.665 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.6676] manager: (taped25d1f8-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.673 243456 INFO os_vif [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.7284] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Feb 28 10:20:11 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:20:11 compute-0 systemd-udevd[328855]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 ovn_controller[146846]: 2026-02-28T10:20:11Z|00976|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:20:11 compute-0 ovn_controller[146846]: 2026-02-28T10:20:11Z|00977|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:20:11 compute-0 ovn_controller[146846]: 2026-02-28T10:20:11Z|00978|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.742 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '11', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.743 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:20:11 compute-0 ovn_controller[146846]: 2026-02-28T10:20:11Z|00979|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.747 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:11 compute-0 nova_compute[243452]: 2026-02-28 10:20:11.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.7495] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.7499] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.761 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2070fc36-9c41-47ad-a452-210dd0865102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.762 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7262f7f-cb7d-4fba-89b5-455a5aea5fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 systemd-machined[209480]: New machine qemu-124-instance-0000005a.
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a27af1-81ed-49c5-8c8b-0cf9e9dc317f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.774 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[24641404-c9ca-446e-bee6-3ea61756ef32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.783 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac6925b-9539-4559-8491-c8429632062f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-0000005a.
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.802 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[32cf1cae-9b77-402c-a1e2-416e3a666efb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.807 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69c7a063-74c6-4bed-9702-3d7da44fef26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.8104] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.832 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9fee1723-e248-4e4e-a36f-168324bf26dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.835 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[38fdb5eb-cac5-47e9-ba68-3d9e63693fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 NetworkManager[49805]: <info>  [1772274011.8631] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.865 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60a2e09c-79bc-4fc9-b837-e7a0bd6f62d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.879 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16c4514b-7d8e-4f1f-a112-cffcedcedd68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551914, 'reachable_time': 39192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329014, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.889 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7df5f40-fa06-4e5b-bf65-7cd31ede811b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551914, 'tstamp': 551914}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329015, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.901 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c00acc-cbdd-4623-b6b7-984b47fa1bca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551914, 'reachable_time': 39192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329016, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.933 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2911363b-566f-49ae-afe4-c7b16bcd2469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f85df9-ca68-4efe-905a-d8c31cf8d6a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.998 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.998 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:11.999 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:12 compute-0 NetworkManager[49805]: <info>  [1772274012.0017] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:12 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.006 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:12 compute-0 ovn_controller[146846]: 2026-02-28T10:20:12Z|00980|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.010 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1dc0e8-9ae6-417e-8306-eaab8e78d258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.011 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:12.013 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:20:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 2.3 MiB/s wr, 173 op/s
Feb 28 10:20:12 compute-0 podman[329066]: 2026-02-28 10:20:12.397854377 +0000 UTC m=+0.056584414 container create 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.425 243456 DEBUG nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.426 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG oslo_concurrency.lockutils [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 DEBUG nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.427 243456 WARNING nova.compute.manager [req-75e8a72b-314e-45aa-9c70-d7e039fb1ae3 req-814fa99b-15f0-4f8b-808d-a26ce59f1e95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state None.
Feb 28 10:20:12 compute-0 systemd[1]: Started libpod-conmon-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope.
Feb 28 10:20:12 compute-0 podman[329066]: 2026-02-28 10:20:12.366933886 +0000 UTC m=+0.025663943 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:20:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec2b40b2212a11bdcbb192052f2bdb44e9ffa51d1597581290bbd158ef73b64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.485 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.486 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274012.4851067, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.486 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.488 243456 DEBUG nova.compute.manager [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.494 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance rebooted successfully.
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.494 243456 DEBUG nova.compute.manager [None req-9847ea4f-233e-4e1e-9349-aab827608893 b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:12 compute-0 podman[329066]: 2026-02-28 10:20:12.495383377 +0000 UTC m=+0.154113434 container init 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:12 compute-0 podman[329066]: 2026-02-28 10:20:12.509478659 +0000 UTC m=+0.168208686 container start 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:20:12 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : New worker (329112) forked
Feb 28 10:20:12 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : Loading success.
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.543 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.549 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.572 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274012.485372, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.573 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:20:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb 28 10:20:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb 28 10:20:12 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb 28 10:20:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1016280227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:12 compute-0 ceph-mon[76304]: pgmap v1639: 305 pgs: 305 active+clean; 486 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 2.3 MiB/s wr, 173 op/s
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.597 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:12 compute-0 nova_compute[243452]: 2026-02-28 10:20:12.603 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.030 243456 DEBUG nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.031 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.032 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.033 243456 DEBUG oslo_concurrency.lockutils [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.033 243456 DEBUG nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.034 243456 WARNING nova.compute.manager [req-76d1596c-a49a-4a16-9a07-926d43cf0458 req-19a35324-86d2-47cd-9661-32728a0ab8f7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.145 243456 INFO nova.compute.manager [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Rescuing
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.146 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.146 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.147 243456 DEBUG nova.network.neutron [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.215 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.216 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.217 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.218 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.219 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.221 243456 INFO nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Terminating instance
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.224 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.225 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquired lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.226 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.418 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:20:13 compute-0 ceph-mon[76304]: osdmap e248: 3 total, 3 up, 3 in
Feb 28 10:20:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.703 243456 DEBUG nova.network.neutron [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.722 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Releasing lock "refresh_cache-98504b0a-8c47-4488-b870-9fb9ebfa3e59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.724 243456 DEBUG nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:20:13 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Deactivated successfully.
Feb 28 10:20:13 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000062.scope: Consumed 1.179s CPU time.
Feb 28 10:20:13 compute-0 systemd-machined[209480]: Machine qemu-122-instance-00000062 terminated.
Feb 28 10:20:13 compute-0 podman[329122]: 2026-02-28 10:20:13.855116592 +0000 UTC m=+0.063844041 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:20:13 compute-0 podman[329121]: 2026-02-28 10:20:13.880499735 +0000 UTC m=+0.090728797 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.933 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.951 243456 INFO nova.virt.libvirt.driver [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance destroyed successfully.
Feb 28 10:20:13 compute-0 nova_compute[243452]: 2026-02-28 10:20:13.951 243456 DEBUG nova.objects.instance [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lazy-loading 'resources' on Instance uuid 98504b0a-8c47-4488-b870-9fb9ebfa3e59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 156 KiB/s wr, 187 op/s
Feb 28 10:20:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb 28 10:20:14 compute-0 ceph-mon[76304]: pgmap v1641: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 890 KiB/s rd, 156 KiB/s wr, 187 op/s
Feb 28 10:20:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb 28 10:20:14 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.805 243456 INFO nova.virt.libvirt.driver [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deleting instance files /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59_del
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.808 243456 INFO nova.virt.libvirt.driver [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deletion of /var/lib/nova/instances/98504b0a-8c47-4488-b870-9fb9ebfa3e59_del complete
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.815 243456 DEBUG nova.network.neutron [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.852 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.881 243456 INFO nova.compute.manager [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 1.16 seconds to destroy the instance on the hypervisor.
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.882 243456 DEBUG oslo.service.loopingcall [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.883 243456 DEBUG nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:20:14 compute-0 nova_compute[243452]: 2026-02-28 10:20:14.883 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.112 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.273 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.293 243456 DEBUG nova.network.neutron [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.310 243456 INFO nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Took 0.43 seconds to deallocate network for instance.
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.368 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.370 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.410 243456 DEBUG nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.411 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.411 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 DEBUG oslo_concurrency.lockutils [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 DEBUG nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.412 243456 WARNING nova.compute.manager [req-872faa97-242c-40a9-8958-edfb6d9228a2 req-1c4a1f3d-0566-49c0-9593-23a0912f2831 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:20:15 compute-0 nova_compute[243452]: 2026-02-28 10:20:15.516 243456 DEBUG oslo_concurrency.processutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:15 compute-0 ceph-mon[76304]: osdmap e249: 3 total, 3 up, 3 in
Feb 28 10:20:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692883052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.159 243456 DEBUG oslo_concurrency.processutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.168 243456 DEBUG nova.compute.provider_tree [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.196 243456 DEBUG nova.scheduler.client.report [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.246 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.283 243456 INFO nova.scheduler.client.report [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Deleted allocations for instance 98504b0a-8c47-4488-b870-9fb9ebfa3e59
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.347 243456 DEBUG oslo_concurrency.lockutils [None req-ccd6f3f1-701f-47dc-a4c5-7868cf7c6073 05780ead76294473ae8c8fc112f7610d f743c06bc2ae45fda68427b8418baef8 - - default default] Lock "98504b0a-8c47-4488-b870-9fb9ebfa3e59" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 56 KiB/s wr, 400 op/s
Feb 28 10:20:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2692883052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:16 compute-0 ceph-mon[76304]: pgmap v1643: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 56 KiB/s wr, 400 op/s
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:16 compute-0 nova_compute[243452]: 2026-02-28 10:20:16.978 243456 DEBUG nova.objects.instance [None req-9940bc3f-d8c6-4eee-ba93-d095bd7d653c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.007 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274017.0068257, 690896df-6307-469c-9685-325a61a62b88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.007 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Paused (Lifecycle Event)
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.027 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.037 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.059 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:20:17 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:20:17 compute-0 NetworkManager[49805]: <info>  [1772274017.4716] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:17 compute-0 sudo[329206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:20:17 compute-0 sudo[329206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 ovn_controller[146846]: 2026-02-28T10:20:17Z|00981|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:20:17 compute-0 sudo[329206]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:17 compute-0 ovn_controller[146846]: 2026-02-28T10:20:17Z|00982|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:20:17 compute-0 ovn_controller[146846]: 2026-02-28T10:20:17Z|00983|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.494 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '12', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.496 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.499 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e96454-aa85-4502-bd96-d4d69cb36fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:20:17 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:20:17 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d0000005a.scope: Consumed 5.402s CPU time.
Feb 28 10:20:17 compute-0 systemd-machined[209480]: Machine qemu-124-instance-0000005a terminated.
Feb 28 10:20:17 compute-0 sudo[329233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:20:17 compute-0 sudo[329233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.657 243456 DEBUG nova.compute.manager [None req-9940bc3f-d8c6-4eee-ba93-d095bd7d653c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:17 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : haproxy version is 2.8.14-c23fe91
Feb 28 10:20:17 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [NOTICE]   (329110) : path to executable is /usr/sbin/haproxy
Feb 28 10:20:17 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [ALERT]    (329110) : Current worker (329112) exited with code 143 (Terminated)
Feb 28 10:20:17 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329105]: [WARNING]  (329110) : All workers exited. Exiting... (0)
Feb 28 10:20:17 compute-0 systemd[1]: libpod-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope: Deactivated successfully.
Feb 28 10:20:17 compute-0 conmon[329105]: conmon 08fad2a274be7c8ad1d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope/container/memory.events
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.681 243456 DEBUG nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.681 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG oslo_concurrency.lockutils [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 DEBUG nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.682 243456 WARNING nova.compute.manager [req-b62c33c2-4406-45a8-b426-a98f41bcab72 req-d8c7aeb5-49bb-4d32-bf56-9230fecf90dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state suspending.
Feb 28 10:20:17 compute-0 podman[329279]: 2026-02-28 10:20:17.688098401 +0000 UTC m=+0.064930602 container died 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec2b40b2212a11bdcbb192052f2bdb44e9ffa51d1597581290bbd158ef73b64-merged.mount: Deactivated successfully.
Feb 28 10:20:17 compute-0 podman[329279]: 2026-02-28 10:20:17.732911458 +0000 UTC m=+0.109743649 container cleanup 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:20:17 compute-0 systemd[1]: libpod-conmon-08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3.scope: Deactivated successfully.
Feb 28 10:20:17 compute-0 podman[329318]: 2026-02-28 10:20:17.803284734 +0000 UTC m=+0.048772371 container remove 08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[721e9807-7a6f-47ac-99f4-dd3d3e83af83]: (4, ('Sat Feb 28 10:20:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3)\n08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3\nSat Feb 28 10:20:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3)\n08fad2a274be7c8ad1d45d442f4642d7e364fcb6c9f62eb8f481a308e9cd45b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.811 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e071f7af-93ba-4b76-b275-741f57256678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.812 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:17 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.821 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 nova_compute[243452]: 2026-02-28 10:20:17.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.829 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cab3ace5-a4cb-4085-a94b-887f0a8200a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9da2f-59ea-4a85-957a-0e84c52a7e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd86623-6f6e-4c24-aa71-6ddc2c70be37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf490d7-2318-41ee-bb60-cd3e3852428b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551908, 'reachable_time': 42459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329351, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.860 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.860 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[9c07dce6-9768-4230-a3e6-bf919ec96b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:17.861 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:20:18 compute-0 sudo[329233]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:20:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:20:18 compute-0 sudo[329369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:20:18 compute-0 sudo[329369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:18 compute-0 sudo[329369]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:18 compute-0 sudo[329394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:20:18 compute-0 sudo[329394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 48 KiB/s wr, 417 op/s
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.534671861 +0000 UTC m=+0.055940966 container create b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:20:18 compute-0 systemd[1]: Started libpod-conmon-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope.
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.506739275 +0000 UTC m=+0.028008420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:18 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.622329029 +0000 UTC m=+0.143598104 container init b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.631988875 +0000 UTC m=+0.153257980 container start b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.636476472 +0000 UTC m=+0.157745577 container attach b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:20:18 compute-0 sweet_cray[329446]: 167 167
Feb 28 10:20:18 compute-0 systemd[1]: libpod-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope: Deactivated successfully.
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.637653676 +0000 UTC m=+0.158922781 container died b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:20:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb 28 10:20:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb 28 10:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a3e58e80279d8d9443543f6edd42033f6d8c987c1d59c17a3633c2e5f828f5-merged.mount: Deactivated successfully.
Feb 28 10:20:18 compute-0 podman[329430]: 2026-02-28 10:20:18.690217674 +0000 UTC m=+0.211486749 container remove b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:20:18 compute-0 systemd[1]: libpod-conmon-b898c3f5a4b8e8ee20be43060ab44ac3caf4e8e10c81f9772c9245e8c9b0f0ef.scope: Deactivated successfully.
Feb 28 10:20:18 compute-0 podman[329470]: 2026-02-28 10:20:18.889089283 +0000 UTC m=+0.059103366 container create 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:20:18 compute-0 systemd[1]: Started libpod-conmon-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope.
Feb 28 10:20:18 compute-0 nova_compute[243452]: 2026-02-28 10:20:18.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:18 compute-0 podman[329470]: 2026-02-28 10:20:18.867645961 +0000 UTC m=+0.037660094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:18 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:18 compute-0 podman[329470]: 2026-02-28 10:20:18.987014554 +0000 UTC m=+0.157028657 container init 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:20:18 compute-0 podman[329470]: 2026-02-28 10:20:18.993718665 +0000 UTC m=+0.163732758 container start 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:18 compute-0 podman[329470]: 2026-02-28 10:20:18.99704742 +0000 UTC m=+0.167061513 container attach 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:20:19 compute-0 ceph-mon[76304]: pgmap v1644: 305 pgs: 305 active+clean; 488 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 48 KiB/s wr, 417 op/s
Feb 28 10:20:19 compute-0 ceph-mon[76304]: osdmap e250: 3 total, 3 up, 3 in
Feb 28 10:20:19 compute-0 nova_compute[243452]: 2026-02-28 10:20:19.218 243456 INFO nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Resuming
Feb 28 10:20:19 compute-0 nova_compute[243452]: 2026-02-28 10:20:19.220 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'flavor' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:19 compute-0 nova_compute[243452]: 2026-02-28 10:20:19.267 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:19 compute-0 nova_compute[243452]: 2026-02-28 10:20:19.268 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquired lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:19 compute-0 nova_compute[243452]: 2026-02-28 10:20:19.269 243456 DEBUG nova.network.neutron [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:19 compute-0 gallant_ritchie[329488]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:20:19 compute-0 gallant_ritchie[329488]: --> All data devices are unavailable
Feb 28 10:20:19 compute-0 systemd[1]: libpod-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope: Deactivated successfully.
Feb 28 10:20:19 compute-0 podman[329470]: 2026-02-28 10:20:19.471208274 +0000 UTC m=+0.641222377 container died 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbc2bd44b3fda94be3a6957e7f81d0944a4eaf1bf514fe72f743bf0f6cc7f74a-merged.mount: Deactivated successfully.
Feb 28 10:20:19 compute-0 podman[329470]: 2026-02-28 10:20:19.552709868 +0000 UTC m=+0.722723961 container remove 7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ritchie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:20:19 compute-0 systemd[1]: libpod-conmon-7101f662ab572529109ac8f09f39f1985da5e8fe6ca08a4d4e3af847d8180bad.scope: Deactivated successfully.
Feb 28 10:20:19 compute-0 sudo[329394]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:19 compute-0 sudo[329521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:20:19 compute-0 sudo[329521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:19 compute-0 sudo[329521]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:19 compute-0 sudo[329546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:20:19 compute-0 sudo[329546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:19 compute-0 podman[329584]: 2026-02-28 10:20:19.971425322 +0000 UTC m=+0.039078185 container create 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:20:20 compute-0 systemd[1]: Started libpod-conmon-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope.
Feb 28 10:20:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:20.046650936 +0000 UTC m=+0.114303699 container init 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:19.953769119 +0000 UTC m=+0.021421922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:20.054120859 +0000 UTC m=+0.121773612 container start 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:20:20 compute-0 nostalgic_johnson[329601]: 167 167
Feb 28 10:20:20 compute-0 systemd[1]: libpod-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope: Deactivated successfully.
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:20.058103303 +0000 UTC m=+0.125756156 container attach 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:20.058490944 +0000 UTC m=+0.126143737 container died 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.078 243456 DEBUG nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.080 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG oslo_concurrency.lockutils [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 DEBUG nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.081 243456 WARNING nova.compute.manager [req-2ed76c3f-94f8-4ddf-8e9f-34a50b249180 req-fe3ca739-77f8-4c88-901e-5092132d5557 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state suspended and task_state resuming.
Feb 28 10:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ebd8e1e706f20511262fd899816cf64f5e10436e51ed37b234fa5b1c5a0f425-merged.mount: Deactivated successfully.
Feb 28 10:20:20 compute-0 podman[329584]: 2026-02-28 10:20:20.173125851 +0000 UTC m=+0.240778634 container remove 0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:20:20 compute-0 systemd[1]: libpod-conmon-0d150080be6f64b34dfac8ed435e48e89ddc39c58027c01919a6ab1aeb823898.scope: Deactivated successfully.
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 27 KiB/s wr, 321 op/s
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.362865848 +0000 UTC m=+0.050025727 container create 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:20:20 compute-0 systemd[1]: Started libpod-conmon-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope.
Feb 28 10:20:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.424766842 +0000 UTC m=+0.111926731 container init 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.332090861 +0000 UTC m=+0.019250790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.438983758 +0000 UTC m=+0.126143657 container start 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.441942222 +0000 UTC m=+0.129102131 container attach 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:20:20 compute-0 goofy_morse[329646]: {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     "0": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "devices": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "/dev/loop3"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             ],
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_name": "ceph_lv0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_size": "21470642176",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "name": "ceph_lv0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "tags": {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_name": "ceph",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.crush_device_class": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.encrypted": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.objectstore": "bluestore",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_id": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.vdo": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.with_tpm": "0"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             },
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "vg_name": "ceph_vg0"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         }
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     ],
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     "1": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "devices": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "/dev/loop4"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             ],
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_name": "ceph_lv1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_size": "21470642176",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "name": "ceph_lv1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "tags": {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_name": "ceph",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.crush_device_class": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.encrypted": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.objectstore": "bluestore",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_id": "1",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.vdo": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.with_tpm": "0"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             },
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "vg_name": "ceph_vg1"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         }
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     ],
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     "2": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "devices": [
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "/dev/loop5"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             ],
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_name": "ceph_lv2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_size": "21470642176",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "name": "ceph_lv2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "tags": {
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.cluster_name": "ceph",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.crush_device_class": "",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.encrypted": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.objectstore": "bluestore",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osd_id": "2",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.vdo": "0",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:                 "ceph.with_tpm": "0"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             },
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "type": "block",
Feb 28 10:20:20 compute-0 goofy_morse[329646]:             "vg_name": "ceph_vg2"
Feb 28 10:20:20 compute-0 goofy_morse[329646]:         }
Feb 28 10:20:20 compute-0 goofy_morse[329646]:     ]
Feb 28 10:20:20 compute-0 goofy_morse[329646]: }
Feb 28 10:20:20 compute-0 systemd[1]: libpod-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope: Deactivated successfully.
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.736204209 +0000 UTC m=+0.423364178 container died 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5ece8f0fb69de873a9de827df8cac432b0f646b413535f3848b79c3e84d69e5-merged.mount: Deactivated successfully.
Feb 28 10:20:20 compute-0 podman[329629]: 2026-02-28 10:20:20.78289023 +0000 UTC m=+0.470050119 container remove 5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_morse, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:20:20 compute-0 systemd[1]: libpod-conmon-5d95dfc81aa7a89ad1241b46888f9ce35fa5a19a70665dadb480275ab883cc8a.scope: Deactivated successfully.
Feb 28 10:20:20 compute-0 sudo[329546]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:20.863 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:20 compute-0 sudo[329665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:20:20 compute-0 sudo[329665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:20 compute-0 sudo[329665]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.969 243456 DEBUG nova.network.neutron [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [{"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.992 243456 DEBUG oslo_concurrency.lockutils [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Releasing lock "refresh_cache-690896df-6307-469c-9685-325a61a62b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:20 compute-0 sudo[329690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:20:20 compute-0 sudo[329690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.998 243456 DEBUG nova.virt.libvirt.vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.998 243456 DEBUG nova.network.os_vif_util [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:20 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.999 243456 DEBUG nova.network.os_vif_util [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:20.999 243456 DEBUG os_vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.001 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.001 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.005 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped25d1f8-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped25d1f8-c3, col_values=(('external_ids', {'iface-id': 'ed25d1f8-c3a0-43d4-b57e-12b647a48b3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:05:21', 'vm-uuid': '690896df-6307-469c-9685-325a61a62b88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.006 243456 INFO os_vif [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.025 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'numa_topology' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:21 compute-0 kernel: taped25d1f8-c3: entered promiscuous mode
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.0902] manager: (taped25d1f8-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/425)
Feb 28 10:20:21 compute-0 ovn_controller[146846]: 2026-02-28T10:20:21Z|00984|binding|INFO|Claiming lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for this chassis.
Feb 28 10:20:21 compute-0 ovn_controller[146846]: 2026-02-28T10:20:21Z|00985|binding|INFO|ed25d1f8-c3a0-43d4-b57e-12b647a48b3c: Claiming fa:16:3e:f6:05:21 10.100.0.14
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.100 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '13', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.101 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb bound to our chassis
Feb 28 10:20:21 compute-0 ovn_controller[146846]: 2026-02-28T10:20:21Z|00986|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c ovn-installed in OVS
Feb 28 10:20:21 compute-0 ovn_controller[146846]: 2026-02-28T10:20:21Z|00987|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c up in Southbound
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.102 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.113 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd0a80e-0c34-49dc-befa-7abf9c176be7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.114 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8082b9e7-a1 in ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.115 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8082b9e7-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.116 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe4ac6d-6c06-4c9e-bbac-a86c6f199e92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.119 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7ede27-c31f-441a-a437-40f48065754d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 systemd-machined[209480]: New machine qemu-125-instance-0000005a.
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.129 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[d93a2bf8-41f8-4b5e-b821-3888095abc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-0000005a.
Feb 28 10:20:21 compute-0 systemd-udevd[329731]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.1531] device (taped25d1f8-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[312f6e5a-e547-4f72-a5ef-00ed18fdb387]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.1535] device (taped25d1f8-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.176 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd5e2d7-5c79-4a94-b36b-b3debf791402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.182 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c6378773-1daa-4390-8dd3-4bb623d0abcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.1834] manager: (tap8082b9e7-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/426)
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.208 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5eb13d-fbb2-46ac-809c-27c8c371a5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.212 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3913047c-1714-4a13-8fb1-bea5a5ac04a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.2343] device (tap8082b9e7-a0): carrier: link connected
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.240 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[abe36f68-1c10-49ac-9276-7ef9c1b4da24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.261 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[404d40e2-c6ef-4902-9578-cede93c5f87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552852, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329781, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.274 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61823e39-5e9f-49a0-8d2c-4d0e673f51bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:9bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552852, 'tstamp': 552852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329785, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.278545477 +0000 UTC m=+0.048863733 container create 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.289 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4660225f-a484-49d1-9552-cc7d03c0f280]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8082b9e7-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:09:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552852, 'reachable_time': 38387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329786, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 systemd[1]: Started libpod-conmon-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope.
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a89945c8-e361-4d43-9ca0-72a984f82817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.253097512 +0000 UTC m=+0.023415818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.35159585 +0000 UTC m=+0.121914126 container init 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.359590987 +0000 UTC m=+0.129909243 container start 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.364705133 +0000 UTC m=+0.135023399 container attach 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.365 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bba9bf53-60a5-4647-89ba-6d9d4fd77788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.366 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.366 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.367 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8082b9e7-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 nifty_darwin[329792]: 167 167
Feb 28 10:20:21 compute-0 systemd[1]: libpod-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope: Deactivated successfully.
Feb 28 10:20:21 compute-0 NetworkManager[49805]: <info>  [1772274021.3692] manager: (tap8082b9e7-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Feb 28 10:20:21 compute-0 kernel: tap8082b9e7-a0: entered promiscuous mode
Feb 28 10:20:21 compute-0 conmon[329792]: conmon 0dfb5c10b8cd581f8f67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope/container/memory.events
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.371 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.372016802 +0000 UTC m=+0.142335068 container died 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.371 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8082b9e7-a0, col_values=(('external_ids', {'iface-id': 'ecb4bce5-f543-4ade-98cc-d98a0d015682'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:21 compute-0 ovn_controller[146846]: 2026-02-28T10:20:21Z|00988|binding|INFO|Releasing lport ecb4bce5-f543-4ade-98cc-d98a0d015682 from this chassis (sb_readonly=0)
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.386 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.387 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eeab2a7-b0e6-4507-a703-0236e7d6c431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.388 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/8082b9e7-a888-4fb7-b48c-a7c16db892eb.pid.haproxy
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 8082b9e7-a888-4fb7-b48c-a7c16db892eb
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:20:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:21.389 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'env', 'PROCESS_TAG=haproxy-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8082b9e7-a888-4fb7-b48c-a7c16db892eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6703a6b67b815133b32ac27cb36260612c110dab160e5f184f53e9478894961-merged.mount: Deactivated successfully.
Feb 28 10:20:21 compute-0 podman[329770]: 2026-02-28 10:20:21.406276268 +0000 UTC m=+0.176594524 container remove 0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:20:21 compute-0 ceph-mon[76304]: pgmap v1646: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 27 KiB/s wr, 321 op/s
Feb 28 10:20:21 compute-0 systemd[1]: libpod-conmon-0dfb5c10b8cd581f8f67af85edba89e9e627a5da83ee8efb65c30ca3c154aa99.scope: Deactivated successfully.
Feb 28 10:20:21 compute-0 podman[329822]: 2026-02-28 10:20:21.554314548 +0000 UTC m=+0.045233181 container create 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:20:21 compute-0 systemd[1]: Started libpod-conmon-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope.
Feb 28 10:20:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:21 compute-0 podman[329822]: 2026-02-28 10:20:21.53439288 +0000 UTC m=+0.025311553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:20:21 compute-0 podman[329822]: 2026-02-28 10:20:21.648799051 +0000 UTC m=+0.139717704 container init 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:20:21 compute-0 podman[329822]: 2026-02-28 10:20:21.65682965 +0000 UTC m=+0.147748283 container start 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:20:21 compute-0 podman[329822]: 2026-02-28 10:20:21.668467431 +0000 UTC m=+0.159386054 container attach 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:21 compute-0 podman[329864]: 2026-02-28 10:20:21.738298822 +0000 UTC m=+0.053453915 container create 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:20:21 compute-0 systemd[1]: Started libpod-conmon-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope.
Feb 28 10:20:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcb7018c52ded7e66f2df60116ae88981dd11943239a5de84d085b48a577280f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:21 compute-0 podman[329864]: 2026-02-28 10:20:21.798772505 +0000 UTC m=+0.113927628 container init 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:20:21 compute-0 podman[329864]: 2026-02-28 10:20:21.803828559 +0000 UTC m=+0.118983652 container start 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:21 compute-0 podman[329864]: 2026-02-28 10:20:21.712141086 +0000 UTC m=+0.027296209 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:20:21 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : New worker (329927) forked
Feb 28 10:20:21 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : Loading success.
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.927 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 690896df-6307-469c-9685-325a61a62b88 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.928 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274021.9272852, 690896df-6307-469c-9685-325a61a62b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.928 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Started (Lifecycle Event)
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.937 243456 DEBUG nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.938 243456 DEBUG nova.objects.instance [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.951 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.955 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.958 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance running successfully.
Feb 28 10:20:21 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.961 243456 DEBUG nova.virt.libvirt.guest [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.961 243456 DEBUG nova.compute.manager [None req-712a6c39-4904-4870-932c-bbeb676ad10c b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.981 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.982 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274021.9313285, 690896df-6307-469c-9685-325a61a62b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:21 compute-0 nova_compute[243452]: 2026-02-28 10:20:21.982 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Resumed (Lifecycle Event)
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.010 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.013 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:22 compute-0 lvm[330006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:20:22 compute-0 lvm[330006]: VG ceph_vg0 finished
Feb 28 10:20:22 compute-0 lvm[330008]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:20:22 compute-0 lvm[330008]: VG ceph_vg1 finished
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:22 compute-0 lvm[330009]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:20:22 compute-0 lvm[330009]: VG ceph_vg2 finished
Feb 28 10:20:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 26 KiB/s wr, 311 op/s
Feb 28 10:20:22 compute-0 brave_solomon[329839]: {}
Feb 28 10:20:22 compute-0 systemd[1]: libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Deactivated successfully.
Feb 28 10:20:22 compute-0 systemd[1]: libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Consumed 1.040s CPU time.
Feb 28 10:20:22 compute-0 conmon[329839]: conmon 8fe856f9045fa144a71e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope/container/memory.events
Feb 28 10:20:22 compute-0 podman[329822]: 2026-02-28 10:20:22.447789804 +0000 UTC m=+0.938708427 container died 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-985ae06a249e67321205ae0f5643e37b42e6f56e3f96165f868d0c0c0563e640-merged.mount: Deactivated successfully.
Feb 28 10:20:22 compute-0 podman[329822]: 2026-02-28 10:20:22.493186278 +0000 UTC m=+0.984104901 container remove 8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:22 compute-0 systemd[1]: libpod-conmon-8fe856f9045fa144a71ee73f89efcec6f15d1811ae9f32f0bf58dbc522e52c71.scope: Deactivated successfully.
Feb 28 10:20:22 compute-0 sudo[329690]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:20:22 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:20:22 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:22 compute-0 sudo[330022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:20:22 compute-0 sudo[330022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:20:22 compute-0 sudo[330022]: pam_unix(sudo:session): session closed for user root
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.875 243456 DEBUG nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.876 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG oslo_concurrency.lockutils [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.877 243456 DEBUG nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:22 compute-0 nova_compute[243452]: 2026-02-28 10:20:22.878 243456 WARNING nova.compute.manager [req-efa807b9-da66-4b1c-a84a-9fc76da02497 req-019403c7-664c-46e4-9a3b-865645575b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state None.
Feb 28 10:20:23 compute-0 nova_compute[243452]: 2026-02-28 10:20:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:23 compute-0 ceph-mon[76304]: pgmap v1647: 305 pgs: 305 active+clean; 488 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 26 KiB/s wr, 311 op/s
Feb 28 10:20:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:20:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb 28 10:20:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb 28 10:20:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb 28 10:20:23 compute-0 nova_compute[243452]: 2026-02-28 10:20:23.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 504 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 132 op/s
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.363 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.364 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.364 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.365 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.365 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.367 243456 INFO nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Terminating instance
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.369 243456 DEBUG nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:20:24 compute-0 kernel: taped25d1f8-c3 (unregistering): left promiscuous mode
Feb 28 10:20:24 compute-0 NetworkManager[49805]: <info>  [1772274024.4093] device (taped25d1f8-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 ovn_controller[146846]: 2026-02-28T10:20:24Z|00989|binding|INFO|Releasing lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c from this chassis (sb_readonly=0)
Feb 28 10:20:24 compute-0 ovn_controller[146846]: 2026-02-28T10:20:24Z|00990|binding|INFO|Setting lport ed25d1f8-c3a0-43d4-b57e-12b647a48b3c down in Southbound
Feb 28 10:20:24 compute-0 ovn_controller[146846]: 2026-02-28T10:20:24Z|00991|binding|INFO|Removing iface taped25d1f8-c3 ovn-installed in OVS
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.431 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:05:21 10.100.0.14'], port_security=['fa:16:3e:f6:05:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '690896df-6307-469c-9685-325a61a62b88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6952e00efd364e1491714983e2425e93', 'neutron:revision_number': '14', 'neutron:security_group_ids': '26695444-5c7f-4bb0-b1ed-94a026868cfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999babbe-38ab-491f-bab6-db221e47dcf3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.433 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ed25d1f8-c3a0-43d4-b57e-12b647a48b3c in datapath 8082b9e7-a888-4fb7-b48c-a7c16db892eb unbound from our chassis
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.436 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8082b9e7-a888-4fb7-b48c-a7c16db892eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.438 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8be346-da52-47be-85eb-580681dcb045]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.440 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb namespace which is not needed anymore
Feb 28 10:20:24 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 28 10:20:24 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d0000005a.scope: Consumed 3.159s CPU time.
Feb 28 10:20:24 compute-0 systemd-machined[209480]: Machine qemu-125-instance-0000005a terminated.
Feb 28 10:20:24 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : haproxy version is 2.8.14-c23fe91
Feb 28 10:20:24 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [NOTICE]   (329924) : path to executable is /usr/sbin/haproxy
Feb 28 10:20:24 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [WARNING]  (329924) : Exiting Master process...
Feb 28 10:20:24 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [ALERT]    (329924) : Current worker (329927) exited with code 143 (Terminated)
Feb 28 10:20:24 compute-0 neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb[329919]: [WARNING]  (329924) : All workers exited. Exiting... (0)
Feb 28 10:20:24 compute-0 systemd[1]: libpod-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope: Deactivated successfully.
Feb 28 10:20:24 compute-0 podman[330071]: 2026-02-28 10:20:24.562612531 +0000 UTC m=+0.045027164 container died 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d-userdata-shm.mount: Deactivated successfully.
Feb 28 10:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcb7018c52ded7e66f2df60116ae88981dd11943239a5de84d085b48a577280f-merged.mount: Deactivated successfully.
Feb 28 10:20:24 compute-0 podman[330071]: 2026-02-28 10:20:24.607580463 +0000 UTC m=+0.089995096 container cleanup 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.607 243456 INFO nova.virt.libvirt.driver [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Instance destroyed successfully.
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.608 243456 DEBUG nova.objects.instance [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lazy-loading 'resources' on Instance uuid 690896df-6307-469c-9685-325a61a62b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:24 compute-0 systemd[1]: libpod-conmon-6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d.scope: Deactivated successfully.
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.623 243456 DEBUG nova.virt.libvirt.vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:17:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-49030969',display_name='tempest-ServerActionsTestJSON-server-49030969',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-49030969',id=90,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2gogpz3Yx3EhEkpX1VCBq2KO8dk2oxi+PMU5eyyDvXF1ONWBCuHAC/cBb5pEuXFL5UEmuntYZ5G82kfqwpjBS+OnOyyPUfTpF/G370TEdXgIbjgT7mCqXRtxntkLSfyw==',key_name='tempest-keypair-1420549814',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:17:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6952e00efd364e1491714983e2425e93',ramdisk_id='',reservation_id='r-iwkgdg48',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-152155156',owner_user_name='tempest-ServerActionsTestJSON-152155156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b9006c7543a244aa948b78020335223a',uuid=690896df-6307-469c-9685-325a61a62b88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.624 243456 DEBUG nova.network.os_vif_util [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converting VIF {"id": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "address": "fa:16:3e:f6:05:21", "network": {"id": "8082b9e7-a888-4fb7-b48c-a7c16db892eb", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-487778468-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6952e00efd364e1491714983e2425e93", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped25d1f8-c3", "ovs_interfaceid": "ed25d1f8-c3a0-43d4-b57e-12b647a48b3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.625 243456 DEBUG nova.network.os_vif_util [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.625 243456 DEBUG os_vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.628 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped25d1f8-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.633 243456 INFO os_vif [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:05:21,bridge_name='br-int',has_traffic_filtering=True,id=ed25d1f8-c3a0-43d4-b57e-12b647a48b3c,network=Network(8082b9e7-a888-4fb7-b48c-a7c16db892eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped25d1f8-c3')
Feb 28 10:20:24 compute-0 ceph-mon[76304]: osdmap e251: 3 total, 3 up, 3 in
Feb 28 10:20:24 compute-0 ceph-mon[76304]: pgmap v1649: 305 pgs: 305 active+clean; 504 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 132 op/s
Feb 28 10:20:24 compute-0 podman[330112]: 2026-02-28 10:20:24.66114444 +0000 UTC m=+0.037532331 container remove 6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.666 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5ecdc9-f8d5-4bec-8c8a-efc52f064e68]: (4, ('Sat Feb 28 10:20:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d)\n6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d\nSat Feb 28 10:20:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb (6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d)\n6f4a21845ad2febeaeb1837d3fca835b51ff84bd771729e7e6bac936af93f08d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.667 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc41cb5-f8ca-478d-a95d-e1526d5658ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8082b9e7-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 kernel: tap8082b9e7-a0: left promiscuous mode
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.682 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c9b5a3-d6bd-4411-bedd-d5b84ba8bb65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.693 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92220f3d-ba65-48f2-ada8-dc133d64b213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.695 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0597b0-0fe8-4403-b8e7-8d946811d44c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dd4fe2-4571-436b-a313-9082f6f980a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552845, 'reachable_time': 19837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330145, 'error': None, 'target': 'ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.711 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8082b9e7-a888-4fb7-b48c-a7c16db892eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:20:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:24.711 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c534eae7-6d6b-49be-b55c-97031e7fd25b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d8082b9e7\x2da888\x2d4fb7\x2db48c\x2da7c16db892eb.mount: Deactivated successfully.
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.949 243456 INFO nova.virt.libvirt.driver [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Deleting instance files /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88_del
Feb 28 10:20:24 compute-0 nova_compute[243452]: 2026-02-28 10:20:24.950 243456 INFO nova.virt.libvirt.driver [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Deletion of /var/lib/nova/instances/690896df-6307-469c-9685-325a61a62b88_del complete
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.037 243456 INFO nova.compute.manager [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.038 243456 DEBUG oslo.service.loopingcall [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.038 243456 DEBUG nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.039 243456 DEBUG nova.network.neutron [-] [instance: 690896df-6307-469c-9685-325a61a62b88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.169 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.601 243456 DEBUG nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.601 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG oslo_concurrency.lockutils [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 DEBUG nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:25 compute-0 nova_compute[243452]: 2026-02-28 10:20:25.602 243456 WARNING nova.compute.manager [req-ef89d28c-a0e2-4284-94d9-7bbe2b48f988 req-8947a8e7-cc75-41b4-9c53-e6823fa780fc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state active and task_state deleting.
Feb 28 10:20:26 compute-0 nova_compute[243452]: 2026-02-28 10:20:26.317 243456 DEBUG nova.network.neutron [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:26 compute-0 nova_compute[243452]: 2026-02-28 10:20:26.343 243456 INFO nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] Took 1.30 seconds to deallocate network for instance.
Feb 28 10:20:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 495 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Feb 28 10:20:26 compute-0 nova_compute[243452]: 2026-02-28 10:20:26.404 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:26 compute-0 nova_compute[243452]: 2026-02-28 10:20:26.405 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:26 compute-0 nova_compute[243452]: 2026-02-28 10:20:26.513 243456 DEBUG oslo_concurrency.processutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099121038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.113 243456 DEBUG oslo_concurrency.processutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.120 243456 DEBUG nova.compute.provider_tree [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.140 243456 DEBUG nova.scheduler.client.report [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.168 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.198 243456 INFO nova.scheduler.client.report [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Deleted allocations for instance 690896df-6307-469c-9685-325a61a62b88
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.323 243456 DEBUG oslo_concurrency.lockutils [None req-c609784a-d8e4-4aac-b6e2-dfcbc625a20a b9006c7543a244aa948b78020335223a 6952e00efd364e1491714983e2425e93 - - default default] Lock "690896df-6307-469c-9685-325a61a62b88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.348 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:20:27 compute-0 ceph-mon[76304]: pgmap v1650: 305 pgs: 305 active+clean; 495 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Feb 28 10:20:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4099121038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:27 compute-0 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 10:20:27 compute-0 NetworkManager[49805]: <info>  [1772274027.4577] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:27 compute-0 ovn_controller[146846]: 2026-02-28T10:20:27Z|00992|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 10:20:27 compute-0 ovn_controller[146846]: 2026-02-28T10:20:27Z|00993|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:27 compute-0 ovn_controller[146846]: 2026-02-28T10:20:27Z|00994|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.469 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.470 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:20:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.471 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:27.472 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab41529-bc4a-4fb5-badd-c59a82bf51c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:27 compute-0 nova_compute[243452]: 2026-02-28 10:20:27.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:27 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 10:20:27 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000063.scope: Consumed 12.082s CPU time.
Feb 28 10:20:27 compute-0 systemd-machined[209480]: Machine qemu-123-instance-00000063 terminated.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.186 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance shutdown successfully after 13 seconds.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.194 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.195 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.223 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Attempting rescue
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.224 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.230 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.231 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating image(s)
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.268 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.272 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.323 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 461 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.358 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.364 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.398 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.425 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.426 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.468 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.469 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.469 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.470 243456 WARNING nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-unplugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state deleted and task_state None.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "690896df-6307-469c-9685-325a61a62b88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.471 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 DEBUG oslo_concurrency.lockutils [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "690896df-6307-469c-9685-325a61a62b88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] No waiting events found dispatching network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.472 243456 WARNING nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received unexpected event network-vif-plugged-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c for instance with vm_state deleted and task_state None.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.473 243456 DEBUG nova.compute.manager [req-81ca06df-a0bc-4b66-b573-debaf6ff2ad7 req-bf0c376f-9ff7-41e3-b0dc-24de521a3e45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 690896df-6307-469c-9685-325a61a62b88] Received event network-vif-deleted-ed25d1f8-c3a0-43d4-b57e-12b647a48b3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.474 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.475 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.475 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.476 243456 DEBUG oslo_concurrency.lockutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.502 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.506 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.803 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.805 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'migration_context' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.824 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.825 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start _get_guest_xml network_info=[{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.825 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.849 243456 WARNING nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.853 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.856 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.862 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.libvirt.host [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.863 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.864 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.864 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.865 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.866 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.867 243456 DEBUG nova.virt.hardware [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.867 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.890 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.948 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274013.9480052, 98504b0a-8c47-4488-b870-9fb9ebfa3e59 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.949 243456 INFO nova.compute.manager [-] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] VM Stopped (Lifecycle Event)
Feb 28 10:20:28 compute-0 nova_compute[243452]: 2026-02-28 10:20:28.969 243456 DEBUG nova.compute.manager [None req-7018c7ea-1feb-4079-bede-ce88763957cb - - - - - -] [instance: 98504b0a-8c47-4488-b870-9fb9ebfa3e59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919154869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.009 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:20:29
Feb 28 10:20:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:20:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:20:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'volumes']
Feb 28 10:20:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.101 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.102 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.107 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.107 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.108 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.265 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.266 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3246MB free_disk=59.81664128229022GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.267 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.267 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.330 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ea5efc55-0a5e-435e-9805-9a9726c17eda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.330 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance e4349bd8-727a-4533-9edd-b2d54353a617 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.331 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 080f8608-f57f-4ffa-a966-ae62df8f6f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.332 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.332 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:20:29 compute-0 ceph-mon[76304]: pgmap v1651: 305 pgs: 305 active+clean; 461 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Feb 28 10:20:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1919154869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.422 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/348724625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.503 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.504 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:29 compute-0 nova_compute[243452]: 2026-02-28 10:20:29.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467550894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.006 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.015 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363430046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.026 243456 DEBUG nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.026 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG oslo_concurrency.lockutils [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 DEBUG nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.027 243456 WARNING nova.compute.manager [req-42247004-ffe9-4c56-94e8-4c5401c27c1f req-78546d18-2834-4a82-8c46-82faad6e1f82 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state rescuing.
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.035 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.040 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.041 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.070 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.070 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.273 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.274 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.287 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.346 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.346 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.352 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.353 243456 INFO nova.compute.claims [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 454 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.1 MiB/s wr, 120 op/s
Feb 28 10:20:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/348724625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3467550894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/363430046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.437918) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030438004, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2010, "num_deletes": 254, "total_data_size": 3119230, "memory_usage": 3170880, "flush_reason": "Manual Compaction"}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030453217, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3051822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33407, "largest_seqno": 35416, "table_properties": {"data_size": 3042750, "index_size": 5566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19558, "raw_average_key_size": 20, "raw_value_size": 3024276, "raw_average_value_size": 3180, "num_data_blocks": 245, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772273844, "oldest_key_time": 1772273844, "file_creation_time": 1772274030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 15351 microseconds, and 8814 cpu microseconds.
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.453281) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3051822 bytes OK
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.453310) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454915) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454937) EVENT_LOG_v1 {"time_micros": 1772274030454930, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.454968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3110630, prev total WAL file size 3110630, number of live WAL files 2.
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.456353) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2980KB)], [74(8152KB)]
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030456463, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11400386, "oldest_snapshot_seqno": -1}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6122 keys, 9751563 bytes, temperature: kUnknown
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030502134, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9751563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9709047, "index_size": 26086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 154585, "raw_average_key_size": 25, "raw_value_size": 9597865, "raw_average_value_size": 1567, "num_data_blocks": 1053, "num_entries": 6122, "num_filter_entries": 6122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.502466) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9751563 bytes
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.503766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.0 rd, 213.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6645, records dropped: 523 output_compression: NoCompression
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.503793) EVENT_LOG_v1 {"time_micros": 1772274030503779, "job": 42, "event": "compaction_finished", "compaction_time_micros": 45784, "compaction_time_cpu_micros": 24524, "output_level": 6, "num_output_files": 1, "total_output_size": 9751563, "num_input_records": 6645, "num_output_records": 6122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030504411, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274030505614, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.456184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:20:30.505741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.510 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609544957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.558 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.560 243456 DEBUG nova.virt.libvirt.vif [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:10Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.561 243456 DEBUG nova.network.os_vif_util [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1242488818-network", "vif_mac": "fa:16:3e:71:c9:51"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.562 243456 DEBUG nova.network.os_vif_util [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.564 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.582 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <uuid>080f8608-f57f-4ffa-a966-ae62df8f6f9b</uuid>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <name>instance-00000063</name>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSON-server-1593211213</nova:name>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:20:28</nova:creationTime>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:user uuid="8d03850a765742908401b28b9f983e96">tempest-ServerRescueTestJSON-2101936935-project-member</nova:user>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:project uuid="3882eded03594958a2e5d10832a6c3a9">tempest-ServerRescueTestJSON-2101936935</nova:project>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <nova:port uuid="a98f753e-a6d6-4d97-b307-f08d35a37f1f">
Feb 28 10:20:30 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="serial">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="uuid">080f8608-f57f-4ffa-a966-ae62df8f6f9b</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.rescue">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <target dev="vdb" bus="virtio"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:71:c9:51"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <target dev="tapa98f753e-a6"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/console.log" append="off"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:30 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:30 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:30 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:30 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:30 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.599 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:20:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.669 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.670 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.671 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.672 243456 DEBUG nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] No VIF found with MAC fa:16:3e:71:c9:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.673 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Using config drive
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.712 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.740 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:30 compute-0 nova_compute[243452]: 2026-02-28 10:20:30.781 243456 DEBUG nova.objects.instance [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'keypairs' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723623436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.117 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.125 243456 DEBUG nova.compute.provider_tree [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.149 243456 DEBUG nova.scheduler.client.report [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.175 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.176 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.239 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.239 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.268 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.288 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.394 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.397 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.398 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating image(s)
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.431 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:31 compute-0 ceph-mon[76304]: pgmap v1652: 305 pgs: 305 active+clean; 454 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.1 MiB/s wr, 120 op/s
Feb 28 10:20:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/609544957' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2723623436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.468 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.507 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.513 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.556 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Creating config drive at /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.565 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp92z2clak execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.608 243456 DEBUG nova.policy [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd000e26b1aaf4a60bd2c928412e59ca5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1dab7067181d43f1acb702fce4ca882c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.615 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.616 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.617 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.617 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.652 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.656 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a33511-0908-4787-82f4-79505aa9d436_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.712 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp92z2clak" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.759 243456 DEBUG nova.storage.rbd_utils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] rbd image 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.766 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.895 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c4a33511-0908-4787-82f4-79505aa9d436_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.934 243456 DEBUG oslo_concurrency.processutils [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue 080f8608-f57f-4ffa-a966-ae62df8f6f9b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.935 243456 INFO nova.virt.libvirt.driver [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting local config drive /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b/disk.config.rescue because it was imported into RBD.
Feb 28 10:20:31 compute-0 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 10:20:31 compute-0 NetworkManager[49805]: <info>  [1772274031.9744] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Feb 28 10:20:31 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:31 compute-0 ovn_controller[146846]: 2026-02-28T10:20:31Z|00995|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 10:20:31 compute-0 ovn_controller[146846]: 2026-02-28T10:20:31Z|00996|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 10:20:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.988 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.990 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:20:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.990 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:31.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7f213891-9e7a-41f4-954f-92401607dc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:31 compute-0 ovn_controller[146846]: 2026-02-28T10:20:31Z|00997|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 10:20:31 compute-0 ovn_controller[146846]: 2026-02-28T10:20:31Z|00998|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 10:20:32 compute-0 systemd-machined[209480]: New machine qemu-126-instance-00000063.
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:31.999 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] resizing rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:20:32 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000063.
Feb 28 10:20:32 compute-0 systemd-udevd[330631]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:32 compute-0 NetworkManager[49805]: <info>  [1772274032.0342] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:32 compute-0 NetworkManager[49805]: <info>  [1772274032.0351] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.096 243456 DEBUG nova.objects.instance [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'migration_context' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.106 243456 DEBUG nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.107 243456 DEBUG oslo_concurrency.lockutils [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.108 243456 DEBUG nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.108 243456 WARNING nova.compute.manager [req-5f582d92-7b1e-401c-8fc5-86c1df758c6a req-9c7950b3-585a-42f2-a614-349fcc4ef5eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state rescuing.
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.110 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.110 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Ensure instance console log exists: /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.111 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.111 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.112 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 462 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.464 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 080f8608-f57f-4ffa-a966-ae62df8f6f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.465 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274032.4642735, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.465 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.469 243456 DEBUG nova.compute.manager [None req-af3ab21c-5f6a-46de-8122-41db2d05b014 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:32 compute-0 ceph-mon[76304]: pgmap v1653: 305 pgs: 305 active+clean; 462 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.502 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.507 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.535 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.536 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274032.4698439, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.537 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.557 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.562 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.987 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:20:32 compute-0 nova_compute[243452]: 2026-02-28 10:20:32.988 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.068 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Successfully created port: 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:20:33 compute-0 ovn_controller[146846]: 2026-02-28T10:20:33Z|00999|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.120 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.393 243456 INFO nova.compute.manager [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Unrescuing
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.394 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.395 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquired lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.395 243456 DEBUG nova.network.neutron [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:33 compute-0 nova_compute[243452]: 2026-02-28 10:20:33.941 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.105 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Successfully updated port: 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.125 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.125 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.126 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.215 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.215 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.216 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.216 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 WARNING nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.217 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG oslo_concurrency.lockutils [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.218 243456 DEBUG nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.219 243456 WARNING nova.compute.manager [req-d948fe28-6815-453c-a1c7-f026c258c57c req-96009592-3d6b-46a0-8315-fd5a35bb1f8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG nova.compute.manager [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG nova.compute.manager [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.351 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 495 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.419 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:20:34 compute-0 nova_compute[243452]: 2026-02-28 10:20:34.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.187 243456 DEBUG nova.network.neutron [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [{"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.209 243456 DEBUG oslo_concurrency.lockutils [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Releasing lock "refresh_cache-080f8608-f57f-4ffa-a966-ae62df8f6f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.210 243456 DEBUG nova.objects.instance [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'flavor' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:35 compute-0 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 10:20:35 compute-0 NetworkManager[49805]: <info>  [1772274035.2825] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01000|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01001|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01002|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.302 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.303 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.304 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecb81e9-f3d8-4cbc-be39-527465a21bae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:35 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 10:20:35 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Consumed 3.110s CPU time.
Feb 28 10:20:35 compute-0 systemd-machined[209480]: Machine qemu-126-instance-00000063 terminated.
Feb 28 10:20:35 compute-0 ceph-mon[76304]: pgmap v1654: 305 pgs: 305 active+clean; 495 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 28 10:20:35 compute-0 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 10:20:35 compute-0 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.470 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.471 243456 DEBUG nova.objects.instance [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:35 compute-0 kernel: tapa98f753e-a6: entered promiscuous mode
Feb 28 10:20:35 compute-0 systemd-udevd[330725]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01003|binding|INFO|Claiming lport a98f753e-a6d6-4d97-b307-f08d35a37f1f for this chassis.
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01004|binding|INFO|a98f753e-a6d6-4d97-b307-f08d35a37f1f: Claiming fa:16:3e:71:c9:51 10.100.0.5
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 NetworkManager[49805]: <info>  [1772274035.5648] manager: (tapa98f753e-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Feb 28 10:20:35 compute-0 NetworkManager[49805]: <info>  [1772274035.5721] device (tapa98f753e-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:35 compute-0 NetworkManager[49805]: <info>  [1772274035.5728] device (tapa98f753e-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01005|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f ovn-installed in OVS
Feb 28 10:20:35 compute-0 ovn_controller[146846]: 2026-02-28T10:20:35Z|01006|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f up in Southbound
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.573 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.574 243456 DEBUG nova.network.neutron [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.575 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.577 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:35.578 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c161e065-016a-455c-9660-13b72e766358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:35 compute-0 systemd-machined[209480]: New machine qemu-127-instance-00000063.
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.606 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.607 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance network_info: |[{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.607 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:35 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000063.
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.608 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.613 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start _get_guest_xml network_info=[{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.621 243456 WARNING nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.628 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.630 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.643 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.645 243456 DEBUG nova.virt.libvirt.host [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.645 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.646 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.647 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.648 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.649 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.650 243456 DEBUG nova.virt.hardware [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:20:35 compute-0 nova_compute[243452]: 2026-02-28 10:20:35.655 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290161541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.235 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.279 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.287 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 532 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Feb 28 10:20:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4290161541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.557 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.559 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.560 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.561 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.561 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.562 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.563 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.563 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.564 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.564 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.565 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.565 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.566 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.567 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.567 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.568 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.568 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 DEBUG oslo_concurrency.lockutils [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 DEBUG nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.569 243456 WARNING nova.compute.manager [req-a1566920-bf84-4add-a5a7-c6822d377eab req-023768ca-fbd7-4934-ae45-076d64a044dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state rescued and task_state unrescuing.
Feb 28 10:20:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135467290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.832 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.835 243456 DEBUG nova.virt.libvirt.vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:31Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.835 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.839 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.843 243456 DEBUG nova.objects.instance [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'pci_devices' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.880 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <uuid>c4a33511-0908-4787-82f4-79505aa9d436</uuid>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <name>instance-00000064</name>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684</nova:name>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:20:35</nova:creationTime>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:user uuid="d000e26b1aaf4a60bd2c928412e59ca5">tempest-TestSecurityGroupsBasicOps-1952070192-project-member</nova:user>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:project uuid="1dab7067181d43f1acb702fce4ca882c">tempest-TestSecurityGroupsBasicOps-1952070192</nova:project>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <nova:port uuid="8f265ce7-668d-4462-8ac4-a9487fc7d3cd">
Feb 28 10:20:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="serial">c4a33511-0908-4787-82f4-79505aa9d436</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="uuid">c4a33511-0908-4787-82f4-79505aa9d436</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4a33511-0908-4787-82f4-79505aa9d436_disk">
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c4a33511-0908-4787-82f4-79505aa9d436_disk.config">
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:cb:75:fe"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <target dev="tap8f265ce7-66"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/console.log" append="off"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.882 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Preparing to wait for external event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.882 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.883 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.883 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.884 243456 DEBUG nova.virt.libvirt.vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:20:31Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.884 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.885 243456 DEBUG nova.network.os_vif_util [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.886 243456 DEBUG os_vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.887 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.887 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.890 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f265ce7-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.891 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f265ce7-66, col_values=(('external_ids', {'iface-id': '8f265ce7-668d-4462-8ac4-a9487fc7d3cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:75:fe', 'vm-uuid': 'c4a33511-0908-4787-82f4-79505aa9d436'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:36 compute-0 NetworkManager[49805]: <info>  [1772274036.8936] manager: (tap8f265ce7-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.902 243456 INFO os_vif [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66')
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.960 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.961 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.961 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] No VIF found with MAC fa:16:3e:cb:75:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.962 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Using config drive
Feb 28 10:20:36 compute-0 nova_compute[243452]: 2026-02-28 10:20:36.986 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.027 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 080f8608-f57f-4ffa-a966-ae62df8f6f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.028 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274037.0246725, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Resumed (Lifecycle Event)
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.054 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.058 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.081 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (unrescuing). Skip.
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.081 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274037.031372, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.082 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Started (Lifecycle Event)
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.100 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.106 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.133 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] During sync_power_state the instance has a pending task (unrescuing). Skip.
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.361 243456 DEBUG nova.compute.manager [None req-8715f514-03fd-46a8-9c45-ee942edd0c27 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:37 compute-0 ceph-mon[76304]: pgmap v1655: 305 pgs: 305 active+clean; 532 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Feb 28 10:20:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1135467290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.495 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.496 243456 DEBUG nova.network.neutron [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.525 243456 DEBUG oslo_concurrency.lockutils [req-99bbe8fd-f180-4a00-a218-6eab02723577 req-b7bd5e3f-747c-4bc5-b45a-0d452be80215 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.773 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Creating config drive at /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.779 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpic30z_x9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.922 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpic30z_x9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.967 243456 DEBUG nova.storage.rbd_utils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] rbd image c4a33511-0908-4787-82f4-79505aa9d436_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:37 compute-0 nova_compute[243452]: 2026-02-28 10:20:37.974 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config c4a33511-0908-4787-82f4-79505aa9d436_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.131 243456 DEBUG oslo_concurrency.processutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config c4a33511-0908-4787-82f4-79505aa9d436_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.133 243456 INFO nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deleting local config drive /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436/disk.config because it was imported into RBD.
Feb 28 10:20:38 compute-0 kernel: tap8f265ce7-66: entered promiscuous mode
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.1897] manager: (tap8f265ce7-66): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01007|binding|INFO|Claiming lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd for this chassis.
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01008|binding|INFO|8f265ce7-668d-4462-8ac4-a9487fc7d3cd: Claiming fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.208 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:75:fe 10.100.0.6'], port_security=['fa:16:3e:cb:75:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4a33511-0908-4787-82f4-79505aa9d436', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37214c09-5017-4e54-bd29-785084655f44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1dab7067181d43f1acb702fce4ca882c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b33bdd64-2933-4bbe-ba3e-cc39acc8701b e5271994-622a-4bbd-b4e6-a7d717e49d1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=564ae5b8-c7a3-416a-9979-7200cc2a4584, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f265ce7-668d-4462-8ac4-a9487fc7d3cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.2097] device (tap8f265ce7-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.2106] device (tap8f265ce7-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01009|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd ovn-installed in OVS
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01010|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd up in Southbound
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.211 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd in datapath 37214c09-5017-4e54-bd29-785084655f44 bound to our chassis
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37214c09-5017-4e54-bd29-785084655f44
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a11a7-5b68-4af7-b551-e1a422c3e721]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.232 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37214c09-51 in ovnmeta-37214c09-5017-4e54-bd29-785084655f44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.235 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37214c09-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a555ce01-1bd8-4d17-a9aa-079d0e0083b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50a64cb4-0eb5-4c83-a8a3-3440209cf328]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 systemd-machined[209480]: New machine qemu-128-instance-00000064.
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.256 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4ed5fd-ef8b-40a6-9277-8ce12b17eacb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000064.
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.284 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93fe1736-233d-4b71-9beb-9b9f44fa9ffe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.320 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce7fbb9-cf71-45dc-87ea-b904706152f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.3290] manager: (tap37214c09-50): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b88cd2-d396-4665-b03b-79fc4229860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 509 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.374 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5e27ad8b-dfef-48a7-a5bc-67f8b38f1de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 systemd-udevd[330965]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.379 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c6b1c1-a0e7-4def-8a4a-64bb86b0815b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.4143] device (tap37214c09-50): carrier: link connected
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.420 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60e86aa6-57ca-47c3-ab2d-bd63b6dade96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2eda440a-3952-4629-869b-ec78329b14f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37214c09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:31:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554570, 'reachable_time': 28129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330984, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ada63865-185b-4ac6-8139-be5ae8b4988d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:3160'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554570, 'tstamp': 554570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330985, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.482 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6eedc2d5-5ff0-49d8-930f-cd5cd86f4c90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37214c09-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:31:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554570, 'reachable_time': 28129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330986, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.522 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14488879-a46a-429c-acdd-b7c3ba9d29f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1f351a-6bcd-4fe3-a181-6525a789a6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.602 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37214c09-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.603 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.604 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37214c09-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:38 compute-0 kernel: tap37214c09-50: entered promiscuous mode
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.6082] manager: (tap37214c09-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37214c09-50, col_values=(('external_ids', {'iface-id': '84e4a764-c038-44dc-af65-1b856dd92486'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01011|binding|INFO|Releasing lport 84e4a764-c038-44dc-af65-1b856dd92486 from this chassis (sb_readonly=0)
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.621 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.626 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16be51e0-3989-4077-b498-0e8495ebabce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.628 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-37214c09-5017-4e54-bd29-785084655f44
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/37214c09-5017-4e54-bd29-785084655f44.pid.haproxy
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 37214c09-5017-4e54-bd29-785084655f44
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.630 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'env', 'PROCESS_TAG=haproxy-37214c09-5017-4e54-bd29-785084655f44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37214c09-5017-4e54-bd29-785084655f44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:20:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.656 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.657 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.658 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.658 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.659 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Processing event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.659 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.660 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.661 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.662 243456 DEBUG oslo_concurrency.lockutils [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.662 243456 DEBUG nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.663 243456 WARNING nova.compute.manager [req-2fac99b7-2a5f-4091-8204-be93cface995 req-d3916823-f0f1-4efe-aafd-9b25bd3a0bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received unexpected event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with vm_state building and task_state spawning.
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.750 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.752 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.7494185, c4a33511-0908-4787-82f4-79505aa9d436 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.752 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Started (Lifecycle Event)
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.765 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.771 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance spawned successfully.
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.771 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.776 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.780 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.809 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.810 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.749851, c4a33511-0908-4787-82f4-79505aa9d436 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.811 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Paused (Lifecycle Event)
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.822 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.823 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.824 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.824 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.825 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.826 243456 DEBUG nova.virt.libvirt.driver [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.841 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.846 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274038.7636118, c4a33511-0908-4787-82f4-79505aa9d436 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.846 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Resumed (Lifecycle Event)
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.892 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.897 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.902 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.903 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.904 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.906 243456 INFO nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Terminating instance
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.907 243456 DEBUG nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.937 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:38 compute-0 kernel: tapa98f753e-a6 (unregistering): left promiscuous mode
Feb 28 10:20:38 compute-0 NetworkManager[49805]: <info>  [1772274038.9477] device (tapa98f753e-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.953 243456 INFO nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 7.56 seconds to spawn the instance on the hypervisor.
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.953 243456 DEBUG nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01012|binding|INFO|Releasing lport a98f753e-a6d6-4d97-b307-f08d35a37f1f from this chassis (sb_readonly=0)
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01013|binding|INFO|Setting lport a98f753e-a6d6-4d97-b307-f08d35a37f1f down in Southbound
Feb 28 10:20:38 compute-0 ovn_controller[146846]: 2026-02-28T10:20:38Z|01014|binding|INFO|Removing iface tapa98f753e-a6 ovn-installed in OVS
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 nova_compute[243452]: 2026-02-28 10:20:38.971 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:38.975 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:c9:51 10.100.0.5'], port_security=['fa:16:3e:71:c9:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '080f8608-f57f-4ffa-a966-ae62df8f6f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a98f753e-a6d6-4d97-b307-f08d35a37f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:38 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 28 10:20:38 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000063.scope: Consumed 3.403s CPU time.
Feb 28 10:20:38 compute-0 systemd-machined[209480]: Machine qemu-127-instance-00000063 terminated.
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.062 243456 INFO nova.compute.manager [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 8.73 seconds to build instance.
Feb 28 10:20:39 compute-0 podman[331063]: 2026-02-28 10:20:39.067489056 +0000 UTC m=+0.079204348 container create 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.092 243456 DEBUG oslo_concurrency.lockutils [None req-957d5989-1529-4b57-b283-3246e030f946 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:39 compute-0 podman[331063]: 2026-02-28 10:20:39.023443191 +0000 UTC m=+0.035158563 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:20:39 compute-0 systemd[1]: Started libpod-conmon-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope.
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.147 243456 INFO nova.virt.libvirt.driver [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Instance destroyed successfully.
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.147 243456 DEBUG nova.objects.instance [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid 080f8608-f57f-4ffa-a966-ae62df8f6f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:20:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248dfaf5b43c5e78179cc2d605133b83fbf37721a8e21412e8df26c566d94eb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.168 243456 DEBUG nova.virt.libvirt.vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1593211213',display_name='tempest-ServerRescueTestJSON-server-1593211213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1593211213',id=99,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-xm64dw2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:37Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=080f8608-f57f-4ffa-a966-ae62df8f6f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.168 243456 DEBUG nova.network.os_vif_util [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "address": "fa:16:3e:71:c9:51", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa98f753e-a6", "ovs_interfaceid": "a98f753e-a6d6-4d97-b307-f08d35a37f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.169 243456 DEBUG nova.network.os_vif_util [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.169 243456 DEBUG os_vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa98f753e-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:39 compute-0 podman[331063]: 2026-02-28 10:20:39.176878454 +0000 UTC m=+0.188593786 container init 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.178 243456 INFO os_vif [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:c9:51,bridge_name='br-int',has_traffic_filtering=True,id=a98f753e-a6d6-4d97-b307-f08d35a37f1f,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa98f753e-a6')
Feb 28 10:20:39 compute-0 podman[331063]: 2026-02-28 10:20:39.184049698 +0000 UTC m=+0.195765020 container start 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:20:39 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : New worker (331110) forked
Feb 28 10:20:39 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : Loading success.
Feb 28 10:20:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.277 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a98f753e-a6d6-4d97-b307-f08d35a37f1f in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:20:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.279 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:39.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6a3bf-d228-4c5b-8e4b-a8c963168242]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.388 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.388 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.413 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:20:39 compute-0 ceph-mon[76304]: pgmap v1656: 305 pgs: 305 active+clean; 509 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.495 243456 INFO nova.virt.libvirt.driver [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deleting instance files /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b_del
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.496 243456 INFO nova.virt.libvirt.driver [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deletion of /var/lib/nova/instances/080f8608-f57f-4ffa-a966-ae62df8f6f9b_del complete
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.535 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.536 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.544 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.545 243456 INFO nova.compute.claims [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 INFO nova.compute.manager [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 DEBUG oslo.service.loopingcall [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.564 243456 DEBUG nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.565 243456 DEBUG nova.network.neutron [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.604 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274024.6034784, 690896df-6307-469c-9685-325a61a62b88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.605 243456 INFO nova.compute.manager [-] [instance: 690896df-6307-469c-9685-325a61a62b88] VM Stopped (Lifecycle Event)
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.637 243456 DEBUG nova.compute.manager [None req-b4928298-7a11-4eed-b764-2383b31c9306 - - - - - -] [instance: 690896df-6307-469c-9685-325a61a62b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:39 compute-0 nova_compute[243452]: 2026-02-28 10:20:39.760 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141303139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.303 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.313 243456 DEBUG nova.compute.provider_tree [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 464 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 10:20:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4141303139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.782 243456 DEBUG nova.scheduler.client.report [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.835 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-unplugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.836 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG oslo_concurrency.lockutils [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 DEBUG nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] No waiting events found dispatching network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.837 243456 WARNING nova.compute.manager [req-c689d01a-b4ee-4116-b172-9c59ebbf7e78 req-8b978275-8778-42da-8862-8537fd73d0c1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received unexpected event network-vif-plugged-a98f753e-a6d6-4d97-b307-f08d35a37f1f for instance with vm_state active and task_state deleting.
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.846 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.846 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:20:40 compute-0 nova_compute[243452]: 2026-02-28 10:20:40.895 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027284727141890838 of space, bias 1.0, pg target 0.8185418142567251 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493165111746679 of space, bias 1.0, pg target 0.7479495335240037 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.61893063059789e-07 of space, bias 4.0, pg target 0.0009142716756717468 quantized to 16 (current 16)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:20:40 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.031 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.146 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.261 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.262 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.263 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating image(s)
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.291 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.320 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.350 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.354 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.422 243456 DEBUG nova.network.neutron [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.429 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.429 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.430 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.430 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.457 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:41 compute-0 ceph-mon[76304]: pgmap v1657: 305 pgs: 305 active+clean; 464 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.462 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.492 243456 INFO nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Took 1.93 seconds to deallocate network for instance.
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.698 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.738 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.739 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.792 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] resizing rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.840 243456 DEBUG nova.compute.manager [req-5660b58d-db96-47d5-9cfa-8c1e346390d5 req-29fcd91f-ee0c-48e0-aa4c-2db078f93635 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Received event network-vif-deleted-a98f753e-a6d6-4d97-b307-f08d35a37f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.881 243456 DEBUG nova.objects.instance [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'migration_context' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.939 243456 DEBUG oslo_concurrency.processutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.979 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.980 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ensure instance console log exists: /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.981 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.983 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.988 243456 WARNING nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.995 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.995 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.998 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.998 243456 DEBUG nova.virt.libvirt.host [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:20:41 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.999 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:41.999 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.000 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.001 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.002 243456 DEBUG nova.virt.hardware [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.006 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 437 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 252 op/s
Feb 28 10:20:42 compute-0 ceph-mon[76304]: pgmap v1658: 305 pgs: 305 active+clean; 437 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 252 op/s
Feb 28 10:20:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/753415033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.525 243456 DEBUG oslo_concurrency.processutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.535 243456 DEBUG nova.compute.provider_tree [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212162173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.560 243456 DEBUG nova.scheduler.client.report [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.575 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.613 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.620 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.661 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.690 243456 INFO nova.scheduler.client.report [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Deleted allocations for instance 080f8608-f57f-4ffa-a966-ae62df8f6f9b
Feb 28 10:20:42 compute-0 nova_compute[243452]: 2026-02-28 10:20:42.785 243456 DEBUG oslo_concurrency.lockutils [None req-8afc2e5a-ec05-447c-bd9c-7d45f0d5aea6 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "080f8608-f57f-4ffa-a966-ae62df8f6f9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:20:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047918325' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.180 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.182 243456 DEBUG nova.objects.instance [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.221 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <uuid>98150245-079d-43f8-bbd9-3d12a8f26719</uuid>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <name>instance-00000065</name>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV257Test-server-2037913836</nova:name>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:20:41</nova:creationTime>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:user uuid="17ee551657ed4a4c8a2f040ff863ad9a">tempest-ServerShowV257Test-1598019138-project-member</nova:user>
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <nova:project uuid="124d8457f7e342f1ab81af27d8c3ba3a">tempest-ServerShowV257Test-1598019138</nova:project>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <system>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="serial">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="uuid">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </system>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <os>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </os>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <features>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </features>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk">
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk.config">
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:20:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log" append="off"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <video>
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </video>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:20:43 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:20:43 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:20:43 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:20:43 compute-0 nova_compute[243452]: </domain>
Feb 28 10:20:43 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.304 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.305 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.307 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Using config drive
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.344 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/753415033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1212162173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3047918325' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.618 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating config drive at /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.626 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwz4w93sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.778 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwz4w93sb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.811 243456 DEBUG nova.storage.rbd_utils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.816 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.872 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.873 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.874 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.876 243456 INFO nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Terminating instance
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.877 243456 DEBUG nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:20:43 compute-0 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 10:20:43 compute-0 NetworkManager[49805]: <info>  [1772274043.9367] device (tap2f9562b0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:20:43 compute-0 ovn_controller[146846]: 2026-02-28T10:20:43Z|01015|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 10:20:43 compute-0 ovn_controller[146846]: 2026-02-28T10:20:43Z|01016|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down in Southbound
Feb 28 10:20:43 compute-0 ovn_controller[146846]: 2026-02-28T10:20:43Z|01017|binding|INFO|Removing iface tap2f9562b0-54 ovn-installed in OVS
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.961 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.964 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:20:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.966 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:43 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 28 10:20:43 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000060.scope: Consumed 13.010s CPU time.
Feb 28 10:20:43 compute-0 systemd-machined[209480]: Machine qemu-120-instance-00000060 terminated.
Feb 28 10:20:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:43.971 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34996b-2852-430f-9645-59d2299ba175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.974 243456 DEBUG nova.compute.manager [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.974 243456 DEBUG nova.compute.manager [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.975 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.976 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:20:43 compute-0 nova_compute[243452]: 2026-02-28 10:20:43.976 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.023 243456 DEBUG oslo_concurrency.processutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.024 243456 INFO nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting local config drive /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config because it was imported into RBD.
Feb 28 10:20:44 compute-0 podman[331457]: 2026-02-28 10:20:44.026294124 +0000 UTC m=+0.061093002 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:20:44 compute-0 podman[331454]: 2026-02-28 10:20:44.040850159 +0000 UTC m=+0.075525193 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:20:44 compute-0 systemd-udevd[331474]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:20:44 compute-0 kernel: tap2f9562b0-54: entered promiscuous mode
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 NetworkManager[49805]: <info>  [1772274044.0992] manager: (tap2f9562b0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/434)
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01018|binding|INFO|Claiming lport 2f9562b0-54ce-4c24-9341-33a674532bf0 for this chassis.
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01019|binding|INFO|2f9562b0-54ce-4c24-9341-33a674532bf0: Claiming fa:16:3e:db:07:b7 10.100.0.9
Feb 28 10:20:44 compute-0 kernel: tap2f9562b0-54 (unregistering): left promiscuous mode
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01020|binding|INFO|Setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 ovn-installed in OVS
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01021|if_status|INFO|Dropped 1 log messages in last 242 seconds (most recently, 242 seconds ago) due to excessive rate
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01022|if_status|INFO|Not setting lport 2f9562b0-54ce-4c24-9341-33a674532bf0 down as sb is readonly
Feb 28 10:20:44 compute-0 systemd-machined[209480]: New machine qemu-129-instance-00000065.
Feb 28 10:20:44 compute-0 ovn_controller[146846]: 2026-02-28T10:20:44Z|01023|binding|INFO|Releasing lport 2f9562b0-54ce-4c24-9341-33a674532bf0 from this chassis (sb_readonly=0)
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.131 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.132 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e bound to our chassis
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.132 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:44 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000065.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.134 243456 INFO nova.virt.libvirt.driver [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Instance destroyed successfully.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.135 243456 DEBUG nova.objects.instance [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lazy-loading 'resources' on Instance uuid ea5efc55-0a5e-435e-9805-9a9726c17eda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.133 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddf3432-ac02-4146-bf76-011db5eae7a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.136 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:07:b7 10.100.0.9'], port_security=['fa:16:3e:db:07:b7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea5efc55-0a5e-435e-9805-9a9726c17eda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ba87a0d-4c27-4844-a55f-2927dfe4893e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3882eded03594958a2e5d10832a6c3a9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '045c4763-a218-4cca-a882-e267b0c43dad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d941e6-1790-487e-9400-385b4660d2ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2f9562b0-54ce-4c24-9341-33a674532bf0) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.137 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2f9562b0-54ce-4c24-9341-33a674532bf0 in datapath 9ba87a0d-4c27-4844-a55f-2927dfe4893e unbound from our chassis
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.138 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9ba87a0d-4c27-4844-a55f-2927dfe4893e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.139 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:44.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c40493cc-e17c-4141-ad51-98ad8044f81c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.146 243456 DEBUG nova.virt.libvirt.vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1998552864',display_name='tempest-ServerRescueTestJSON-server-1998552864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1998552864',id=96,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3882eded03594958a2e5d10832a6c3a9',ramdisk_id='',reservation_id='r-fussapqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-2101936935',owner_user_name='tempest-ServerRescueTestJSON-2101936935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:51Z,user_data=None,user_id='8d03850a765742908401b28b9f983e96',uuid=ea5efc55-0a5e-435e-9805-9a9726c17eda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG nova.network.os_vif_util [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converting VIF {"id": "2f9562b0-54ce-4c24-9341-33a674532bf0", "address": "fa:16:3e:db:07:b7", "network": {"id": "9ba87a0d-4c27-4844-a55f-2927dfe4893e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1242488818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "3882eded03594958a2e5d10832a6c3a9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f9562b0-54", "ovs_interfaceid": "2f9562b0-54ce-4c24-9341-33a674532bf0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG nova.network.os_vif_util [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.147 243456 DEBUG os_vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.152 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f9562b0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.159 243456 INFO os_vif [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:07:b7,bridge_name='br-int',has_traffic_filtering=True,id=2f9562b0-54ce-4c24-9341-33a674532bf0,network=Network(9ba87a0d-4c27-4844-a55f-2927dfe4893e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f9562b0-54')
Feb 28 10:20:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 428 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.1 MiB/s wr, 311 op/s
Feb 28 10:20:44 compute-0 ceph-mon[76304]: pgmap v1659: 305 pgs: 305 active+clean; 428 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.1 MiB/s wr, 311 op/s
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.626 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274044.6254103, 98150245-079d-43f8-bbd9-3d12a8f26719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.627 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Resumed (Lifecycle Event)
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.635 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.636 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.643 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance spawned successfully.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.643 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.657 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.660 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.661 243456 DEBUG oslo_concurrency.lockutils [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.662 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.662 243456 DEBUG nova.compute.manager [req-79d5d539-79ff-47e3-abc6-e5c1d814acf5 req-89c3ffc8-fa12-4a55-b6e3-bb96b7704846 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-unplugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.668 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.674 243456 INFO nova.virt.libvirt.driver [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deleting instance files /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda_del
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.675 243456 INFO nova.virt.libvirt.driver [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deletion of /var/lib/nova/instances/ea5efc55-0a5e-435e-9805-9a9726c17eda_del complete
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.679 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.680 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.681 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.681 243456 DEBUG nova.virt.libvirt.driver [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.774 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.775 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274044.6272006, 98150245-079d-43f8-bbd9-3d12a8f26719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.775 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Started (Lifecycle Event)
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.803 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.813 243456 INFO nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 3.55 seconds to spawn the instance on the hypervisor.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.813 243456 DEBUG nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.825 243456 INFO nova.compute.manager [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 0.95 seconds to destroy the instance on the hypervisor.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.826 243456 DEBUG oslo.service.loopingcall [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.826 243456 DEBUG nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.827 243456 DEBUG nova.network.neutron [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.846 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.906 243456 INFO nova.compute.manager [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 5.40 seconds to build instance.
Feb 28 10:20:44 compute-0 nova_compute[243452]: 2026-02-28 10:20:44.965 243456 DEBUG oslo_concurrency.lockutils [None req-b12df446-7a22-49d7-9af1-369a89308f20 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:20:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:20:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:20:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:20:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:20:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/484818350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.160 243456 DEBUG nova.network.neutron [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.178 243456 INFO nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Took 1.35 seconds to deallocate network for instance.
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.241 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.242 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 395 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 355 op/s
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.395 243456 DEBUG oslo_concurrency.processutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.450 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.451 243456 DEBUG nova.network.neutron [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.474 243456 DEBUG oslo_concurrency.lockutils [req-65aceb78-cfc9-435f-b8f9-4c797458115f req-e2b60602-0c93-4372-8144-4a4cbe57f3c8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:20:46 compute-0 ceph-mon[76304]: pgmap v1660: 305 pgs: 305 active+clean; 395 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 355 op/s
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.841 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.842 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG oslo_concurrency.lockutils [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] No waiting events found dispatching network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.843 243456 WARNING nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received unexpected event network-vif-plugged-2f9562b0-54ce-4c24-9341-33a674532bf0 for instance with vm_state deleted and task_state None.
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.844 243456 DEBUG nova.compute.manager [req-135c7d13-47f1-42c3-86cc-eb7e9c203493 req-1f20d14c-0d4c-4b4d-a1f6-7710bb4cd573 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Received event network-vif-deleted-2f9562b0-54ce-4c24-9341-33a674532bf0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:20:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:20:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164320592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.965 243456 DEBUG oslo_concurrency.processutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.971 243456 DEBUG nova.compute.provider_tree [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:20:46 compute-0 nova_compute[243452]: 2026-02-28 10:20:46.990 243456 DEBUG nova.scheduler.client.report [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.031 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.083 243456 INFO nova.scheduler.client.report [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Deleted allocations for instance ea5efc55-0a5e-435e-9805-9a9726c17eda
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.154 243456 DEBUG oslo_concurrency.lockutils [None req-9eaa0560-5371-44d8-bd7c-03e92dfdc209 8d03850a765742908401b28b9f983e96 3882eded03594958a2e5d10832a6c3a9 - - default default] Lock "ea5efc55-0a5e-435e-9805-9a9726c17eda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.177 243456 INFO nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Rebuilding instance
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.487 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.511 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.562 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_requests' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.578 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.602 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'resources' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.620 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'migration_context' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:20:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4164320592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.649 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:20:47 compute-0 nova_compute[243452]: 2026-02-28 10:20:47.653 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:20:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 343 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 337 op/s
Feb 28 10:20:48 compute-0 ceph-mon[76304]: pgmap v1661: 305 pgs: 305 active+clean; 343 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 337 op/s
Feb 28 10:20:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:48 compute-0 nova_compute[243452]: 2026-02-28 10:20:48.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:49 compute-0 nova_compute[243452]: 2026-02-28 10:20:49.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:49 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 28 10:20:49 compute-0 ovn_controller[146846]: 2026-02-28T10:20:49Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 10:20:49 compute-0 ovn_controller[146846]: 2026-02-28T10:20:49Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:75:fe 10.100.0.6
Feb 28 10:20:50 compute-0 ovn_controller[146846]: 2026-02-28T10:20:50Z|01024|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 10:20:50 compute-0 ovn_controller[146846]: 2026-02-28T10:20:50Z|01025|binding|INFO|Releasing lport 84e4a764-c038-44dc-af65-1b856dd92486 from this chassis (sb_readonly=0)
Feb 28 10:20:50 compute-0 nova_compute[243452]: 2026-02-28 10:20:50.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 332 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 329 op/s
Feb 28 10:20:52 compute-0 ceph-mon[76304]: pgmap v1662: 305 pgs: 305 active+clean; 332 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 329 op/s
Feb 28 10:20:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 340 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.2 MiB/s wr, 305 op/s
Feb 28 10:20:52 compute-0 sshd-session[331606]: Invalid user solana from 45.148.10.240 port 34572
Feb 28 10:20:52 compute-0 sshd-session[331606]: Connection closed by invalid user solana 45.148.10.240 port 34572 [preauth]
Feb 28 10:20:53 compute-0 ceph-mon[76304]: pgmap v1663: 305 pgs: 305 active+clean; 340 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.2 MiB/s wr, 305 op/s
Feb 28 10:20:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:53 compute-0 nova_compute[243452]: 2026-02-28 10:20:53.965 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:54 compute-0 nova_compute[243452]: 2026-02-28 10:20:54.144 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274039.14271, 080f8608-f57f-4ffa-a966-ae62df8f6f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:54 compute-0 nova_compute[243452]: 2026-02-28 10:20:54.145 243456 INFO nova.compute.manager [-] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] VM Stopped (Lifecycle Event)
Feb 28 10:20:54 compute-0 nova_compute[243452]: 2026-02-28 10:20:54.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:54 compute-0 nova_compute[243452]: 2026-02-28 10:20:54.169 243456 DEBUG nova.compute.manager [None req-c73c8896-0bb4-4124-9582-9aaed9b59974 - - - - - -] [instance: 080f8608-f57f-4ffa-a966-ae62df8f6f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 355 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 28 10:20:55 compute-0 ceph-mon[76304]: pgmap v1664: 305 pgs: 305 active+clean; 355 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 28 10:20:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 364 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 237 op/s
Feb 28 10:20:57 compute-0 ceph-mon[76304]: pgmap v1665: 305 pgs: 305 active+clean; 364 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 237 op/s
Feb 28 10:20:57 compute-0 nova_compute[243452]: 2026-02-28 10:20:57.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.859 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:20:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:20:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:20:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:20:58 compute-0 nova_compute[243452]: 2026-02-28 10:20:58.120 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:20:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 382 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Feb 28 10:20:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:20:58 compute-0 nova_compute[243452]: 2026-02-28 10:20:58.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:59 compute-0 nova_compute[243452]: 2026-02-28 10:20:59.125 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274044.1239626, ea5efc55-0a5e-435e-9805-9a9726c17eda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:20:59 compute-0 nova_compute[243452]: 2026-02-28 10:20:59.126 243456 INFO nova.compute.manager [-] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] VM Stopped (Lifecycle Event)
Feb 28 10:20:59 compute-0 nova_compute[243452]: 2026-02-28 10:20:59.148 243456 DEBUG nova.compute.manager [None req-3e097d6c-6827-4017-aee7-fd42a9520aef - - - - - -] [instance: ea5efc55-0a5e-435e-9805-9a9726c17eda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:20:59 compute-0 nova_compute[243452]: 2026-02-28 10:20:59.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:20:59 compute-0 ceph-mon[76304]: pgmap v1666: 305 pgs: 305 active+clean; 382 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 390 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Feb 28 10:21:00 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 28 10:21:00 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Consumed 11.538s CPU time.
Feb 28 10:21:00 compute-0 systemd-machined[209480]: Machine qemu-129-instance-00000065 terminated.
Feb 28 10:21:00 compute-0 ceph-mon[76304]: pgmap v1667: 305 pgs: 305 active+clean; 390 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.135 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance shutdown successfully after 13 seconds.
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.142 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.148 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.471 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting instance files /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.472 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deletion of /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del complete
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.638 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.639 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating image(s)
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.674 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.710 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.747 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.752 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.835 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.837 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.838 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.838 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.871 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.875 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:01 compute-0 nova_compute[243452]: 2026-02-28 10:21:01.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.109 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 98150245-079d-43f8-bbd9-3d12a8f26719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.175 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] resizing rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.267 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.269 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Ensure instance console log exists: /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.270 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.270 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.271 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.273 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.278 243456 WARNING nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.290 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.291 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.294 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.294 243456 DEBUG nova.virt.libvirt.host [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.295 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.295 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.296 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.297 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.298 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.299 243456 DEBUG nova.virt.hardware [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.299 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.325 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 391 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Feb 28 10:21:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572976600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.898 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.932 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:02 compute-0 nova_compute[243452]: 2026-02-28 10:21:02.936 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:03 compute-0 ceph-mon[76304]: pgmap v1668: 305 pgs: 305 active+clean; 391 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Feb 28 10:21:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1572976600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538922934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.466 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.469 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <uuid>98150245-079d-43f8-bbd9-3d12a8f26719</uuid>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <name>instance-00000065</name>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV257Test-server-2037913836</nova:name>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:21:02</nova:creationTime>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:user uuid="17ee551657ed4a4c8a2f040ff863ad9a">tempest-ServerShowV257Test-1598019138-project-member</nova:user>
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <nova:project uuid="124d8457f7e342f1ab81af27d8c3ba3a">tempest-ServerShowV257Test-1598019138</nova:project>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <system>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="serial">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="uuid">98150245-079d-43f8-bbd9-3d12a8f26719</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </system>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <os>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </os>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <features>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </features>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk">
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/98150245-079d-43f8-bbd9-3d12a8f26719_disk.config">
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:03 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/console.log" append="off"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <video>
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </video>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:21:03 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:21:03 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:21:03 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:21:03 compute-0 nova_compute[243452]: </domain>
Feb 28 10:21:03 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.532 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.533 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.534 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Using config drive
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.567 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.591 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.626 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'keypairs' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.857 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Creating config drive at /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.866 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwbghj97n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:03 compute-0 nova_compute[243452]: 2026-02-28 10:21:03.969 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.012 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwbghj97n" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.049 243456 DEBUG nova.storage.rbd_utils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] rbd image 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.054 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.226 243456 DEBUG oslo_concurrency.processutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config 98150245-079d-43f8-bbd9-3d12a8f26719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.226 243456 INFO nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting local config drive /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719/disk.config because it was imported into RBD.
Feb 28 10:21:04 compute-0 systemd-machined[209480]: New machine qemu-130-instance-00000065.
Feb 28 10:21:04 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000065.
Feb 28 10:21:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 373 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Feb 28 10:21:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/538922934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.687 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.688 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.706 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.715 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 98150245-079d-43f8-bbd9-3d12a8f26719 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.716 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274064.7145927, 98150245-079d-43f8-bbd9-3d12a8f26719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.716 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Resumed (Lifecycle Event)
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.718 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.719 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.723 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance spawned successfully.
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.723 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.744 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.748 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.767 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.768 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.769 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.769 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.770 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.770 243456 DEBUG nova.virt.libvirt.driver [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.775 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.775 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274064.7183204, 98150245-079d-43f8-bbd9-3d12a8f26719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.776 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Started (Lifecycle Event)
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.792 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.793 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.801 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.802 243456 INFO nova.compute.claims [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.814 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.818 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.850 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.864 243456 DEBUG nova.compute.manager [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.938 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG nova.compute.manager [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG nova.compute.manager [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing instance network info cache due to event network-changed-8f265ce7-668d-4462-8ac4-a9487fc7d3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.941 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.942 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Refreshing network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:04 compute-0 nova_compute[243452]: 2026-02-28 10:21:04.982 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.017 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.018 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.018 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.019 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.019 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.021 243456 INFO nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Terminating instance
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.022 243456 DEBUG nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:21:05 compute-0 kernel: tap8f265ce7-66 (unregistering): left promiscuous mode
Feb 28 10:21:05 compute-0 NetworkManager[49805]: <info>  [1772274065.0686] device (tap8f265ce7-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 ovn_controller[146846]: 2026-02-28T10:21:05Z|01026|binding|INFO|Releasing lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd from this chassis (sb_readonly=0)
Feb 28 10:21:05 compute-0 ovn_controller[146846]: 2026-02-28T10:21:05Z|01027|binding|INFO|Setting lport 8f265ce7-668d-4462-8ac4-a9487fc7d3cd down in Southbound
Feb 28 10:21:05 compute-0 ovn_controller[146846]: 2026-02-28T10:21:05Z|01028|binding|INFO|Removing iface tap8f265ce7-66 ovn-installed in OVS
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.082 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:75:fe 10.100.0.6'], port_security=['fa:16:3e:cb:75:fe 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4a33511-0908-4787-82f4-79505aa9d436', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37214c09-5017-4e54-bd29-785084655f44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1dab7067181d43f1acb702fce4ca882c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b33bdd64-2933-4bbe-ba3e-cc39acc8701b e5271994-622a-4bbd-b4e6-a7d717e49d1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=564ae5b8-c7a3-416a-9979-7200cc2a4584, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f265ce7-668d-4462-8ac4-a9487fc7d3cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.084 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd in datapath 37214c09-5017-4e54-bd29-785084655f44 unbound from our chassis
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.086 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37214c09-5017-4e54-bd29-785084655f44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6022c8-b81c-4d84-8f96-b91527485d95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.088 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37214c09-5017-4e54-bd29-785084655f44 namespace which is not needed anymore
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000064.scope: Deactivated successfully.
Feb 28 10:21:05 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000064.scope: Consumed 12.431s CPU time.
Feb 28 10:21:05 compute-0 systemd-machined[209480]: Machine qemu-128-instance-00000064 terminated.
Feb 28 10:21:05 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : haproxy version is 2.8.14-c23fe91
Feb 28 10:21:05 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [NOTICE]   (331098) : path to executable is /usr/sbin/haproxy
Feb 28 10:21:05 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [WARNING]  (331098) : Exiting Master process...
Feb 28 10:21:05 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [ALERT]    (331098) : Current worker (331110) exited with code 143 (Terminated)
Feb 28 10:21:05 compute-0 neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44[331080]: [WARNING]  (331098) : All workers exited. Exiting... (0)
Feb 28 10:21:05 compute-0 systemd[1]: libpod-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope: Deactivated successfully.
Feb 28 10:21:05 compute-0 conmon[331080]: conmon 92886d7d5b788931cf14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope/container/memory.events
Feb 28 10:21:05 compute-0 podman[332016]: 2026-02-28 10:21:05.228910433 +0000 UTC m=+0.043650741 container died 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:21:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-248dfaf5b43c5e78179cc2d605133b83fbf37721a8e21412e8df26c566d94eb2-merged.mount: Deactivated successfully.
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.268 243456 INFO nova.virt.libvirt.driver [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Instance destroyed successfully.
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.271 243456 DEBUG nova.objects.instance [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lazy-loading 'resources' on Instance uuid c4a33511-0908-4787-82f4-79505aa9d436 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:05 compute-0 podman[332016]: 2026-02-28 10:21:05.275848848 +0000 UTC m=+0.090589146 container cleanup 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:21:05 compute-0 systemd[1]: libpod-conmon-92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e.scope: Deactivated successfully.
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.306 243456 DEBUG nova.virt.libvirt.vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:20:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1952070192-access_point-1839928684',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1952070192-ac',id=100,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHCNMYLg5p+aB2JY148QNYzXhobB6/drpl2qIRV0JTfQX5v2ytqL0PnW0jus5lqsKTS3QAwQHPeB44EmRsvT+kml+WKzSDkocdN0cBQPBu+t8DfZ2YkjQe5xHSo47UtrMg==',key_name='tempest-TestSecurityGroupsBasicOps-225180537',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:20:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1dab7067181d43f1acb702fce4ca882c',ramdisk_id='',reservation_id='r-6trxlpi6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1952070192',owner_user_name='tempest-TestSecurityGroupsBasicOps-1952070192-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:20:39Z,user_data=None,user_id='d000e26b1aaf4a60bd2c928412e59ca5',uuid=c4a33511-0908-4787-82f4-79505aa9d436,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.307 243456 DEBUG nova.network.os_vif_util [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converting VIF {"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.308 243456 DEBUG nova.network.os_vif_util [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.309 243456 DEBUG os_vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.312 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f265ce7-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.319 243456 INFO os_vif [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:75:fe,bridge_name='br-int',has_traffic_filtering=True,id=8f265ce7-668d-4462-8ac4-a9487fc7d3cd,network=Network(37214c09-5017-4e54-bd29-785084655f44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f265ce7-66')
Feb 28 10:21:05 compute-0 podman[332054]: 2026-02-28 10:21:05.341950216 +0000 UTC m=+0.046272257 container remove 92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.348 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caedfa5b-74b7-4fb2-a0f3-270b32c15d89]: (4, ('Sat Feb 28 10:21:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44 (92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e)\n92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e\nSat Feb 28 10:21:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-37214c09-5017-4e54-bd29-785084655f44 (92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e)\n92886d7d5b788931cf1449c926f5e8494ff589eec5e75a13b4f3373cf34b3c6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.350 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae6cc8b-6e88-4d5a-af6f-c1269112aeb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37214c09-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 kernel: tap37214c09-50: left promiscuous mode
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.367 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[396bca7d-98d8-4b9b-ac86-f4876bf937e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.369 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.382 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59ce3d21-fa7b-46e3-9955-01b6632d1a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.384 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4572dd6f-ca08-4318-b056-60022e8bceeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66448698-c887-4af0-9844-a8b6dc44f4d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554560, 'reachable_time': 22780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332087, 'error': None, 'target': 'ovnmeta-37214c09-5017-4e54-bd29-785084655f44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.414 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37214c09-5017-4e54-bd29-785084655f44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:21:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:05.414 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0656b8fb-6c11-4ef6-9ef5-0cc52914d30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d37214c09\x2d5017\x2d4e54\x2dbd29\x2d785084655f44.mount: Deactivated successfully.
Feb 28 10:21:05 compute-0 ceph-mon[76304]: pgmap v1669: 305 pgs: 305 active+clean; 373 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 3.7 MiB/s wr, 138 op/s
Feb 28 10:21:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312576475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.600 243456 INFO nova.virt.libvirt.driver [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deleting instance files /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436_del
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.601 243456 INFO nova.virt.libvirt.driver [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deletion of /var/lib/nova/instances/c4a33511-0908-4787-82f4-79505aa9d436_del complete
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.610 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.614 243456 DEBUG nova.compute.provider_tree [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.635 243456 DEBUG nova.scheduler.client.report [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.661 243456 INFO nova.compute.manager [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.661 243456 DEBUG oslo.service.loopingcall [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.662 243456 DEBUG nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.663 243456 DEBUG nova.network.neutron [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.668 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.668 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.672 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.673 243456 DEBUG nova.objects.instance [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.741 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.742 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.747 243456 DEBUG oslo_concurrency.lockutils [None req-b5e6fcc9-98ba-42f7-b024-6aac5c172656 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.764 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.780 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.802 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG oslo_concurrency.lockutils [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.803 243456 DEBUG nova.compute.manager [req-9def961b-b51d-442b-8b63-d4ba94904355 req-6c887afd-bb9d-43e0-bce2-0e7ef50e2406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-unplugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.854 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.860 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.860 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating image(s)
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.884 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.915 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.939 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:05 compute-0 nova_compute[243452]: 2026-02-28 10:21:05.942 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.001 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.002 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.002 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.003 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.003 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.005 243456 INFO nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Terminating instance
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.005 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.006 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquired lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.006 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.008 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.009 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.009 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.010 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.033 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.037 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.102 243456 DEBUG nova.policy [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f18b63d43ee24e59bdff962c9a727213', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.317 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.338 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 358 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 4.0 MiB/s wr, 144 op/s
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.424 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] resizing rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:21:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3312576475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.535 243456 DEBUG nova.objects.instance [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'migration_context' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.547 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Ensure instance console log exists: /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.548 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.549 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.583 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updated VIF entry in instance network info cache for port 8f265ce7-668d-4462-8ac4-a9487fc7d3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.584 243456 DEBUG nova.network.neutron [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [{"id": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "address": "fa:16:3e:cb:75:fe", "network": {"id": "37214c09-5017-4e54-bd29-785084655f44", "bridge": "br-int", "label": "tempest-network-smoke--1165805221", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dab7067181d43f1acb702fce4ca882c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f265ce7-66", "ovs_interfaceid": "8f265ce7-668d-4462-8ac4-a9487fc7d3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.605 243456 DEBUG oslo_concurrency.lockutils [req-687ab859-cbba-4e5c-8e40-ad42413e22e5 req-3a2d292e-ab30-4046-86c6-62eebd5a1e72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c4a33511-0908-4787-82f4-79505aa9d436" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.696 243456 DEBUG nova.network.neutron [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.719 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Releasing lock "refresh_cache-98150245-079d-43f8-bbd9-3d12a8f26719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.720 243456 DEBUG nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:21:06 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 28 10:21:06 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Consumed 2.379s CPU time.
Feb 28 10:21:06 compute-0 systemd-machined[209480]: Machine qemu-130-instance-00000065 terminated.
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.826 243456 DEBUG nova.network.neutron [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.854 243456 INFO nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Took 1.19 seconds to deallocate network for instance.
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.910 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.944 243456 INFO nova.virt.libvirt.driver [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance destroyed successfully.
Feb 28 10:21:06 compute-0 nova_compute[243452]: 2026-02-28 10:21:06.945 243456 DEBUG nova.objects.instance [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lazy-loading 'resources' on Instance uuid 98150245-079d-43f8-bbd9-3d12a8f26719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.029 243456 DEBUG oslo_concurrency.processutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.068 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Successfully created port: a920b0c3-c6cf-44d3-9a22-40eda0e09078 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.119 243456 DEBUG nova.compute.manager [req-a027fc06-4800-44a1-a8f9-208f9d1fe81a req-2fd1a55c-7a6a-4813-b1d8-2ceb5b3ebc9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-deleted-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.294 243456 INFO nova.virt.libvirt.driver [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deleting instance files /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.296 243456 INFO nova.virt.libvirt.driver [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deletion of /var/lib/nova/instances/98150245-079d-43f8-bbd9-3d12a8f26719_del complete
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.357 243456 INFO nova.compute.manager [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.358 243456 DEBUG oslo.service.loopingcall [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.359 243456 DEBUG nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.360 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:21:07 compute-0 ceph-mon[76304]: pgmap v1670: 305 pgs: 305 active+clean; 358 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 4.0 MiB/s wr, 144 op/s
Feb 28 10:21:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983807347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.553 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.567 243456 DEBUG nova.network.neutron [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.577 243456 DEBUG oslo_concurrency.processutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.583 243456 DEBUG nova.compute.provider_tree [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.613 243456 DEBUG nova.scheduler.client.report [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.621 243456 INFO nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Took 0.26 seconds to deallocate network for instance.
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.649 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.675 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.676 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.679 243456 INFO nova.scheduler.client.report [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Deleted allocations for instance c4a33511-0908-4787-82f4-79505aa9d436
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.748 243456 DEBUG oslo_concurrency.lockutils [None req-e13e26f5-aedb-4837-8dde-868e96c02b93 d000e26b1aaf4a60bd2c928412e59ca5 1dab7067181d43f1acb702fce4ca882c - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.761 243456 DEBUG oslo_concurrency.processutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.808 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Successfully updated port: a920b0c3-c6cf-44d3-9a22-40eda0e09078 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.829 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.829 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.830 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.881 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.882 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c4a33511-0908-4787-82f4-79505aa9d436-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.883 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.883 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c4a33511-0908-4787-82f4-79505aa9d436-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.884 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] No waiting events found dispatching network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.884 243456 WARNING nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Received unexpected event network-vif-plugged-8f265ce7-668d-4462-8ac4-a9487fc7d3cd for instance with vm_state deleted and task_state None.
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.885 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.885 243456 DEBUG nova.compute.manager [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.886 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:07 compute-0 nova_compute[243452]: 2026-02-28 10:21:07.943 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:21:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246148993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.344 243456 DEBUG oslo_concurrency.processutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.352 243456 DEBUG nova.compute.provider_tree [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.370 243456 DEBUG nova.scheduler.client.report [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 325 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 191 op/s
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.406 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.463 243456 INFO nova.scheduler.client.report [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Deleted allocations for instance 98150245-079d-43f8-bbd9-3d12a8f26719
Feb 28 10:21:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3983807347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3246148993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:08 compute-0 ceph-mon[76304]: pgmap v1671: 305 pgs: 305 active+clean; 325 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 191 op/s
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.550 243456 DEBUG oslo_concurrency.lockutils [None req-1e7fe761-bf7c-4ada-af5a-5e55059c45b6 17ee551657ed4a4c8a2f040ff863ad9a 124d8457f7e342f1ab81af27d8c3ba3a - - default default] Lock "98150245-079d-43f8-bbd9-3d12a8f26719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:08 compute-0 nova_compute[243452]: 2026-02-28 10:21:08.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.528 243456 DEBUG nova.network.neutron [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.546 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.547 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance network_info: |[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.547 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.548 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.553 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start _get_guest_xml network_info=[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.559 243456 WARNING nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.565 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.566 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.575 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.576 243456 DEBUG nova.virt.libvirt.host [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.576 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.577 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.577 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.578 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.579 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.580 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.580 243456 DEBUG nova.virt.hardware [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:21:09 compute-0 nova_compute[243452]: 2026-02-28 10:21:09.584 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674454693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.148 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2674454693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.183 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.191 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 321 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Feb 28 10:21:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288084726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.745 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.747 243456 DEBUG nova.virt.libvirt.vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name
='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:05Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.748 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.749 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.750 243456 DEBUG nova.objects.instance [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.770 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <uuid>ec785d5e-9b62-4b52-a727-f64173b4b853</uuid>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <name>instance-00000066</name>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1346326288</nova:name>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:21:09</nova:creationTime>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:user uuid="f18b63d43ee24e59bdff962c9a727213">tempest-ServerRescueTestJSONUnderV235-749971841-project-member</nova:user>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:project uuid="14500a4ea1d94c0e9c58b076f5c918b5">tempest-ServerRescueTestJSONUnderV235-749971841</nova:project>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <nova:port uuid="a920b0c3-c6cf-44d3-9a22-40eda0e09078">
Feb 28 10:21:10 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <system>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="serial">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="uuid">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </system>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <os>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </os>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <features>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </features>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk">
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config">
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:10 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:a2:a9:65"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <target dev="tapa920b0c3-c6"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log" append="off"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <video>
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </video>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:21:10 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:21:10 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:21:10 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:21:10 compute-0 nova_compute[243452]: </domain>
Feb 28 10:21:10 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Preparing to wait for external event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.772 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.773 243456 DEBUG nova.virt.libvirt.vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:05Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.774 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.774 243456 DEBUG nova.network.os_vif_util [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.775 243456 DEBUG os_vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.782 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa920b0c3-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.782 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa920b0c3-c6, col_values=(('external_ids', {'iface-id': 'a920b0c3-c6cf-44d3-9a22-40eda0e09078', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:a9:65', 'vm-uuid': 'ec785d5e-9b62-4b52-a727-f64173b4b853'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:10 compute-0 NetworkManager[49805]: <info>  [1772274070.7853] manager: (tapa920b0c3-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.791 243456 INFO os_vif [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6')
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.838 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No VIF found with MAC fa:16:3e:a2:a9:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.839 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Using config drive
Feb 28 10:21:10 compute-0 nova_compute[243452]: 2026-02-28 10:21:10.862 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.169 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.170 243456 DEBUG nova.network.neutron [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.185 243456 DEBUG oslo_concurrency.lockutils [req-b7542c55-135d-401c-8b66-f813eac09f89 req-498acf68-e49d-40d0-b748-c82eae5800b4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:11 compute-0 ceph-mon[76304]: pgmap v1672: 305 pgs: 305 active+clean; 321 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Feb 28 10:21:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2288084726' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.320 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating config drive at /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.323 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn2ckyxj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.462 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn2ckyxj7" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.486 243456 DEBUG nova.storage.rbd_utils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.489 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.618 243456 DEBUG oslo_concurrency.processutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.620 243456 INFO nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting local config drive /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config because it was imported into RBD.
Feb 28 10:21:11 compute-0 kernel: tapa920b0c3-c6: entered promiscuous mode
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:11 compute-0 ovn_controller[146846]: 2026-02-28T10:21:11Z|01029|binding|INFO|Claiming lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 for this chassis.
Feb 28 10:21:11 compute-0 ovn_controller[146846]: 2026-02-28T10:21:11Z|01030|binding|INFO|a920b0c3-c6cf-44d3-9a22-40eda0e09078: Claiming fa:16:3e:a2:a9:65 10.100.0.2
Feb 28 10:21:11 compute-0 NetworkManager[49805]: <info>  [1772274071.6786] manager: (tapa920b0c3-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Feb 28 10:21:11 compute-0 ovn_controller[146846]: 2026-02-28T10:21:11Z|01031|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 ovn-installed in OVS
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:11 compute-0 ovn_controller[146846]: 2026-02-28T10:21:11Z|01032|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 up in Southbound
Feb 28 10:21:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.687 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.689 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 bound to our chassis
Feb 28 10:21:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.689 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:11 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:11.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba604f51-4d26-4811-8b32-4a27181de233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:11 compute-0 systemd-machined[209480]: New machine qemu-131-instance-00000066.
Feb 28 10:21:11 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000066.
Feb 28 10:21:11 compute-0 systemd-udevd[332457]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:21:11 compute-0 NetworkManager[49805]: <info>  [1772274071.7528] device (tapa920b0c3-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:21:11 compute-0 NetworkManager[49805]: <info>  [1772274071.7533] device (tapa920b0c3-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:21:11 compute-0 ovn_controller[146846]: 2026-02-28T10:21:11Z|01033|binding|INFO|Releasing lport ec441ae8-7dea-4a06-ba6a-57dcbc67001f from this chassis (sb_readonly=0)
Feb 28 10:21:11 compute-0 nova_compute[243452]: 2026-02-28 10:21:11.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.006 243456 DEBUG nova.compute.manager [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.007 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.007 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.008 243456 DEBUG oslo_concurrency.lockutils [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.009 243456 DEBUG nova.compute.manager [req-f2c707b6-c077-4cc9-8ebd-ef6d11899594 req-91907f9b-8bcc-49e4-960b-e2df0e935f7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Processing event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.155 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1553555, ec785d5e-9b62-4b52-a727-f64173b4b853 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.156 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Started (Lifecycle Event)
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.159 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.162 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.166 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance spawned successfully.
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.167 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.195 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.198 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.207 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.208 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.209 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.209 243456 DEBUG nova.virt.libvirt.driver [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.262 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1556273, ec785d5e-9b62-4b52-a727-f64173b4b853 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.262 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Paused (Lifecycle Event)
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.296 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.300 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274072.1614273, ec785d5e-9b62-4b52-a727-f64173b4b853 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.300 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Resumed (Lifecycle Event)
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.306 243456 INFO nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 6.45 seconds to spawn the instance on the hypervisor.
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.306 243456 DEBUG nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.335 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.342 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.365 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.382 243456 INFO nova.compute.manager [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 7.62 seconds to build instance.
Feb 28 10:21:12 compute-0 nova_compute[243452]: 2026-02-28 10:21:12.404 243456 DEBUG oslo_concurrency.lockutils [None req-67ad9f12-41b0-42ed-98c7-77603d5965d6 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:13 compute-0 ceph-mon[76304]: pgmap v1673: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Feb 28 10:21:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.823 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.824 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.825 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.826 243456 INFO nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Terminating instance
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.828 243456 DEBUG nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:21:13 compute-0 kernel: tap07b4c83e-2f (unregistering): left promiscuous mode
Feb 28 10:21:13 compute-0 NetworkManager[49805]: <info>  [1772274073.8773] device (tap07b4c83e-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:21:13 compute-0 ovn_controller[146846]: 2026-02-28T10:21:13Z|01034|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=0)
Feb 28 10:21:13 compute-0 ovn_controller[146846]: 2026-02-28T10:21:13Z|01035|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down in Southbound
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:13 compute-0 ovn_controller[146846]: 2026-02-28T10:21:13Z|01036|binding|INFO|Removing iface tap07b4c83e-2f ovn-installed in OVS
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.894 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.896 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis
Feb 28 10:21:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.897 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:21:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66ff4ab7-3bba-47e8-89bc-21f225418246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:13.903 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb namespace which is not needed anymore
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:13 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000061.scope: Deactivated successfully.
Feb 28 10:21:13 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000061.scope: Consumed 15.298s CPU time.
Feb 28 10:21:13 compute-0 systemd-machined[209480]: Machine qemu-121-instance-00000061 terminated.
Feb 28 10:21:13 compute-0 nova_compute[243452]: 2026-02-28 10:21:13.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : haproxy version is 2.8.14-c23fe91
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [NOTICE]   (327852) : path to executable is /usr/sbin/haproxy
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : Exiting Master process...
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : Exiting Master process...
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [ALERT]    (327852) : Current worker (327854) exited with code 143 (Terminated)
Feb 28 10:21:14 compute-0 neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb[327846]: [WARNING]  (327852) : All workers exited. Exiting... (0)
Feb 28 10:21:14 compute-0 systemd[1]: libpod-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope: Deactivated successfully.
Feb 28 10:21:14 compute-0 podman[332530]: 2026-02-28 10:21:14.037959922 +0000 UTC m=+0.041924391 container died 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:21:14 compute-0 NetworkManager[49805]: <info>  [1772274074.0466] manager: (tap07b4c83e-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Feb 28 10:21:14 compute-0 kernel: tap07b4c83e-2f: entered promiscuous mode
Feb 28 10:21:14 compute-0 kernel: tap07b4c83e-2f (unregistering): left promiscuous mode
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01037|binding|INFO|Claiming lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb for this chassis.
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01038|binding|INFO|07b4c83e-2fe2-42c9-a758-c50ddf0919fb: Claiming fa:16:3e:7d:73:58 10.100.0.9
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.060 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.067 243456 INFO nova.virt.libvirt.driver [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Instance destroyed successfully.
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.068 243456 DEBUG nova.objects.instance [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid e4349bd8-727a-4533-9edd-b2d54353a617 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01039|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb ovn-installed in OVS
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01040|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb up in Southbound
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01041|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=1)
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01042|binding|INFO|Removing iface tap07b4c83e-2f ovn-installed in OVS
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01043|if_status|INFO|Dropped 1 log messages in last 30 seconds (most recently, 30 seconds ago) due to excessive rate
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01044|if_status|INFO|Not setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down as sb is readonly
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c-userdata-shm.mount: Deactivated successfully.
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01045|binding|INFO|Releasing lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb from this chassis (sb_readonly=0)
Feb 28 10:21:14 compute-0 ovn_controller[146846]: 2026-02-28T10:21:14Z|01046|binding|INFO|Setting lport 07b4c83e-2fe2-42c9-a758-c50ddf0919fb down in Southbound
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.085 243456 DEBUG nova.virt.libvirt.vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:19:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-900188454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=97,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNIQnNUL9XDPDLTHZklFyfDZYSt9AYWyQxG0r/+WdFB7vyQw7J2acteYFQFpxgWWQ/0J0kXyBJr3KJn/hEaVogEETmejopRSrT8PjwOvTjZAhY243jVoswANs6Qv2qSuA==',key_name='tempest-TestSecurityGroupsBasicOps-146961595',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:19:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7bnrc9wi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:19:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=e4349bd8-727a-4533-9edd-b2d54353a617,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.085 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:73:58 10.100.0.9'], port_security=['fa:16:3e:7d:73:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e4349bd8-727a-4533-9edd-b2d54353a617', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0184ef1a-f67e-49ee-b39d-03bdf1995f2e ec08cc57-dbf0-4da0-80ed-116af2ee764f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f39672a-6e8d-45c3-9a0d-437ee1b4e09f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=07b4c83e-2fe2-42c9-a758-c50ddf0919fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.085 243456 DEBUG nova.network.os_vif_util [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.086 243456 DEBUG nova.network.os_vif_util [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.086 243456 DEBUG os_vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.089 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07b4c83e-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-835d604bdbc864696e0b1086b090dae849a757f07b6c5ac638b1186ab31417bd-merged.mount: Deactivated successfully.
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.099 243456 INFO os_vif [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:73:58,bridge_name='br-int',has_traffic_filtering=True,id=07b4c83e-2fe2-42c9-a758-c50ddf0919fb,network=Network(df5b5e58-da82-40fd-b4b8-660edea3cecb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07b4c83e-2f')
Feb 28 10:21:14 compute-0 podman[332530]: 2026-02-28 10:21:14.102260298 +0000 UTC m=+0.106224617 container cleanup 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:21:14 compute-0 systemd[1]: libpod-conmon-37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c.scope: Deactivated successfully.
Feb 28 10:21:14 compute-0 podman[332548]: 2026-02-28 10:21:14.137048522 +0000 UTC m=+0.073472262 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:21:14 compute-0 podman[332557]: 2026-02-28 10:21:14.166917864 +0000 UTC m=+0.101122340 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 28 10:21:14 compute-0 podman[332592]: 2026-02-28 10:21:14.173576966 +0000 UTC m=+0.049888871 container remove 37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.180 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acf59703-4eae-4e94-819a-e77d61ad86e7]: (4, ('Sat Feb 28 10:21:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb (37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c)\n37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c\nSat Feb 28 10:21:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb (37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c)\n37919b62aecf27a3cf53c0db55ff375e052d9b4bda5781a8d7b2dc71374a0b7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.182 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8553538-2381-4a4f-81a8-f4e15ba41b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.183 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf5b5e58-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 kernel: tapdf5b5e58-d0: left promiscuous mode
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cb342b-fd42-4046-b8eb-baa55c3d8ef3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.207 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[18e094d3-318a-47e8-be95-223c6ecaf6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.208 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c81a0c-10d9-40bb-a4f0-aa624bcb7a14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0ace15-b6a9-49ac-9220-0f7e6b5d08cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549928, 'reachable_time': 29601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332637, 'error': None, 'target': 'ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 systemd[1]: run-netns-ovnmeta\x2ddf5b5e58\x2dda82\x2d40fd\x2db4b8\x2d660edea3cecb.mount: Deactivated successfully.
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.229 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df5b5e58-da82-40fd-b4b8-660edea3cecb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.229 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[63cb1bc2-cf52-45b1-92ad-bf5b2892b917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.231 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.232 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.233 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cc5f60-f46d-47be-8182-6f92537dce1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.234 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb in datapath df5b5e58-da82-40fd-b4b8-660edea3cecb unbound from our chassis
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.235 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df5b5e58-da82-40fd-b4b8-660edea3cecb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:21:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:14.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbc6f57-e999-4cfc-8bc3-be0873ef0dab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.285 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.286 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 WARNING nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state active and task_state None.
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.287 243456 DEBUG nova.compute.manager [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing instance network info cache due to event network-changed-07b4c83e-2fe2-42c9-a758-c50ddf0919fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.288 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Refreshing network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.356 243456 INFO nova.virt.libvirt.driver [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deleting instance files /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617_del
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.357 243456 INFO nova.virt.libvirt.driver [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deletion of /var/lib/nova/instances/e4349bd8-727a-4533-9edd-b2d54353a617_del complete
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.363 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.364 243456 DEBUG oslo_concurrency.lockutils [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.365 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.365 243456 DEBUG nova.compute.manager [req-522141ed-9d1d-4f94-b5ac-4c2455017f53 req-c8132fb1-9562-4ae1-a56a-1316a55fb539 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-unplugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:21:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.424 243456 INFO nova.compute.manager [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG oslo.service.loopingcall [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.425 243456 DEBUG nova.network.neutron [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.449 243456 INFO nova.compute.manager [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Rescuing
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.450 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.450 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:14 compute-0 nova_compute[243452]: 2026-02-28 10:21:14.451 243456 DEBUG nova.network.neutron [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:21:15 compute-0 nova_compute[243452]: 2026-02-28 10:21:15.371 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updated VIF entry in instance network info cache for port 07b4c83e-2fe2-42c9-a758-c50ddf0919fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:15 compute-0 nova_compute[243452]: 2026-02-28 10:21:15.373 243456 DEBUG nova.network.neutron [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [{"id": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "address": "fa:16:3e:7d:73:58", "network": {"id": "df5b5e58-da82-40fd-b4b8-660edea3cecb", "bridge": "br-int", "label": "tempest-network-smoke--2117845100", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07b4c83e-2f", "ovs_interfaceid": "07b4c83e-2fe2-42c9-a758-c50ddf0919fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:15 compute-0 nova_compute[243452]: 2026-02-28 10:21:15.394 243456 DEBUG oslo_concurrency.lockutils [req-f38a3944-f03e-45cf-b273-7ab156bdbf2c req-e680ae01-5ed5-4db8-be5e-76e3d28b2897 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-e4349bd8-727a-4533-9edd-b2d54353a617" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:15 compute-0 ceph-mon[76304]: pgmap v1674: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.140 243456 DEBUG nova.network.neutron [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.170 243456 INFO nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Took 1.74 seconds to deallocate network for instance.
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.227 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.228 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.342 243456 DEBUG nova.network.neutron [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.368 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 224 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.538 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.539 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 WARNING nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state deleted and task_state None.
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.540 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG oslo_concurrency.lockutils [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] No waiting events found dispatching network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 WARNING nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received unexpected event network-vif-plugged-07b4c83e-2fe2-42c9-a758-c50ddf0919fb for instance with vm_state deleted and task_state None.
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.541 243456 DEBUG nova.compute.manager [req-eab06c7d-d872-463a-94d1-8afa74f8adba req-31b0f443-edf6-4498-b3db-e712468f6356 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Received event network-vif-deleted-07b4c83e-2fe2-42c9-a758-c50ddf0919fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.636 243456 DEBUG oslo_concurrency.processutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:16 compute-0 nova_compute[243452]: 2026-02-28 10:21:16.749 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:21:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573961724' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.186 243456 DEBUG oslo_concurrency.processutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.192 243456 DEBUG nova.compute.provider_tree [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.209 243456 DEBUG nova.scheduler.client.report [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.234 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.262 243456 INFO nova.scheduler.client.report [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance e4349bd8-727a-4533-9edd-b2d54353a617
Feb 28 10:21:17 compute-0 nova_compute[243452]: 2026-02-28 10:21:17.330 243456 DEBUG oslo_concurrency.lockutils [None req-896dbe0b-bd22-4ea3-86dd-726c4f27f821 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "e4349bd8-727a-4533-9edd-b2d54353a617" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:17 compute-0 ceph-mon[76304]: pgmap v1675: 305 pgs: 305 active+clean; 224 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Feb 28 10:21:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2573961724' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:18 compute-0 nova_compute[243452]: 2026-02-28 10:21:18.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:18.365 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:18 compute-0 nova_compute[243452]: 2026-02-28 10:21:18.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:18.367 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:21:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 200 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Feb 28 10:21:18 compute-0 ceph-mon[76304]: pgmap v1676: 305 pgs: 305 active+clean; 200 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Feb 28 10:21:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:18 compute-0 nova_compute[243452]: 2026-02-28 10:21:18.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:19 compute-0 nova_compute[243452]: 2026-02-28 10:21:19.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:19 compute-0 nova_compute[243452]: 2026-02-28 10:21:19.760 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:20 compute-0 nova_compute[243452]: 2026-02-28 10:21:20.254 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274065.2520442, c4a33511-0908-4787-82f4-79505aa9d436 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:20 compute-0 nova_compute[243452]: 2026-02-28 10:21:20.255 243456 INFO nova.compute.manager [-] [instance: c4a33511-0908-4787-82f4-79505aa9d436] VM Stopped (Lifecycle Event)
Feb 28 10:21:20 compute-0 nova_compute[243452]: 2026-02-28 10:21:20.278 243456 DEBUG nova.compute.manager [None req-21b203dd-f74f-4455-9189-1a484c9c647c - - - - - -] [instance: c4a33511-0908-4787-82f4-79505aa9d436] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:20.370 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 173 op/s
Feb 28 10:21:20 compute-0 ceph-mon[76304]: pgmap v1677: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 173 op/s
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.938 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274066.9368863, 98150245-079d-43f8-bbd9-3d12a8f26719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.939 243456 INFO nova.compute.manager [-] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] VM Stopped (Lifecycle Event)
Feb 28 10:21:21 compute-0 nova_compute[243452]: 2026-02-28 10:21:21.959 243456 DEBUG nova.compute.manager [None req-4ee28fe9-5307-412c-9594-c8c131221a3d - - - - - -] [instance: 98150245-079d-43f8-bbd9-3d12a8f26719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:22 compute-0 nova_compute[243452]: 2026-02-28 10:21:22.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 314 KiB/s wr, 145 op/s
Feb 28 10:21:22 compute-0 sudo[332663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:21:22 compute-0 sudo[332663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:22 compute-0 sudo[332663]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:22 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 10:21:22 compute-0 sudo[332688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:21:22 compute-0 sudo[332688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:23 compute-0 sudo[332688]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:21:23 compute-0 sudo[332743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:21:23 compute-0 nova_compute[243452]: 2026-02-28 10:21:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:23 compute-0 sudo[332743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:23 compute-0 sudo[332743]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:23 compute-0 sudo[332768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:21:23 compute-0 sudo[332768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:23 compute-0 ceph-mon[76304]: pgmap v1678: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 314 KiB/s wr, 145 op/s
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:21:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.605305978 +0000 UTC m=+0.046948696 container create 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:21:23 compute-0 systemd[1]: Started libpod-conmon-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope.
Feb 28 10:21:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.575441036 +0000 UTC m=+0.017083744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.692429843 +0000 UTC m=+0.134072551 container init 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.697241272 +0000 UTC m=+0.138883950 container start 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.700778044 +0000 UTC m=+0.142420742 container attach 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:21:23 compute-0 romantic_colden[332822]: 167 167
Feb 28 10:21:23 compute-0 systemd[1]: libpod-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope: Deactivated successfully.
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.702528824 +0000 UTC m=+0.144171502 container died 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-50d8d129f69e9d21db5f72fb98024947df6c1de556072f6c3789b9cd3ad703c5-merged.mount: Deactivated successfully.
Feb 28 10:21:23 compute-0 podman[332805]: 2026-02-28 10:21:23.747150202 +0000 UTC m=+0.188792920 container remove 2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_colden, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:21:23 compute-0 systemd[1]: libpod-conmon-2400d91611b25dca50caffddc0f895d33ba1ef013a0d94afa36bca0f2c2586c8.scope: Deactivated successfully.
Feb 28 10:21:23 compute-0 podman[332845]: 2026-02-28 10:21:23.9261936 +0000 UTC m=+0.064749880 container create 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:21:23 compute-0 systemd[1]: Started libpod-conmon-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope.
Feb 28 10:21:23 compute-0 nova_compute[243452]: 2026-02-28 10:21:23.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:23 compute-0 podman[332845]: 2026-02-28 10:21:23.896957796 +0000 UTC m=+0.035514066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:24 compute-0 podman[332845]: 2026-02-28 10:21:24.030313045 +0000 UTC m=+0.168869305 container init 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:21:24 compute-0 podman[332845]: 2026-02-28 10:21:24.03739946 +0000 UTC m=+0.175955700 container start 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:21:24 compute-0 podman[332845]: 2026-02-28 10:21:24.040842819 +0000 UTC m=+0.179399069 container attach 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:21:24 compute-0 nova_compute[243452]: 2026-02-28 10:21:24.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:24 compute-0 nova_compute[243452]: 2026-02-28 10:21:24.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:24 compute-0 nova_compute[243452]: 2026-02-28 10:21:24.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 28 10:21:24 compute-0 laughing_poitras[332861]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:21:24 compute-0 laughing_poitras[332861]: --> All data devices are unavailable
Feb 28 10:21:24 compute-0 systemd[1]: libpod-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope: Deactivated successfully.
Feb 28 10:21:24 compute-0 podman[332845]: 2026-02-28 10:21:24.495863963 +0000 UTC m=+0.634420233 container died 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2016cd9257489d62598b7a3dcc79c8d13fc12ee00b7c6d61599f61a04aeac9-merged.mount: Deactivated successfully.
Feb 28 10:21:24 compute-0 podman[332845]: 2026-02-28 10:21:24.630498029 +0000 UTC m=+0.769054289 container remove 219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:21:24 compute-0 systemd[1]: libpod-conmon-219d63b3b62079ca40ffd9dc42abb76959ee7981e43feb775b04c6df7db472e0.scope: Deactivated successfully.
Feb 28 10:21:24 compute-0 sudo[332768]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:24 compute-0 sudo[332893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:21:24 compute-0 sudo[332893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:24 compute-0 sudo[332893]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:24 compute-0 sudo[332918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:21:24 compute-0 sudo[332918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.080902978 +0000 UTC m=+0.060984941 container create 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.040328307 +0000 UTC m=+0.020410280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:25 compute-0 systemd[1]: Started libpod-conmon-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope.
Feb 28 10:21:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.306974534 +0000 UTC m=+0.287056567 container init 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.316845258 +0000 UTC m=+0.296927251 container start 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:21:25 compute-0 romantic_booth[332971]: 167 167
Feb 28 10:21:25 compute-0 systemd[1]: libpod-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope: Deactivated successfully.
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.354452374 +0000 UTC m=+0.334534407 container attach 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.35502027 +0000 UTC m=+0.335102263 container died 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ab46d8f9f2003216e37bcea31ac514457aa9875a87d8b26c44d74068173003-merged.mount: Deactivated successfully.
Feb 28 10:21:25 compute-0 ceph-mon[76304]: pgmap v1679: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 28 10:21:25 compute-0 podman[332955]: 2026-02-28 10:21:25.500269403 +0000 UTC m=+0.480351346 container remove 5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:21:25 compute-0 systemd[1]: libpod-conmon-5a2aa464035d75300948f1f9b71f104ff1b0d1f9b7ac59929c956631c38f185f.scope: Deactivated successfully.
Feb 28 10:21:25 compute-0 podman[332997]: 2026-02-28 10:21:25.639124271 +0000 UTC m=+0.022974134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:25 compute-0 podman[332997]: 2026-02-28 10:21:25.800840869 +0000 UTC m=+0.184690772 container create 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:21:25 compute-0 systemd[1]: Started libpod-conmon-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope.
Feb 28 10:21:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:26 compute-0 podman[332997]: 2026-02-28 10:21:26.010891662 +0000 UTC m=+0.394741545 container init 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:21:26 compute-0 podman[332997]: 2026-02-28 10:21:26.019708286 +0000 UTC m=+0.403558159 container start 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:21:26 compute-0 podman[332997]: 2026-02-28 10:21:26.024261377 +0000 UTC m=+0.408111290 container attach 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]: {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     "0": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "devices": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "/dev/loop3"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             ],
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_name": "ceph_lv0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_size": "21470642176",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "name": "ceph_lv0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "tags": {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_name": "ceph",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.crush_device_class": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.encrypted": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.objectstore": "bluestore",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_id": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.vdo": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.with_tpm": "0"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             },
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "vg_name": "ceph_vg0"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         }
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     ],
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     "1": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "devices": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "/dev/loop4"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             ],
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_name": "ceph_lv1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_size": "21470642176",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "name": "ceph_lv1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "tags": {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_name": "ceph",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.crush_device_class": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.encrypted": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.objectstore": "bluestore",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_id": "1",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.vdo": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.with_tpm": "0"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             },
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "vg_name": "ceph_vg1"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         }
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     ],
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     "2": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "devices": [
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "/dev/loop5"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             ],
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_name": "ceph_lv2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_size": "21470642176",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "name": "ceph_lv2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "tags": {
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.cluster_name": "ceph",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.crush_device_class": "",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.encrypted": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.objectstore": "bluestore",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osd_id": "2",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.vdo": "0",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:                 "ceph.with_tpm": "0"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             },
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "type": "block",
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:             "vg_name": "ceph_vg2"
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:         }
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]:     ]
Feb 28 10:21:26 compute-0 pedantic_dirac[333012]: }
Feb 28 10:21:26 compute-0 nova_compute[243452]: 2026-02-28 10:21:26.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:26 compute-0 systemd[1]: libpod-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope: Deactivated successfully.
Feb 28 10:21:26 compute-0 podman[332997]: 2026-02-28 10:21:26.352314536 +0000 UTC m=+0.736164409 container died 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4425f153f629a57e11cbb6fafce09cf0736031329f12f1f36bd31f18b2e57b1-merged.mount: Deactivated successfully.
Feb 28 10:21:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 223 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Feb 28 10:21:26 compute-0 podman[332997]: 2026-02-28 10:21:26.398490679 +0000 UTC m=+0.782340552 container remove 873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:21:26 compute-0 systemd[1]: libpod-conmon-873721ec326fb27a581b6eae48e52b2b565ae7d2690d67ef1c215bcf2b8ee00a.scope: Deactivated successfully.
Feb 28 10:21:26 compute-0 sudo[332918]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:26 compute-0 ceph-mon[76304]: pgmap v1680: 305 pgs: 305 active+clean; 223 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Feb 28 10:21:26 compute-0 sudo[333035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:21:26 compute-0 sudo[333035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:26 compute-0 sudo[333035]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:26 compute-0 sudo[333060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:21:26 compute-0 sudo[333060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.788436104 +0000 UTC m=+0.041842919 container create 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:21:26 compute-0 nova_compute[243452]: 2026-02-28 10:21:26.803 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:21:26 compute-0 systemd[1]: Started libpod-conmon-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope.
Feb 28 10:21:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.772108463 +0000 UTC m=+0.025515278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.876156686 +0000 UTC m=+0.129563481 container init 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.883487918 +0000 UTC m=+0.136894723 container start 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:21:26 compute-0 vigorous_haslett[333113]: 167 167
Feb 28 10:21:26 compute-0 systemd[1]: libpod-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope: Deactivated successfully.
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.90745514 +0000 UTC m=+0.160861955 container attach 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.908442278 +0000 UTC m=+0.161849083 container died 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad8f14d99fa5e12c0253c426bc30a6a496e3f63a0d493bf57a81ac000ff7d45-merged.mount: Deactivated successfully.
Feb 28 10:21:26 compute-0 podman[333097]: 2026-02-28 10:21:26.974847215 +0000 UTC m=+0.228254010 container remove 4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:21:26 compute-0 systemd[1]: libpod-conmon-4f92dde3def6ec13b8d1b0c253f7cd2a035ad7e42b0c8410dd33bf1865281b45.scope: Deactivated successfully.
Feb 28 10:21:27 compute-0 podman[333139]: 2026-02-28 10:21:27.132630929 +0000 UTC m=+0.045930857 container create fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 10:21:27 compute-0 systemd[1]: Started libpod-conmon-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope.
Feb 28 10:21:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:27 compute-0 podman[333139]: 2026-02-28 10:21:27.109936554 +0000 UTC m=+0.023236522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:27 compute-0 podman[333139]: 2026-02-28 10:21:27.237858996 +0000 UTC m=+0.151159034 container init fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:21:27 compute-0 podman[333139]: 2026-02-28 10:21:27.246134405 +0000 UTC m=+0.159434343 container start fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:21:27 compute-0 podman[333139]: 2026-02-28 10:21:27.250675126 +0000 UTC m=+0.163975144 container attach fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:21:27 compute-0 nova_compute[243452]: 2026-02-28 10:21:27.320 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:27 compute-0 lvm[333234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:21:27 compute-0 lvm[333234]: VG ceph_vg1 finished
Feb 28 10:21:27 compute-0 lvm[333233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:21:27 compute-0 lvm[333233]: VG ceph_vg0 finished
Feb 28 10:21:27 compute-0 lvm[333236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:21:27 compute-0 lvm[333236]: VG ceph_vg2 finished
Feb 28 10:21:28 compute-0 ecstatic_kepler[333155]: {}
Feb 28 10:21:28 compute-0 systemd[1]: libpod-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Deactivated successfully.
Feb 28 10:21:28 compute-0 podman[333139]: 2026-02-28 10:21:28.039363541 +0000 UTC m=+0.952663519 container died fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:21:28 compute-0 systemd[1]: libpod-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Consumed 1.138s CPU time.
Feb 28 10:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c95de49c52439a7bad64eeb9a028b4ab1190554a94ba9cb07de6636e30095d2-merged.mount: Deactivated successfully.
Feb 28 10:21:28 compute-0 podman[333139]: 2026-02-28 10:21:28.08404254 +0000 UTC m=+0.997342498 container remove fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_kepler, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:21:28 compute-0 systemd[1]: libpod-conmon-fed93f33bdab4fcf377facd8994a32a88d1e6beca02f316ac2abd838695f2263.scope: Deactivated successfully.
Feb 28 10:21:28 compute-0 sudo[333060]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:21:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:21:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:28 compute-0 sudo[333250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:21:28 compute-0 sudo[333250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:21:28 compute-0 sudo[333250]: pam_unix(sudo:session): session closed for user root
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.347 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 28 10:21:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:28 compute-0 nova_compute[243452]: 2026-02-28 10:21:28.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.065 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274074.0623562, e4349bd8-727a-4533-9edd-b2d54353a617 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.065 243456 INFO nova.compute.manager [-] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] VM Stopped (Lifecycle Event)
Feb 28 10:21:29 compute-0 kernel: tapa920b0c3-c6 (unregistering): left promiscuous mode
Feb 28 10:21:29 compute-0 NetworkManager[49805]: <info>  [1772274089.0788] device (tapa920b0c3-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:21:29 compute-0 ovn_controller[146846]: 2026-02-28T10:21:29Z|01047|binding|INFO|Releasing lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 from this chassis (sb_readonly=0)
Feb 28 10:21:29 compute-0 ovn_controller[146846]: 2026-02-28T10:21:29Z|01048|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 down in Southbound
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 ovn_controller[146846]: 2026-02-28T10:21:29Z|01049|binding|INFO|Removing iface tapa920b0c3-c6 ovn-installed in OVS
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.091 243456 DEBUG nova.compute.manager [None req-32049b83-c46e-48bf-a77c-cce9a3429e69 - - - - - -] [instance: e4349bd8-727a-4533-9edd-b2d54353a617] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:21:29
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:21:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:21:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups']
Feb 28 10:21:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:21:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.118 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.120 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 unbound from our chassis
Feb 28 10:21:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.121 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:21:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:29.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[448be03c-3e16-42e8-83af-a40107fe1c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:29 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 28 10:21:29 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Consumed 12.152s CPU time.
Feb 28 10:21:29 compute-0 systemd-machined[209480]: Machine qemu-131-instance-00000066 terminated.
Feb 28 10:21:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:21:29 compute-0 ceph-mon[76304]: pgmap v1681: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.311 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.822 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance shutdown successfully after 13 seconds.
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.832 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.833 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.856 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Attempting rescue
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.858 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.865 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.866 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating image(s)
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.902 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.906 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.958 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:29 compute-0 nova_compute[243452]: 2026-02-28 10:21:29.997 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.003 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.083 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.084 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.085 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.085 243456 DEBUG oslo_concurrency.lockutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.114 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.120 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.387 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.388 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'migration_context' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.414 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.415 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start _get_guest_xml network_info=[{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.415 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'resources' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.433 243456 WARNING nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.439 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.441 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.445 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.libvirt.host [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.446 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.447 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.448 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.virt.hardware [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.449 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.467 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.529 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.546 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.547 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.548 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.602 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.603 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.603 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.604 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.605 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:21:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:21:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063165600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.985 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:30 compute-0 nova_compute[243452]: 2026-02-28 10:21:30.987 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543630777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.193 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.271 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.272 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.412 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=59.94204984605312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.413 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:31 compute-0 ceph-mon[76304]: pgmap v1682: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:21:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1063165600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3543630777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532053339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance ec785d5e-9b62-4b52-a727-f64173b4b853 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.517 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.526 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.527 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:31 compute-0 nova_compute[243452]: 2026-02-28 10:21:31.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017787345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.044 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.048 243456 DEBUG nova.virt.libvirt.vif [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:12Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.049 243456 DEBUG nova.network.os_vif_util [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "vif_mac": "fa:16:3e:a2:a9:65"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.050 243456 DEBUG nova.network.os_vif_util [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.053 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.071 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <uuid>ec785d5e-9b62-4b52-a727-f64173b4b853</uuid>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <name>instance-00000066</name>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1346326288</nova:name>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:21:30</nova:creationTime>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:user uuid="f18b63d43ee24e59bdff962c9a727213">tempest-ServerRescueTestJSONUnderV235-749971841-project-member</nova:user>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:project uuid="14500a4ea1d94c0e9c58b076f5c918b5">tempest-ServerRescueTestJSONUnderV235-749971841</nova:project>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <nova:port uuid="a920b0c3-c6cf-44d3-9a22-40eda0e09078">
Feb 28 10:21:32 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <system>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="serial">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="uuid">ec785d5e-9b62-4b52-a727-f64173b4b853</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </system>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <os>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </os>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <features>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </features>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.rescue">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <target dev="vdb" bus="virtio"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:32 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:a2:a9:65"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <target dev="tapa920b0c3-c6"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/console.log" append="off"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <video>
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </video>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:21:32 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:21:32 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:21:32 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:21:32 compute-0 nova_compute[243452]: </domain>
Feb 28 10:21:32 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.084 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.132 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.133 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.133 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.134 243456 DEBUG nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] No VIF found with MAC fa:16:3e:a2:a9:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.135 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Using config drive
Feb 28 10:21:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950168576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.168 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.176 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.182 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.189 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.198 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.234 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.235 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:32 compute-0 nova_compute[243452]: 2026-02-28 10:21:32.236 243456 DEBUG nova.objects.instance [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'keypairs' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 246 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 10:21:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2532053339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4017787345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/950168576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.386 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Creating config drive at /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.393 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp63_kwpac execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:33 compute-0 ceph-mon[76304]: pgmap v1683: 305 pgs: 305 active+clean; 246 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.540 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp63_kwpac" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.579 243456 DEBUG nova.storage.rbd_utils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] rbd image ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.584 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.760 243456 DEBUG oslo_concurrency.processutils [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue ec785d5e-9b62-4b52-a727-f64173b4b853_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.761 243456 INFO nova.virt.libvirt.driver [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting local config drive /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853/disk.config.rescue because it was imported into RBD.
Feb 28 10:21:33 compute-0 kernel: tapa920b0c3-c6: entered promiscuous mode
Feb 28 10:21:33 compute-0 NetworkManager[49805]: <info>  [1772274093.8172] manager: (tapa920b0c3-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Feb 28 10:21:33 compute-0 ovn_controller[146846]: 2026-02-28T10:21:33Z|01050|binding|INFO|Claiming lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 for this chassis.
Feb 28 10:21:33 compute-0 ovn_controller[146846]: 2026-02-28T10:21:33Z|01051|binding|INFO|a920b0c3-c6cf-44d3-9a22-40eda0e09078: Claiming fa:16:3e:a2:a9:65 10.100.0.2
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.829 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 bound to our chassis
Feb 28 10:21:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.829 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:21:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:33.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f261b4a-7b63-4770-a39d-0ff0b72600de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:33 compute-0 ovn_controller[146846]: 2026-02-28T10:21:33Z|01052|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 ovn-installed in OVS
Feb 28 10:21:33 compute-0 ovn_controller[146846]: 2026-02-28T10:21:33Z|01053|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 up in Southbound
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.836 243456 DEBUG nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG oslo_concurrency.lockutils [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.837 243456 DEBUG nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.838 243456 WARNING nova.compute.manager [req-9ca796fd-fa89-4554-80a0-2ff7cae6abac req-dc9457a9-4585-4959-ae4c-bed49bb10adc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state active and task_state rescuing.
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:33 compute-0 systemd-udevd[333565]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:21:33 compute-0 systemd-machined[209480]: New machine qemu-132-instance-00000066.
Feb 28 10:21:33 compute-0 NetworkManager[49805]: <info>  [1772274093.8620] device (tapa920b0c3-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:21:33 compute-0 NetworkManager[49805]: <info>  [1772274093.8626] device (tapa920b0c3-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:21:33 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000066.
Feb 28 10:21:33 compute-0 nova_compute[243452]: 2026-02-28 10:21:33.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.003 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.030 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.031 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.324 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for ec785d5e-9b62-4b52-a727-f64173b4b853 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.325 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274094.3240635, ec785d5e-9b62-4b52-a727-f64173b4b853 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.325 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Resumed (Lifecycle Event)
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.329 243456 DEBUG nova.compute.manager [None req-66813702-d4e6-40e8-901f-ef2fd97b785d f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.344 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.365 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.368 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 259 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.391 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] During sync_power_state the instance has a pending task (rescuing). Skip.
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.392 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274094.3252978, ec785d5e-9b62-4b52-a727-f64173b4b853 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.392 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Started (Lifecycle Event)
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.420 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:34 compute-0 nova_compute[243452]: 2026-02-28 10:21:34.424 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:34 compute-0 ceph-mon[76304]: pgmap v1684: 305 pgs: 305 active+clean; 259 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.168 243456 DEBUG nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.168 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG oslo_concurrency.lockutils [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.169 243456 DEBUG nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:36 compute-0 nova_compute[243452]: 2026-02-28 10:21:36.170 243456 WARNING nova.compute.manager [req-edefcd04-622d-47d1-b229-09ac5c23fa91 req-ce513542-e5ae-46e2-b81e-4ec06ff42bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.
Feb 28 10:21:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Feb 28 10:21:36 compute-0 ceph-mon[76304]: pgmap v1685: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Feb 28 10:21:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Feb 28 10:21:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:38 compute-0 nova_compute[243452]: 2026-02-28 10:21:38.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:39 compute-0 nova_compute[243452]: 2026-02-28 10:21:39.097 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:39 compute-0 ceph-mon[76304]: pgmap v1686: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Feb 28 10:21:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.685 243456 DEBUG nova.compute.manager [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.685 243456 DEBUG nova.compute.manager [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.687 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.688 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.688 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.740 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.741 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.761 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.848 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.849 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.858 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.859 243456 INFO nova.compute.claims [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:21:40 compute-0 nova_compute[243452]: 2026-02-28 10:21:40.990 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001119069245175675 of space, bias 1.0, pg target 0.3357207735527025 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493206594747484 of space, bias 1.0, pg target 0.7479619784242452 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.57530516942218e-07 of space, bias 4.0, pg target 0.0009090366203306615 quantized to 16 (current 16)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:21:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.294 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.295 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.298 243456 DEBUG nova.compute.manager [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.299 243456 DEBUG nova.compute.manager [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.299 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.318 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.393 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:41 compute-0 ceph-mon[76304]: pgmap v1687: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 10:21:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553236120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.608 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.613 243456 DEBUG nova.compute.provider_tree [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.631 243456 DEBUG nova.scheduler.client.report [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.664 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.665 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.668 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.677 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.677 243456 INFO nova.compute.claims [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.727 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.729 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.751 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.788 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.852 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.897 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.901 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.902 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating image(s)
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.944 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:41 compute-0 nova_compute[243452]: 2026-02-28 10:21:41.979 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.008 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.012 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.102 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.103 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.104 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.104 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.125 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.129 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.395 243456 DEBUG nova.policy [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '70e8f691ae0f4768bb68cd8d497033e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '335faa1173e64cf8a7b107ae6238353d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.400 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.401 243456 DEBUG nova.network.neutron [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.411 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994791735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.447 243456 DEBUG oslo_concurrency.lockutils [req-12d54ea1-7a88-4822-9f48-58e679254996 req-f788e0d0-7159-4481-a21b-53288dcb4e3e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.448 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.449 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.453 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2553236120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2994791735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.501 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] resizing rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.536 243456 DEBUG nova.compute.provider_tree [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.555 243456 DEBUG nova.scheduler.client.report [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.597 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.599 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.608 243456 DEBUG nova.objects.instance [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'migration_context' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Ensure instance console log exists: /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.624 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.652 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.654 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.670 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.693 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.801 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.803 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.804 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating image(s)
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.829 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.857 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.880 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.885 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.916 243456 DEBUG nova.policy [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.921 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.921 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.922 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 WARNING nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.923 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 DEBUG oslo_concurrency.lockutils [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 DEBUG nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.924 243456 WARNING nova.compute.manager [req-23af569c-a7d7-475a-88d0-3955c5bc7a53 req-d06d97cf-8085-466d-a1de-3f25ecfb7286 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state None.
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.960 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.961 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.962 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.963 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.987 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:42 compute-0 nova_compute[243452]: 2026-02-28 10:21:42.992 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0bafc3af-eadf-4d97-9acf-026c531362c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.203 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0bafc3af-eadf-4d97-9acf-026c531362c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.258 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Successfully created port: 247791be-e482-41d7-b078-7328138dd0ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.270 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.339 243456 DEBUG nova.objects.instance [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.354 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.354 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Ensure instance console log exists: /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.355 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:43 compute-0 ceph-mon[76304]: pgmap v1688: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:21:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.947 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.947 243456 DEBUG nova.network.neutron [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.979 243456 DEBUG oslo_concurrency.lockutils [req-43d6402d-accd-424c-97c3-b0b58818f165 req-808ae0a4-7032-4af9-ab3f-fc3230100cba 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:43 compute-0 nova_compute[243452]: 2026-02-28 10:21:43.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.162 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Successfully created port: 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.326 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Successfully updated port: 247791be-e482-41d7-b078-7328138dd0ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.343 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.344 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquired lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.344 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:21:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 296 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 95 op/s
Feb 28 10:21:44 compute-0 ceph-mon[76304]: pgmap v1689: 305 pgs: 305 active+clean; 296 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 95 op/s
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.527 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.871 243456 DEBUG nova.compute.manager [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-changed-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.872 243456 DEBUG nova.compute.manager [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Refreshing instance network info cache due to event network-changed-247791be-e482-41d7-b078-7328138dd0ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:44 compute-0 nova_compute[243452]: 2026-02-28 10:21:44.872 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:45 compute-0 podman[334012]: 2026-02-28 10:21:45.160368151 +0000 UTC m=+0.085751166 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 28 10:21:45 compute-0 podman[334011]: 2026-02-28 10:21:45.181611904 +0000 UTC m=+0.107421322 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:45 compute-0 NetworkManager[49805]: <info>  [1772274105.2720] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Feb 28 10:21:45 compute-0 NetworkManager[49805]: <info>  [1772274105.2728] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.303 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.332 243456 WARNING nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] While synchronizing instance power states, found 3 instances in the database and 1 instances on the hypervisor.
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid ec785d5e-9b62-4b52-a727-f64173b4b853 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.334 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.334 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.362 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.365 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.446 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Successfully updated port: 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.470 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.470 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.471 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:21:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:21:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:21:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:21:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:21:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:21:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3602889676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.591 243456 DEBUG nova.network.neutron [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.615 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Releasing lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.616 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance network_info: |[{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.617 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.617 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Refreshing network info cache for port 247791be-e482-41d7-b078-7328138dd0ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.623 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start _get_guest_xml network_info=[{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.629 243456 WARNING nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.634 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.635 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.638 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.639 243456 DEBUG nova.virt.libvirt.host [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.640 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.641 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.641 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.642 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.642 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.643 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.643 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.644 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.645 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.645 243456 DEBUG nova.virt.hardware [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.650 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.686 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG nova.compute.manager [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG nova.compute.manager [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:45 compute-0 nova_compute[243452]: 2026-02-28 10:21:45.696 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513603687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.187 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.210 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.214 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 344 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 148 op/s
Feb 28 10:21:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/513603687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:46 compute-0 ceph-mon[76304]: pgmap v1690: 305 pgs: 305 active+clean; 344 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 148 op/s
Feb 28 10:21:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022739211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.897 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.899 243456 DEBUG nova.virt.libvirt.vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:41Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.899 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.900 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.901 243456 DEBUG nova.objects.instance [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.916 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <uuid>6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</uuid>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <name>instance-00000067</name>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1786122360</nova:name>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:21:45</nova:creationTime>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:user uuid="70e8f691ae0f4768bb68cd8d497033e8">tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member</nova:user>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:project uuid="335faa1173e64cf8a7b107ae6238353d">tempest-ServersNegativeTestMultiTenantJSON-1249930023</nova:project>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <nova:port uuid="247791be-e482-41d7-b078-7328138dd0ea">
Feb 28 10:21:46 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <system>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="serial">6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="uuid">6f73ef31-2ae0-4765-9f6c-fb732af2b3f4</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </system>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <os>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </os>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <features>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </features>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk">
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config">
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:46 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:d2:c3:14"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <target dev="tap247791be-e4"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/console.log" append="off"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <video>
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </video>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:21:46 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:21:46 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:21:46 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:21:46 compute-0 nova_compute[243452]: </domain>
Feb 28 10:21:46 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.917 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Preparing to wait for external event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.918 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.919 243456 DEBUG nova.virt.libvirt.vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:41Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.919 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG nova.network.os_vif_util [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG os_vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.921 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.921 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.926 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.926 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247791be-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.927 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap247791be-e4, col_values=(('external_ids', {'iface-id': '247791be-e482-41d7-b078-7328138dd0ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:c3:14', 'vm-uuid': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:46 compute-0 NetworkManager[49805]: <info>  [1772274106.9296] manager: (tap247791be-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.935 243456 INFO os_vif [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4')
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.984 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] No VIF found with MAC fa:16:3e:d2:c3:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:21:46 compute-0 nova_compute[243452]: 2026-02-28 10:21:46.985 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Using config drive
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.009 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.370 243456 DEBUG nova.network.neutron [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.401 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.402 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance network_info: |[{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.403 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.403 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.406 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start _get_guest_xml network_info=[{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.412 243456 WARNING nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.417 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.418 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.424 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.424 243456 DEBUG nova.virt.libvirt.host [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.425 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.425 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.426 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.426 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.427 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.428 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.429 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.429 243456 DEBUG nova.virt.hardware [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.432 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.532 243456 DEBUG nova.compute.manager [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.533 243456 DEBUG nova.compute.manager [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.533 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.534 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.534 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3022739211' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.674 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updated VIF entry in instance network info cache for port 247791be-e482-41d7-b078-7328138dd0ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.675 243456 DEBUG nova.network.neutron [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [{"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.700 243456 DEBUG oslo_concurrency.lockutils [req-91a42177-0aef-441a-a21d-5432b4245bd1 req-d236fcce-d20a-4898-8351-e81a599a3968 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.709 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Creating config drive at /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.713 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcxm3vk4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.855 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcxm3vk4i" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.882 243456 DEBUG nova.storage.rbd_utils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] rbd image 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.888 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508671135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.967 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.993 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:47 compute-0 nova_compute[243452]: 2026-02-28 10:21:47.999 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.065 243456 DEBUG oslo_concurrency.processutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.066 243456 INFO nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deleting local config drive /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4/disk.config because it was imported into RBD.
Feb 28 10:21:48 compute-0 kernel: tap247791be-e4: entered promiscuous mode
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.1155] manager: (tap247791be-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Feb 28 10:21:48 compute-0 ovn_controller[146846]: 2026-02-28T10:21:48Z|01054|binding|INFO|Claiming lport 247791be-e482-41d7-b078-7328138dd0ea for this chassis.
Feb 28 10:21:48 compute-0 ovn_controller[146846]: 2026-02-28T10:21:48Z|01055|binding|INFO|247791be-e482-41d7-b078-7328138dd0ea: Claiming fa:16:3e:d2:c3:14 10.100.0.3
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 ovn_controller[146846]: 2026-02-28T10:21:48Z|01056|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea ovn-installed in OVS
Feb 28 10:21:48 compute-0 ovn_controller[146846]: 2026-02-28T10:21:48Z|01057|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea up in Southbound
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.126 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c3:14 10.100.0.3'], port_security=['fa:16:3e:d2:c3:14 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10418c0f-a33e-4d93-99b1-462207fda43a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '335faa1173e64cf8a7b107ae6238353d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6bd5e14f-e535-467a-9991-a18d1756839a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b473e0e-3363-4c16-9aa6-bed6b246a535, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247791be-e482-41d7-b078-7328138dd0ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.128 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247791be-e482-41d7-b078-7328138dd0ea in datapath 10418c0f-a33e-4d93-99b1-462207fda43a bound to our chassis
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.130 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10418c0f-a33e-4d93-99b1-462207fda43a
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.147 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8c620339-bb87-425a-8389-ed253dcec72a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.148 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10418c0f-a1 in ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.151 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10418c0f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.152 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[28733d90-f886-4f59-8c29-1ed09eb1a921]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 systemd-machined[209480]: New machine qemu-133-instance-00000067.
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.154 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd807ad9-5940-414d-9b22-68a7e65778be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-00000067.
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.172 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc44bec-b837-4f51-ba0b-6f0b145e571c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.189 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff76dc5c-1c34-43bb-9c77-4a95544c5ce2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 systemd-udevd[334258]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.2171] device (tap247791be-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.2184] device (tap247791be-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.225 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[aaef5a4c-3fb4-49cf-bc82-8b795f738f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.2349] manager: (tap10418c0f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b368ee28-c12f-43af-a794-eee6689003b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.283 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5376905b-785f-407c-867b-66178be3b4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.287 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9381eb-a5ec-4d83-8cdb-327be6ad4ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.3118] device (tap10418c0f-a0): carrier: link connected
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.317 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e74d424-0b93-48e7-bf0b-d37809ce2e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c89a2e37-79b6-4b81-995e-190a31568ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10418c0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561559, 'reachable_time': 39554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334287, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.356 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac77c3-deff-403e-9da5-4ea6e1d01ff7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:9a0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561559, 'tstamp': 561559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334288, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7b7263-42ab-4163-8234-13ad60d2a13d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10418c0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561559, 'reachable_time': 39554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334289, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.398 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc769de-b7bc-4ee2-87c8-c6cbbe16e0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48fa2315-0dad-4886-9b36-9ecfeaa5c39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10418c0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.454 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10418c0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.4564] manager: (tap10418c0f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Feb 28 10:21:48 compute-0 kernel: tap10418c0f-a0: entered promiscuous mode
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.458 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10418c0f-a0, col_values=(('external_ids', {'iface-id': 'c380a886-9856-4ecb-b5cc-7df0476b3254'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 ovn_controller[146846]: 2026-02-28T10:21:48Z|01058|binding|INFO|Releasing lport c380a886-9856-4ecb-b5cc-7df0476b3254 from this chassis (sb_readonly=0)
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.460 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5704a405-276b-4cc4-90df-e2135d6bfdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.466 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-10418c0f-a33e-4d93-99b1-462207fda43a
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/10418c0f-a33e-4d93-99b1-462207fda43a.pid.haproxy
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 10418c0f-a33e-4d93-99b1-462207fda43a
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:21:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:48.466 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'env', 'PROCESS_TAG=haproxy-10418c0f-a33e-4d93-99b1-462207fda43a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10418c0f-a33e-4d93-99b1-462207fda43a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:21:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:21:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517046644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.579 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.581 243456 DEBUG nova.virt.libvirt.vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:42Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.581 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.582 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.583 243456 DEBUG nova.objects.instance [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.600 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <uuid>0bafc3af-eadf-4d97-9acf-026c531362c3</uuid>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <name>instance-00000068</name>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194</nova:name>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:21:47</nova:creationTime>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <nova:port uuid="09ffa25b-e3df-45c2-9db2-423ed33e2a28">
Feb 28 10:21:48 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <system>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="serial">0bafc3af-eadf-4d97-9acf-026c531362c3</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="uuid">0bafc3af-eadf-4d97-9acf-026c531362c3</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </system>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <os>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </os>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <features>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </features>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0bafc3af-eadf-4d97-9acf-026c531362c3_disk">
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config">
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </source>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:21:48 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:93:cc:93"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <target dev="tap09ffa25b-e3"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/console.log" append="off"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <video>
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </video>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:21:48 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:21:48 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:21:48 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:21:48 compute-0 nova_compute[243452]: </domain>
Feb 28 10:21:48 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Preparing to wait for external event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.605 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.606 243456 DEBUG nova.virt.libvirt.vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:21:42Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.606 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.607 243456 DEBUG nova.network.os_vif_util [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.607 243456 DEBUG os_vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.608 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09ffa25b-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.611 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09ffa25b-e3, col_values=(('external_ids', {'iface-id': '09ffa25b-e3df-45c2-9db2-423ed33e2a28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:cc:93', 'vm-uuid': '0bafc3af-eadf-4d97-9acf-026c531362c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 NetworkManager[49805]: <info>  [1772274108.6135] manager: (tap09ffa25b-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.617 243456 INFO os_vif [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3')
Feb 28 10:21:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2508671135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:48 compute-0 ceph-mon[76304]: pgmap v1691: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Feb 28 10:21:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3517046644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:21:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.682 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.683 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.683 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:93:cc:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.684 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Using config drive
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.709 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:48 compute-0 podman[334340]: 2026-02-28 10:21:48.855125314 +0000 UTC m=+0.055533784 container create 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:21:48 compute-0 systemd[1]: Started libpod-conmon-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope.
Feb 28 10:21:48 compute-0 podman[334340]: 2026-02-28 10:21:48.826024054 +0000 UTC m=+0.026432534 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:21:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59a038730ab4b956f42ebaa25408510fb18b7aaf9726af2e166ee3c3b54acf3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:48 compute-0 podman[334340]: 2026-02-28 10:21:48.961818384 +0000 UTC m=+0.162226894 container init 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:21:48 compute-0 podman[334340]: 2026-02-28 10:21:48.965888371 +0000 UTC m=+0.166296851 container start 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:21:48 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : New worker (334362) forked
Feb 28 10:21:48 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : Loading success.
Feb 28 10:21:48 compute-0 nova_compute[243452]: 2026-02-28 10:21:48.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.249 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.2485044, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.250 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Started (Lifecycle Event)
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.275 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.280 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.2498407, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.280 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Paused (Lifecycle Event)
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.310 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.378 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.379 243456 DEBUG nova.network.neutron [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.401 243456 DEBUG oslo_concurrency.lockutils [req-7254dec2-1bed-45a1-821d-5795c0bb967a req-7b4e9ede-89f7-4dc4-a215-0be44eed0a0e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.415 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Creating config drive at /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.418 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqu6i4s8c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.559 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqu6i4s8c" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.600 243456 DEBUG nova.storage.rbd_utils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.606 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.652 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.654 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.655 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.655 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.656 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Processing event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.657 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.658 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.659 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.659 243456 DEBUG oslo_concurrency.lockutils [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.660 243456 DEBUG nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.661 243456 WARNING nova.compute.manager [req-6c458fa8-bf94-47c6-bc26-1fb6b1f641e1 req-e1843abd-88f1-40a9-9062-f38c6d1290a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received unexpected event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea for instance with vm_state building and task_state spawning.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.663 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.668 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274109.6681738, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.669 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Resumed (Lifecycle Event)
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.671 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.676 243456 INFO nova.virt.libvirt.driver [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance spawned successfully.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.677 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.695 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.701 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.704 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.705 243456 DEBUG nova.network.neutron [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.709 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.710 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.711 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.711 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.712 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.712 243456 DEBUG nova.virt.libvirt.driver [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.746 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.748 243456 DEBUG oslo_concurrency.lockutils [req-86196260-6576-4fde-a72e-0a1eee4093b0 req-d4e74e4b-0495-434f-a035-a46f1e032aa4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.786 243456 INFO nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 7.89 seconds to spawn the instance on the hypervisor.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.787 243456 DEBUG nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.788 243456 DEBUG oslo_concurrency.processutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config 0bafc3af-eadf-4d97-9acf-026c531362c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.789 243456 INFO nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deleting local config drive /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3/disk.config because it was imported into RBD.
Feb 28 10:21:49 compute-0 kernel: tap09ffa25b-e3: entered promiscuous mode
Feb 28 10:21:49 compute-0 NetworkManager[49805]: <info>  [1772274109.8539] manager: (tap09ffa25b-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Feb 28 10:21:49 compute-0 systemd-udevd[334280]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:21:49 compute-0 ovn_controller[146846]: 2026-02-28T10:21:49Z|01059|binding|INFO|Claiming lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 for this chassis.
Feb 28 10:21:49 compute-0 ovn_controller[146846]: 2026-02-28T10:21:49Z|01060|binding|INFO|09ffa25b-e3df-45c2-9db2-423ed33e2a28: Claiming fa:16:3e:93:cc:93 10.100.0.9
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.870 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:93 10.100.0.9'], port_security=['fa:16:3e:93:cc:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0bafc3af-eadf-4d97-9acf-026c531362c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d9d1441-5ce1-4022-9f50-b5399f868b07 88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09ffa25b-e3df-45c2-9db2-423ed33e2a28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.872 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a bound to our chassis
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.875 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.878 243456 INFO nova.compute.manager [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 9.06 seconds to build instance.
Feb 28 10:21:49 compute-0 ovn_controller[146846]: 2026-02-28T10:21:49Z|01061|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 ovn-installed in OVS
Feb 28 10:21:49 compute-0 ovn_controller[146846]: 2026-02-28T10:21:49Z|01062|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 up in Southbound
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.882 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:49 compute-0 NetworkManager[49805]: <info>  [1772274109.8913] device (tap09ffa25b-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:21:49 compute-0 NetworkManager[49805]: <info>  [1772274109.8925] device (tap09ffa25b-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.898 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97f1baff-8639-49d4-bc3e-559397234197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.899 243456 DEBUG oslo_concurrency.lockutils [None req-1f7672c4-03df-4726-a90b-26ee66ddf4c5 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.899 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3451a2ef-e1 in ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.899 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.900 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:49 compute-0 nova_compute[243452]: 2026-02-28 10:21:49.900 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:49 compute-0 systemd-machined[209480]: New machine qemu-134-instance-00000068.
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.902 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3451a2ef-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20b41717-7268-49c5-999c-ea0c8f803e88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1daf5e1-02d8-40c4-a008-c62fdbe272fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-00000068.
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.916 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a385033f-dc7f-43cd-97eb-ce5136267f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b41aa9f-9d6b-48bf-8f35-a0e93ceed10f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.961 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b10713a5-d08b-4bea-93a6-290f14d512d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:49.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cb4fab-91e0-444d-9f2e-9984b5faf99d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:49 compute-0 NetworkManager[49805]: <info>  [1772274109.9710] manager: (tap3451a2ef-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc9639c-49a7-4632-8bc1-47ea4c416971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.009 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3613b634-0940-435e-959d-4add3302e8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 NetworkManager[49805]: <info>  [1772274110.0486] device (tap3451a2ef-e0): carrier: link connected
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.057 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17f25f50-abcf-4bf1-a50f-26b5462f081a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.076 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6a32e26b-3cbc-4869-998e-4be63e3c6c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334485, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.096 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d474e820-fb6e-4768-9c49-d2768eb88ad3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:c2b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561733, 'tstamp': 561733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334486, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.114 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1660708-76ff-4f36-8589-f3ec2958ccc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334487, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.147 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9f9598-6b21-4a70-be0f-0ac43d916788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd17419-c5fb-495f-bb33-a533bd1cf59f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:50 compute-0 kernel: tap3451a2ef-e0: entered promiscuous mode
Feb 28 10:21:50 compute-0 NetworkManager[49805]: <info>  [1772274110.2245] manager: (tap3451a2ef-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.228 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.230 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:50 compute-0 ovn_controller[146846]: 2026-02-28T10:21:50Z|01063|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.236 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2234411-8442-4555-9b3d-2e39aac9df99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/3451a2ef-e97c-49df-813f-57c35ec0999a.pid.haproxy
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:21:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:50.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'env', 'PROCESS_TAG=haproxy-3451a2ef-e97c-49df-813f-57c35ec0999a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3451a2ef-e97c-49df-813f-57c35ec0999a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.382 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274110.3818536, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.382 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Started (Lifecycle Event)
Feb 28 10:21:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.412 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.418 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274110.3838344, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.419 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Paused (Lifecycle Event)
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.441 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.446 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:50 compute-0 nova_compute[243452]: 2026-02-28 10:21:50.468 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:50 compute-0 podman[334562]: 2026-02-28 10:21:50.645951913 +0000 UTC m=+0.064991507 container create b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:21:50 compute-0 systemd[1]: Started libpod-conmon-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope.
Feb 28 10:21:50 compute-0 podman[334562]: 2026-02-28 10:21:50.611613972 +0000 UTC m=+0.030653646 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:21:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:21:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefe496d22c05c4788569ae7202c870c2d3e386af2fddcdb07f716cc705ac851/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:21:50 compute-0 podman[334562]: 2026-02-28 10:21:50.738541826 +0000 UTC m=+0.157581450 container init b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:21:50 compute-0 podman[334562]: 2026-02-28 10:21:50.745488386 +0000 UTC m=+0.164527980 container start b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:21:50 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : New worker (334583) forked
Feb 28 10:21:50 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : Loading success.
Feb 28 10:21:51 compute-0 ceph-mon[76304]: pgmap v1692: 305 pgs: 305 active+clean; 372 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.630 243456 DEBUG nova.compute.manager [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG nova.compute.manager [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing instance network info cache due to event network-changed-a920b0c3-c6cf-44d3-9a22-40eda0e09078. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.631 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Refreshing network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.790 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.790 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.791 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Processing event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.792 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.792 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.793 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.793 243456 DEBUG oslo_concurrency.lockutils [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.794 243456 DEBUG nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.794 243456 WARNING nova.compute.manager [req-978f0740-e7d4-409a-a146-f2279e1d0073 req-7195dda3-513a-47b6-8725-82460fa48bfc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state building and task_state spawning.
Feb 28 10:21:51 compute-0 nova_compute[243452]: 2026-02-28 10:21:51.795 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.028 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.029 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274112.029148, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.029 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Resumed (Lifecycle Event)
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.034 243456 INFO nova.virt.libvirt.driver [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance spawned successfully.
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.035 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.051 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.056 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.057 243456 DEBUG nova.virt.libvirt.driver [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.062 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.085 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.125 243456 INFO nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 9.32 seconds to spawn the instance on the hypervisor.
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.125 243456 DEBUG nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.175 243456 INFO nova.compute.manager [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 10.80 seconds to build instance.
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.192 243456 DEBUG oslo_concurrency.lockutils [None req-2633adc6-2f12-4a13-a1df-8fb73134c660 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.192 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.193 243456 INFO nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:21:52 compute-0 nova_compute[243452]: 2026-02-28 10:21:52.193 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 373 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Feb 28 10:21:53 compute-0 ceph-mon[76304]: pgmap v1693: 305 pgs: 305 active+clean; 373 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 793 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Feb 28 10:21:53 compute-0 nova_compute[243452]: 2026-02-28 10:21:53.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:53 compute-0 nova_compute[243452]: 2026-02-28 10:21:53.866 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updated VIF entry in instance network info cache for port a920b0c3-c6cf-44d3-9a22-40eda0e09078. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:53 compute-0 nova_compute[243452]: 2026-02-28 10:21:53.867 243456 DEBUG nova.network.neutron [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [{"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:53 compute-0 nova_compute[243452]: 2026-02-28 10:21:53.883 243456 DEBUG oslo_concurrency.lockutils [req-ff0e5a8b-286f-448b-92cc-f46fc54c6dad req-c2caf2a0-7ee2-4b6c-9dda-403798dac40e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ec785d5e-9b62-4b52-a727-f64173b4b853" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:53 compute-0 nova_compute[243452]: 2026-02-28 10:21:53.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.239 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.240 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.240 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.241 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.241 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.242 243456 INFO nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Terminating instance
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.243 243456 DEBUG nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:21:54 compute-0 kernel: tap247791be-e4 (unregistering): left promiscuous mode
Feb 28 10:21:54 compute-0 NetworkManager[49805]: <info>  [1772274114.2883] device (tap247791be-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 ovn_controller[146846]: 2026-02-28T10:21:54Z|01064|binding|INFO|Releasing lport 247791be-e482-41d7-b078-7328138dd0ea from this chassis (sb_readonly=0)
Feb 28 10:21:54 compute-0 ovn_controller[146846]: 2026-02-28T10:21:54Z|01065|binding|INFO|Setting lport 247791be-e482-41d7-b078-7328138dd0ea down in Southbound
Feb 28 10:21:54 compute-0 ovn_controller[146846]: 2026-02-28T10:21:54Z|01066|binding|INFO|Removing iface tap247791be-e4 ovn-installed in OVS
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.301 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c3:14 10.100.0.3'], port_security=['fa:16:3e:d2:c3:14 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6f73ef31-2ae0-4765-9f6c-fb732af2b3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10418c0f-a33e-4d93-99b1-462207fda43a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '335faa1173e64cf8a7b107ae6238353d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6bd5e14f-e535-467a-9991-a18d1756839a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b473e0e-3363-4c16-9aa6-bed6b246a535, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247791be-e482-41d7-b078-7328138dd0ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.303 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247791be-e482-41d7-b078-7328138dd0ea in datapath 10418c0f-a33e-4d93-99b1-462207fda43a unbound from our chassis
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.304 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10418c0f-a33e-4d93-99b1-462207fda43a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c04289b-8355-4445-928a-01d0b1e1a3d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.307 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a namespace which is not needed anymore
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000067.scope: Deactivated successfully.
Feb 28 10:21:54 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000067.scope: Consumed 5.622s CPU time.
Feb 28 10:21:54 compute-0 systemd-machined[209480]: Machine qemu-133-instance-00000067 terminated.
Feb 28 10:21:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 374 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : haproxy version is 2.8.14-c23fe91
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [NOTICE]   (334360) : path to executable is /usr/sbin/haproxy
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : Exiting Master process...
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : Exiting Master process...
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [ALERT]    (334360) : Current worker (334362) exited with code 143 (Terminated)
Feb 28 10:21:54 compute-0 neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a[334356]: [WARNING]  (334360) : All workers exited. Exiting... (0)
Feb 28 10:21:54 compute-0 systemd[1]: libpod-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope: Deactivated successfully.
Feb 28 10:21:54 compute-0 podman[334618]: 2026-02-28 10:21:54.42711898 +0000 UTC m=+0.040182280 container died 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30-userdata-shm.mount: Deactivated successfully.
Feb 28 10:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c59a038730ab4b956f42ebaa25408510fb18b7aaf9726af2e166ee3c3b54acf3-merged.mount: Deactivated successfully.
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 podman[334618]: 2026-02-28 10:21:54.46488269 +0000 UTC m=+0.077945990 container cleanup 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:21:54 compute-0 systemd[1]: libpod-conmon-9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30.scope: Deactivated successfully.
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.472 243456 INFO nova.virt.libvirt.driver [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Instance destroyed successfully.
Feb 28 10:21:54 compute-0 ceph-mon[76304]: pgmap v1694: 305 pgs: 305 active+clean; 374 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.474 243456 DEBUG nova.objects.instance [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lazy-loading 'resources' on Instance uuid 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.499 243456 DEBUG nova.virt.libvirt.vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1786122360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1786122360',id=103,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='335faa1173e64cf8a7b107ae6238353d',ramdisk_id='',reservation_id='r-bbmf5pbr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1249930023-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:49Z,user_data=None,user_id='70e8f691ae0f4768bb68cd8d497033e8',uuid=6f73ef31-2ae0-4765-9f6c-fb732af2b3f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.500 243456 DEBUG nova.network.os_vif_util [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converting VIF {"id": "247791be-e482-41d7-b078-7328138dd0ea", "address": "fa:16:3e:d2:c3:14", "network": {"id": "10418c0f-a33e-4d93-99b1-462207fda43a", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1972878049-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "335faa1173e64cf8a7b107ae6238353d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247791be-e4", "ovs_interfaceid": "247791be-e482-41d7-b078-7328138dd0ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.501 243456 DEBUG nova.network.os_vif_util [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.501 243456 DEBUG os_vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.504 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247791be-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.509 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.512 243456 INFO os_vif [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c3:14,bridge_name='br-int',has_traffic_filtering=True,id=247791be-e482-41d7-b078-7328138dd0ea,network=Network(10418c0f-a33e-4d93-99b1-462207fda43a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247791be-e4')
Feb 28 10:21:54 compute-0 podman[334656]: 2026-02-28 10:21:54.537025903 +0000 UTC m=+0.051176168 container remove 9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.541 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03bc25-be60-4859-bf56-bb231538f0d9]: (4, ('Sat Feb 28 10:21:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a (9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30)\n9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30\nSat Feb 28 10:21:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a (9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30)\n9d3f463d5c875a5f66be19d8dc94f064b1afaea5dd5f44861b57d83b5fdeca30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.543 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42ccef82-ce79-4ec3-b5b8-db30f37fa9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.544 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10418c0f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 kernel: tap10418c0f-a0: left promiscuous mode
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.556 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e37b1c2-8707-4b96-b221-dd198ae67202]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.570 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdc520e-bfb7-49a5-803c-999ebd9bda30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac3fcf7-ff62-4ff8-9d20-6592b2f10c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff94107c-9952-484b-b20f-8195b5f6ceec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561550, 'reachable_time': 18717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334690, 'error': None, 'target': 'ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d10418c0f\x2da33e\x2d4d93\x2d99b1\x2d462207fda43a.mount: Deactivated successfully.
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.590 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10418c0f-a33e-4d93-99b1-462207fda43a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:21:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:54.590 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeed348-57a2-466a-85fc-b394724bedd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.763 243456 INFO nova.virt.libvirt.driver [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deleting instance files /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_del
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.764 243456 INFO nova.virt.libvirt.driver [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deletion of /var/lib/nova/instances/6f73ef31-2ae0-4765-9f6c-fb732af2b3f4_del complete
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.830 243456 INFO nova.compute.manager [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.831 243456 DEBUG oslo.service.loopingcall [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.831 243456 DEBUG nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:21:54 compute-0 nova_compute[243452]: 2026-02-28 10:21:54.832 243456 DEBUG nova.network.neutron [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.471 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.472 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.472 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG oslo_concurrency.lockutils [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.473 243456 DEBUG nova.compute.manager [req-8e383a08-98d5-41e1-838d-de7a3914c603 req-cf0eb54f-48a0-45c1-95af-a7b8c79dca41 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-unplugged-247791be-e482-41d7-b078-7328138dd0ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.546 243456 DEBUG nova.compute.manager [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG nova.compute.manager [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.547 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:21:55 compute-0 nova_compute[243452]: 2026-02-28 10:21:55.548 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.044 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.044 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.045 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.047 243456 INFO nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Terminating instance
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.048 243456 DEBUG nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:21:56 compute-0 kernel: tapa920b0c3-c6 (unregistering): left promiscuous mode
Feb 28 10:21:56 compute-0 NetworkManager[49805]: <info>  [1772274116.1230] device (tapa920b0c3-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 ovn_controller[146846]: 2026-02-28T10:21:56Z|01067|binding|INFO|Releasing lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 from this chassis (sb_readonly=0)
Feb 28 10:21:56 compute-0 ovn_controller[146846]: 2026-02-28T10:21:56Z|01068|binding|INFO|Setting lport a920b0c3-c6cf-44d3-9a22-40eda0e09078 down in Southbound
Feb 28 10:21:56 compute-0 ovn_controller[146846]: 2026-02-28T10:21:56Z|01069|binding|INFO|Removing iface tapa920b0c3-c6 ovn-installed in OVS
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.148 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a9:65 10.100.0.2'], port_security=['fa:16:3e:a2:a9:65 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ec785d5e-9b62-4b52-a727-f64173b4b853', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d053625-c393-49e7-ae73-bce276bdc186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '14500a4ea1d94c0e9c58b076f5c918b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b56c898-1f47-46b5-8fd8-bf772110d194', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0513d6b5-e918-4c66-8302-fa0b35a813c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a920b0c3-c6cf-44d3-9a22-40eda0e09078) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:21:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a920b0c3-c6cf-44d3-9a22-40eda0e09078 in datapath 4d053625-c393-49e7-ae73-bce276bdc186 unbound from our chassis
Feb 28 10:21:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.150 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4d053625-c393-49e7-ae73-bce276bdc186 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:21:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:56.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[420ec822-fd03-47dd-bddc-2200a19be7f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:21:56 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 28 10:21:56 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000066.scope: Consumed 12.035s CPU time.
Feb 28 10:21:56 compute-0 systemd-machined[209480]: Machine qemu-132-instance-00000066 terminated.
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.283 243456 INFO nova.virt.libvirt.driver [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Instance destroyed successfully.
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.284 243456 DEBUG nova.objects.instance [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lazy-loading 'resources' on Instance uuid ec785d5e-9b62-4b52-a727-f64173b4b853 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.297 243456 DEBUG nova.virt.libvirt.vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1346326288',display_name='tempest-ServerRescueTestJSONUnderV235-server-1346326288',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1346326288',id=102,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='14500a4ea1d94c0e9c58b076f5c918b5',ramdisk_id='',reservation_id='r-svzcm10h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-749971841',owner_user_name='tempest-ServerRescueTestJSONUnderV235-749971841-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:34Z,user_data=None,user_id='f18b63d43ee24e59bdff962c9a727213',uuid=ec785d5e-9b62-4b52-a727-f64173b4b853,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.298 243456 DEBUG nova.network.os_vif_util [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converting VIF {"id": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "address": "fa:16:3e:a2:a9:65", "network": {"id": "4d053625-c393-49e7-ae73-bce276bdc186", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-90883397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "14500a4ea1d94c0e9c58b076f5c918b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa920b0c3-c6", "ovs_interfaceid": "a920b0c3-c6cf-44d3-9a22-40eda0e09078", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.299 243456 DEBUG nova.network.os_vif_util [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.299 243456 DEBUG os_vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.302 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920b0c3-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.303 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.304 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.306 243456 INFO os_vif [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:a9:65,bridge_name='br-int',has_traffic_filtering=True,id=a920b0c3-c6cf-44d3-9a22-40eda0e09078,network=Network(4d053625-c393-49e7-ae73-bce276bdc186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa920b0c3-c6')
Feb 28 10:21:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 361 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.414 243456 DEBUG nova.network.neutron [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.589 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.590 243456 DEBUG nova.network.neutron [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.749 243456 INFO nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Took 1.92 seconds to deallocate network for instance.
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.854 243456 DEBUG oslo_concurrency.lockutils [req-030025ef-1bcc-4a2a-8fa3-3f9aa0dcbbbe req-7965bbf6-0559-4b10-b8e8-0afdeb351450 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.866 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.867 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.905 243456 INFO nova.virt.libvirt.driver [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deleting instance files /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853_del
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.906 243456 INFO nova.virt.libvirt.driver [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deletion of /var/lib/nova/instances/ec785d5e-9b62-4b52-a727-f64173b4b853_del complete
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.980 243456 INFO nova.compute.manager [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 0.93 seconds to destroy the instance on the hypervisor.
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.980 243456 DEBUG oslo.service.loopingcall [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.981 243456 DEBUG nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.982 243456 DEBUG nova.network.neutron [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:21:56 compute-0 nova_compute[243452]: 2026-02-28 10:21:56.998 243456 DEBUG oslo_concurrency.processutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:57 compute-0 ceph-mon[76304]: pgmap v1695: 305 pgs: 305 active+clean; 361 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Feb 28 10:21:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790989969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.536 243456 DEBUG oslo_concurrency.processutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.544 243456 DEBUG nova.compute.provider_tree [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.567 243456 DEBUG nova.scheduler.client.report [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.608 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.609 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.609 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.610 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.610 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] No waiting events found dispatching network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.611 243456 WARNING nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received unexpected event network-vif-plugged-247791be-e482-41d7-b078-7328138dd0ea for instance with vm_state deleted and task_state None.
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.611 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Received event network-vif-deleted-247791be-e482-41d7-b078-7328138dd0ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.612 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.613 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.613 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.614 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-unplugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.614 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.615 243456 DEBUG oslo_concurrency.lockutils [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.616 243456 DEBUG nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] No waiting events found dispatching network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.616 243456 WARNING nova.compute.manager [req-60354bb1-98d7-41ed-8421-39fd0b63ec5c req-f646675a-d75c-49fd-bb69-d04388462b57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received unexpected event network-vif-plugged-a920b0c3-c6cf-44d3-9a22-40eda0e09078 for instance with vm_state rescued and task_state deleting.
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.619 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.649 243456 INFO nova.scheduler.client.report [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Deleted allocations for instance 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4
Feb 28 10:21:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.860 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:21:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:57 compute-0 nova_compute[243452]: 2026-02-28 10:21:57.950 243456 DEBUG nova.network.neutron [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.048 243456 DEBUG oslo_concurrency.lockutils [None req-91e9de3c-52cd-4eb6-9b8c-fc4221d6c7f2 70e8f691ae0f4768bb68cd8d497033e8 335faa1173e64cf8a7b107ae6238353d - - default default] Lock "6f73ef31-2ae0-4765-9f6c-fb732af2b3f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 310 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 226 op/s
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.435 243456 INFO nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Took 1.45 seconds to deallocate network for instance.
Feb 28 10:21:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/790989969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.649 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.649 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:21:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.725 243456 DEBUG oslo_concurrency.processutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:21:58 compute-0 nova_compute[243452]: 2026-02-28 10:21:58.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:21:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:21:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352467888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.301 243456 DEBUG oslo_concurrency.processutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.307 243456 DEBUG nova.compute.provider_tree [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.338 243456 DEBUG nova.scheduler.client.report [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.433 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:59 compute-0 ceph-mon[76304]: pgmap v1696: 305 pgs: 305 active+clean; 310 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 226 op/s
Feb 28 10:21:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/352467888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.486 243456 INFO nova.scheduler.client.report [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Deleted allocations for instance ec785d5e-9b62-4b52-a727-f64173b4b853
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.599 243456 DEBUG oslo_concurrency.lockutils [None req-89d844fb-2c8f-49b8-b070-b26ae5906d68 f18b63d43ee24e59bdff962c9a727213 14500a4ea1d94c0e9c58b076f5c918b5 - - default default] Lock "ec785d5e-9b62-4b52-a727-f64173b4b853" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:21:59 compute-0 nova_compute[243452]: 2026-02-28 10:21:59.698 243456 DEBUG nova.compute.manager [req-c71425bc-f088-4458-8323-a5cbd0315b3d req-df6ab0fa-6596-4949-b8ed-3a0cf85c9dd3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Received event network-vif-deleted-a920b0c3-c6cf-44d3-9a22-40eda0e09078 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 244 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 218 op/s
Feb 28 10:22:00 compute-0 ceph-mon[76304]: pgmap v1697: 305 pgs: 305 active+clean; 244 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 218 op/s
Feb 28 10:22:01 compute-0 nova_compute[243452]: 2026-02-28 10:22:01.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 200 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 224 op/s
Feb 28 10:22:02 compute-0 ovn_controller[146846]: 2026-02-28T10:22:02Z|01070|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 10:22:02 compute-0 nova_compute[243452]: 2026-02-28 10:22:02.762 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:03 compute-0 ceph-mon[76304]: pgmap v1698: 305 pgs: 305 active+clean; 200 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 224 op/s
Feb 28 10:22:03 compute-0 ovn_controller[146846]: 2026-02-28T10:22:03Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:cc:93 10.100.0.9
Feb 28 10:22:03 compute-0 ovn_controller[146846]: 2026-02-28T10:22:03Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:cc:93 10.100.0.9
Feb 28 10:22:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:03 compute-0 nova_compute[243452]: 2026-02-28 10:22:03.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 200 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 211 op/s
Feb 28 10:22:04 compute-0 ovn_controller[146846]: 2026-02-28T10:22:04Z|01071|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 10:22:04 compute-0 nova_compute[243452]: 2026-02-28 10:22:04.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:05 compute-0 ceph-mon[76304]: pgmap v1699: 305 pgs: 305 active+clean; 200 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 211 op/s
Feb 28 10:22:06 compute-0 nova_compute[243452]: 2026-02-28 10:22:06.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 215 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Feb 28 10:22:06 compute-0 ceph-mon[76304]: pgmap v1700: 305 pgs: 305 active+clean; 215 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Feb 28 10:22:06 compute-0 ovn_controller[146846]: 2026-02-28T10:22:06Z|01072|binding|INFO|Releasing lport 0715e649-02a0-4a60-88ea-663bf03161ac from this chassis (sb_readonly=0)
Feb 28 10:22:06 compute-0 nova_compute[243452]: 2026-02-28 10:22:06.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 10:22:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:08 compute-0 nova_compute[243452]: 2026-02-28 10:22:08.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:09 compute-0 ceph-mon[76304]: pgmap v1701: 305 pgs: 305 active+clean; 233 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 28 10:22:09 compute-0 nova_compute[243452]: 2026-02-28 10:22:09.468 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274114.4671314, 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:09 compute-0 nova_compute[243452]: 2026-02-28 10:22:09.468 243456 INFO nova.compute.manager [-] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] VM Stopped (Lifecycle Event)
Feb 28 10:22:09 compute-0 nova_compute[243452]: 2026-02-28 10:22:09.503 243456 DEBUG nova.compute.manager [None req-f8e8eadc-f14a-40a3-a09a-bdc18503ff74 - - - - - -] [instance: 6f73ef31-2ae0-4765-9f6c-fb732af2b3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 28 10:22:10 compute-0 ceph-mon[76304]: pgmap v1702: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 28 10:22:11 compute-0 nova_compute[243452]: 2026-02-28 10:22:11.288 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274116.2818878, ec785d5e-9b62-4b52-a727-f64173b4b853 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:11 compute-0 nova_compute[243452]: 2026-02-28 10:22:11.290 243456 INFO nova.compute.manager [-] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] VM Stopped (Lifecycle Event)
Feb 28 10:22:11 compute-0 nova_compute[243452]: 2026-02-28 10:22:11.312 243456 DEBUG nova.compute.manager [None req-6dc4cd3c-8b3b-4b0c-ade5-34a4a82fd322 - - - - - -] [instance: ec785d5e-9b62-4b52-a727-f64173b4b853] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:11 compute-0 nova_compute[243452]: 2026-02-28 10:22:11.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:12 compute-0 nova_compute[243452]: 2026-02-28 10:22:12.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Feb 28 10:22:13 compute-0 ceph-mon[76304]: pgmap v1703: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Feb 28 10:22:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:14 compute-0 nova_compute[243452]: 2026-02-28 10:22:13.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:22:14 compute-0 ceph-mon[76304]: pgmap v1704: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:22:14 compute-0 sshd-session[334770]: Received disconnect from 103.67.78.132 port 60488:11: Bye Bye [preauth]
Feb 28 10:22:14 compute-0 sshd-session[334770]: Disconnected from authenticating user root 103.67.78.132 port 60488 [preauth]
Feb 28 10:22:15 compute-0 nova_compute[243452]: 2026-02-28 10:22:15.414 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:16 compute-0 podman[334773]: 2026-02-28 10:22:16.132989566 +0000 UTC m=+0.061783924 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 28 10:22:16 compute-0 podman[334772]: 2026-02-28 10:22:16.163555848 +0000 UTC m=+0.097343651 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.716 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.717 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.734 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.825 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.826 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.839 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.840 243456 INFO nova.compute.claims [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:22:16 compute-0 nova_compute[243452]: 2026-02-28 10:22:16.956 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:17 compute-0 ceph-mon[76304]: pgmap v1705: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:22:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279325425' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.518 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.528 243456 DEBUG nova.compute.provider_tree [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.550 243456 DEBUG nova.scheduler.client.report [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.578 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.579 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.652 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.653 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.679 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.698 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.833 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.835 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.835 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating image(s)
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.861 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.889 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.921 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:17 compute-0 nova_compute[243452]: 2026-02-28 10:22:17.926 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.000 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.002 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.003 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.004 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.042 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.049 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a425770-67d6-411f-9586-1977cbc678ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.194 243456 DEBUG nova.policy [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.319 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 3a425770-67d6-411f-9586-1977cbc678ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.389 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:22:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 968 KiB/s wr, 23 op/s
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.479 243456 DEBUG nova.objects.instance [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3279325425' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.499 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.500 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Ensure instance console log exists: /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.501 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:18 compute-0 nova_compute[243452]: 2026-02-28 10:22:18.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:18.579 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:18.581 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:22:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:19 compute-0 nova_compute[243452]: 2026-02-28 10:22:19.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:19 compute-0 nova_compute[243452]: 2026-02-28 10:22:19.033 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Successfully created port: 8356f577-07af-4575-b9ba-e2764b155dcc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:22:19 compute-0 ceph-mon[76304]: pgmap v1706: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 968 KiB/s wr, 23 op/s
Feb 28 10:22:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.449 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Successfully updated port: 8356f577-07af-4575-b9ba-e2764b155dcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.469 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.470 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.470 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.482 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.483 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:20 compute-0 ceph-mon[76304]: pgmap v1707: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.501 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.573 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.573 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.582 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.582 243456 INFO nova.compute.claims [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.591 243456 DEBUG nova.compute.manager [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-changed-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.592 243456 DEBUG nova.compute.manager [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Refreshing instance network info cache due to event network-changed-8356f577-07af-4575-b9ba-e2764b155dcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.593 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.731 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:22:20 compute-0 nova_compute[243452]: 2026-02-28 10:22:20.849 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3014086928' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.441 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.448 243456 DEBUG nova.compute.provider_tree [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.462 243456 DEBUG nova.scheduler.client.report [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.493 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.494 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:22:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3014086928' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.550 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.551 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.585 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.603 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.693 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.694 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.695 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating image(s)
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.716 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.739 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.762 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.766 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.826 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.827 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.827 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.828 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.848 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.854 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.924 243456 DEBUG nova.policy [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:22:21 compute-0 nova_compute[243452]: 2026-02-28 10:22:21.993 243456 DEBUG nova.network.neutron [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.011 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.012 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance network_info: |[{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.013 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.014 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Refreshing network info cache for port 8356f577-07af-4575-b9ba-e2764b155dcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.016 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start _get_guest_xml network_info=[{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.025 243456 WARNING nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.040 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.042 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.045 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.045 243456 DEBUG nova.virt.libvirt.host [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.046 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.047 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.048 243456 DEBUG nova.virt.hardware [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.052 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.111 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.203 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.281 243456 DEBUG nova.objects.instance [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.303 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Ensure instance console log exists: /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.304 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 10:22:22 compute-0 ceph-mon[76304]: pgmap v1708: 305 pgs: 305 active+clean; 272 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Feb 28 10:22:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333438576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.605 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.631 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.637 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:22 compute-0 nova_compute[243452]: 2026-02-28 10:22:22.787 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Successfully created port: 53819bfb-ebe3-4956-8f91-805dd04b5954 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:22:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846463003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.241 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.243 243456 DEBUG nova.virt.libvirt.vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:17Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.243 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.244 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.246 243456 DEBUG nova.objects.instance [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.259 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <uuid>3a425770-67d6-411f-9586-1977cbc678ed</uuid>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <name>instance-00000069</name>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208</nova:name>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:22:22</nova:creationTime>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <nova:port uuid="8356f577-07af-4575-b9ba-e2764b155dcc">
Feb 28 10:22:23 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <system>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="serial">3a425770-67d6-411f-9586-1977cbc678ed</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="uuid">3a425770-67d6-411f-9586-1977cbc678ed</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </system>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <os>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </os>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <features>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </features>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/3a425770-67d6-411f-9586-1977cbc678ed_disk">
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/3a425770-67d6-411f-9586-1977cbc678ed_disk.config">
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bc:ac:78"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <target dev="tap8356f577-07"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/console.log" append="off"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <video>
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </video>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:22:23 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:22:23 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:22:23 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:22:23 compute-0 nova_compute[243452]: </domain>
Feb 28 10:22:23 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Preparing to wait for external event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.260 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.261 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.261 243456 DEBUG nova.virt.libvirt.vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:17Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.262 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.262 243456 DEBUG nova.network.os_vif_util [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.263 243456 DEBUG os_vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.264 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.264 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.268 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8356f577-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.269 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8356f577-07, col_values=(('external_ids', {'iface-id': '8356f577-07af-4575-b9ba-e2764b155dcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:ac:78', 'vm-uuid': '3a425770-67d6-411f-9586-1977cbc678ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:23 compute-0 NetworkManager[49805]: <info>  [1772274143.2725] manager: (tap8356f577-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.278 243456 INFO os_vif [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07')
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.329 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:bc:ac:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.330 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Using config drive
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.351 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3333438576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1846463003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.634 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Successfully updated port: 53819bfb-ebe3-4956-8f91-805dd04b5954 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.652 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.652 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.653 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:22:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:23 compute-0 nova_compute[243452]: 2026-02-28 10:22:23.969 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.003 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.081 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Creating config drive at /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.089 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdqozpj7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.177 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updated VIF entry in instance network info cache for port 8356f577-07af-4575-b9ba-e2764b155dcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.178 243456 DEBUG nova.network.neutron [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [{"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.201 243456 DEBUG oslo_concurrency.lockutils [req-bf056df2-a93e-47c4-bed6-3231c16ca438 req-013ccc78-117c-49b4-a58b-43e0cefa4c30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-3a425770-67d6-411f-9586-1977cbc678ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.239 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdqozpj7" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.284 243456 DEBUG nova.storage.rbd_utils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 3a425770-67d6-411f-9586-1977cbc678ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.291 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config 3a425770-67d6-411f-9586-1977cbc678ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.376 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 302 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.445 243456 DEBUG oslo_concurrency.processutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config 3a425770-67d6-411f-9586-1977cbc678ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.446 243456 INFO nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deleting local config drive /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed/disk.config because it was imported into RBD.
Feb 28 10:22:24 compute-0 kernel: tap8356f577-07: entered promiscuous mode
Feb 28 10:22:24 compute-0 NetworkManager[49805]: <info>  [1772274144.4889] manager: (tap8356f577-07): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Feb 28 10:22:24 compute-0 ovn_controller[146846]: 2026-02-28T10:22:24Z|01073|binding|INFO|Claiming lport 8356f577-07af-4575-b9ba-e2764b155dcc for this chassis.
Feb 28 10:22:24 compute-0 ovn_controller[146846]: 2026-02-28T10:22:24Z|01074|binding|INFO|8356f577-07af-4575-b9ba-e2764b155dcc: Claiming fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.498 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:ac:78 10.100.0.14'], port_security=['fa:16:3e:bc:ac:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a425770-67d6-411f-9586-1977cbc678ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8356f577-07af-4575-b9ba-e2764b155dcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:24 compute-0 ovn_controller[146846]: 2026-02-28T10:22:24Z|01075|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc ovn-installed in OVS
Feb 28 10:22:24 compute-0 ovn_controller[146846]: 2026-02-28T10:22:24Z|01076|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc up in Southbound
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.500 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8356f577-07af-4575-b9ba-e2764b155dcc in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a bound to our chassis
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.502 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:24 compute-0 systemd-udevd[335329]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.521 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f586dfd-a902-42c4-9397-b88e6cee832e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ceph-mon[76304]: pgmap v1709: 305 pgs: 305 active+clean; 302 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Feb 28 10:22:24 compute-0 systemd-machined[209480]: New machine qemu-135-instance-00000069.
Feb 28 10:22:24 compute-0 NetworkManager[49805]: <info>  [1772274144.5341] device (tap8356f577-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:22:24 compute-0 NetworkManager[49805]: <info>  [1772274144.5348] device (tap8356f577-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:22:24 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-00000069.
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.551 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61d2ff74-159a-47e3-bddd-51d878d7f4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.555 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bb355185-1658-4dd6-9875-e527ba81c781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.580 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eea5eebf-fab3-4984-95f1-59be5d9437b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.603 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3336b1ea-b1c2-48ca-8245-4907b1995e6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335342, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cdb91b-5aea-41c8-9954-1c4e6d4c3216]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561746, 'tstamp': 561746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335344, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561749, 'tstamp': 561749}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335344, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.624 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.627 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:24.628 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.663 243456 DEBUG nova.compute.manager [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-changed-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.664 243456 DEBUG nova.compute.manager [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Refreshing instance network info cache due to event network-changed-53819bfb-ebe3-4956-8f91-805dd04b5954. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.664 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.774 243456 DEBUG nova.compute.manager [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.775 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.776 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.778 243456 DEBUG oslo_concurrency.lockutils [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:24 compute-0 nova_compute[243452]: 2026-02-28 10:22:24.779 243456 DEBUG nova.compute.manager [req-c70a3a4d-de1d-4f8e-9b09-1ce5ea5fed3a req-6c6df1e6-5349-4cf7-9e60-5b2c46e01914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Processing event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.314 243456 DEBUG nova.network.neutron [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.342 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.343 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance network_info: |[{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.345 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.345 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Refreshing network info cache for port 53819bfb-ebe3-4956-8f91-805dd04b5954 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.348 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start _get_guest_xml network_info=[{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.355 243456 WARNING nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.364 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.366 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.378 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.378 243456 DEBUG nova.virt.libvirt.host [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.380 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.380 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.381 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.382 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.383 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.383 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.384 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.384 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.385 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.385 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.386 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.387 243456 DEBUG nova.virt.hardware [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.392 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.460 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.461 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.461376, 3a425770-67d6-411f-9586-1977cbc678ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Started (Lifecycle Event)
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.471 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.476 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance spawned successfully.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.477 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.491 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.506 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.512 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.512 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.513 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.514 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.515 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.516 243456 DEBUG nova.virt.libvirt.driver [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.553 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.4615405, 3a425770-67d6-411f-9586-1977cbc678ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.554 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Paused (Lifecycle Event)
Feb 28 10:22:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:25.583 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.591 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.595 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274145.4706702, 3a425770-67d6-411f-9586-1977cbc678ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.595 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Resumed (Lifecycle Event)
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.602 243456 INFO nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 7.77 seconds to spawn the instance on the hypervisor.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.603 243456 DEBUG nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.612 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.617 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.645 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.693 243456 INFO nova.compute.manager [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 8.91 seconds to build instance.
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.714 243456 DEBUG oslo_concurrency.lockutils [None req-e9a22dfc-2c8d-4f51-a66f-1fe22b7ac646 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198608139' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:25 compute-0 nova_compute[243452]: 2026-02-28 10:22:25.971 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.004 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2198608139' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.012 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Feb 28 10:22:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565571775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.567 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.569 243456 DEBUG nova.virt.libvirt.vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=No
ne,updated_at=2026-02-28T10:22:21Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.570 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.570 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.572 243456 DEBUG nova.objects.instance [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <uuid>0d4ce277-1bbb-4926-a7ee-30f5df57fff9</uuid>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <name>instance-0000006a</name>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:name>tempest-₡-168997413</nova:name>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:22:25</nova:creationTime>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <nova:port uuid="53819bfb-ebe3-4956-8f91-805dd04b5954">
Feb 28 10:22:26 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <system>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="serial">0d4ce277-1bbb-4926-a7ee-30f5df57fff9</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="uuid">0d4ce277-1bbb-4926-a7ee-30f5df57fff9</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </system>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <os>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </os>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <features>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </features>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk">
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config">
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:26 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e5:f2:95"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <target dev="tap53819bfb-eb"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/console.log" append="off"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <video>
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </video>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:22:26 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:22:26 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:22:26 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:22:26 compute-0 nova_compute[243452]: </domain>
Feb 28 10:22:26 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Preparing to wait for external event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.596 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.597 243456 DEBUG nova.virt.libvirt.vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:21Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.598 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.598 243456 DEBUG nova.network.os_vif_util [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.599 243456 DEBUG os_vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.600 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53819bfb-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53819bfb-eb, col_values=(('external_ids', {'iface-id': '53819bfb-ebe3-4956-8f91-805dd04b5954', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:f2:95', 'vm-uuid': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:26 compute-0 NetworkManager[49805]: <info>  [1772274146.6070] manager: (tap53819bfb-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.614 243456 INFO os_vif [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb')
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.667 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.668 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.668 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:e5:f2:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.669 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Using config drive
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.691 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.980 243456 DEBUG nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.981 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.982 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.983 243456 DEBUG oslo_concurrency.lockutils [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.984 243456 DEBUG nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] No waiting events found dispatching network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:26 compute-0 nova_compute[243452]: 2026-02-28 10:22:26.985 243456 WARNING nova.compute.manager [req-5aea5c56-69d5-4549-883d-e777203d9ef1 req-f3c14ba5-5563-4e00-9e50-6f365eda5f9c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received unexpected event network-vif-plugged-8356f577-07af-4575-b9ba-e2764b155dcc for instance with vm_state active and task_state None.
Feb 28 10:22:27 compute-0 ceph-mon[76304]: pgmap v1710: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Feb 28 10:22:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2565571775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.236 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Creating config drive at /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.240 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpofek_agn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.387 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpofek_agn" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.422 243456 DEBUG nova.storage.rbd_utils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.428 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.592 243456 DEBUG oslo_concurrency.processutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config 0d4ce277-1bbb-4926-a7ee-30f5df57fff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.593 243456 INFO nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deleting local config drive /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9/disk.config because it was imported into RBD.
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.6398] manager: (tap53819bfb-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Feb 28 10:22:27 compute-0 kernel: tap53819bfb-eb: entered promiscuous mode
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:27 compute-0 ovn_controller[146846]: 2026-02-28T10:22:27Z|01077|binding|INFO|Claiming lport 53819bfb-ebe3-4956-8f91-805dd04b5954 for this chassis.
Feb 28 10:22:27 compute-0 ovn_controller[146846]: 2026-02-28T10:22:27Z|01078|binding|INFO|53819bfb-ebe3-4956-8f91-805dd04b5954: Claiming fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.652 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:f2:95 10.100.0.9'], port_security=['fa:16:3e:e5:f2:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53819bfb-ebe3-4956-8f91-805dd04b5954) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.654 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53819bfb-ebe3-4956-8f91-805dd04b5954 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.656 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:22:27 compute-0 systemd-machined[209480]: New machine qemu-136-instance-0000006a.
Feb 28 10:22:27 compute-0 ovn_controller[146846]: 2026-02-28T10:22:27Z|01079|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 ovn-installed in OVS
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.672 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c021d8cb-2c47-4e3b-aec8-4942926ceac2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.673 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ec4804c-41 in ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:22:27 compute-0 ovn_controller[146846]: 2026-02-28T10:22:27Z|01080|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 up in Southbound
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.676 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ec4804c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.676 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a8e702-481d-43c0-acb6-1d31a4d40140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9714133c-1ebe-4e9c-a044-07adfcd6e2ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006a.
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:27 compute-0 systemd-udevd[335525]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.694 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e5bccdfe-db5c-40e1-a112-0d3043421ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.7004] device (tap53819bfb-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.7011] device (tap53819bfb-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45eeb1b7-feae-41d0-bf1b-db106cd169f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.741 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fde387-3dc6-4f84-b6a1-727e4fb1d1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.7495] manager: (tap7ec4804c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.750 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee98d370-f922-4440-8487-c74bcfe3b2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.779 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[38861613-eb0b-4320-bc1a-50212543c9d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.784 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b8eeff6e-0871-4abd-8285-f35b2aec5bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.8050] device (tap7ec4804c-40): carrier: link connected
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.813 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e704911c-0806-4c00-b3a5-3831d6fac8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.830 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5748bb7c-870a-482b-8c46-a6bef4a33b66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335556, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b74341-2043-4833-b300-11d88269df5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:7196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565509, 'tstamp': 565509}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335557, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[080169ba-939e-48fa-a2b3-c5abe456dba3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335558, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[25654d04-ae03-46fe-ba48-23777eeabae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e7f248-b57c-4464-8483-15d6a91da36c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.957 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:27 compute-0 kernel: tap7ec4804c-40: entered promiscuous mode
Feb 28 10:22:27 compute-0 NetworkManager[49805]: <info>  [1772274147.9603] manager: (tap7ec4804c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.963 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:27 compute-0 ovn_controller[146846]: 2026-02-28T10:22:27Z|01081|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.966 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbe8d73-1298-4eaf-94c8-6ee5575984b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.967 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/7ec4804c-4a13-485a-9300-db6edf74473b.pid.haproxy
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:22:27 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:27.969 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'env', 'PROCESS_TAG=haproxy-7ec4804c-4a13-485a-9300-db6edf74473b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ec4804c-4a13-485a-9300-db6edf74473b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:22:27 compute-0 nova_compute[243452]: 2026-02-28 10:22:27.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.043 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274148.0424592, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.043 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Started (Lifecycle Event)
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.094 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.099 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274148.0427306, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.099 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Paused (Lifecycle Event)
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.133 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.136 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.159 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:28 compute-0 sudo[335625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:22:28 compute-0 sudo[335625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:28 compute-0 sudo[335625]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:28 compute-0 podman[335648]: 2026-02-28 10:22:28.386659769 +0000 UTC m=+0.049838930 container create bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:22:28 compute-0 sudo[335667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:22:28 compute-0 sudo[335667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Feb 28 10:22:28 compute-0 systemd[1]: Started libpod-conmon-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope.
Feb 28 10:22:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86c5f2ef6c8614823a87cf49a92bf5ddc50b5f72de34d838a1adf35e0b322e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:28 compute-0 podman[335648]: 2026-02-28 10:22:28.452385076 +0000 UTC m=+0.115564257 container init bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:22:28 compute-0 podman[335648]: 2026-02-28 10:22:28.361643117 +0000 UTC m=+0.024822298 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:22:28 compute-0 podman[335648]: 2026-02-28 10:22:28.456658919 +0000 UTC m=+0.119838080 container start bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:22:28 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : New worker (335701) forked
Feb 28 10:22:28 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : Loading success.
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:28 compute-0 sudo[335667]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:22:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:22:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.968 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated VIF entry in instance network info cache for port 53819bfb-ebe3-4956-8f91-805dd04b5954. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.969 243456 DEBUG nova.network.neutron [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:28 compute-0 sudo[335740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:22:28 compute-0 sudo[335740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:28 compute-0 sudo[335740]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:28 compute-0 nova_compute[243452]: 2026-02-28 10:22:28.987 243456 DEBUG oslo_concurrency.lockutils [req-ac4988b1-6c77-45d5-9835-6588fc4c25be req-0a3f93d5-de65-4eb1-95e2-46bd1c78dc29 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:29 compute-0 sudo[335765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:22:29 compute-0 sudo[335765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.098 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.098 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.099 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Processing event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.100 243456 DEBUG oslo_concurrency.lockutils [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 DEBUG nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 WARNING nova.compute.manager [req-1050af38-203a-4dec-b50b-a5d252c49520 req-5103447f-381d-47cc-8401-bb455c0c4f47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received unexpected event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with vm_state building and task_state spawning.
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.101 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:22:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:22:29
Feb 28 10:22:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:22:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:22:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Feb 28 10:22:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.107 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274149.1070514, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.108 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Resumed (Lifecycle Event)
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.110 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.123 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.124 243456 INFO nova.virt.libvirt.driver [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance spawned successfully.
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.124 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.128 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.154 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.164 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.165 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.165 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.166 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.166 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.167 243456 DEBUG nova.virt.libvirt.driver [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.236 243456 INFO nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 7.54 seconds to spawn the instance on the hypervisor.
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.237 243456 DEBUG nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.292193785 +0000 UTC m=+0.034742554 container create 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.306 243456 INFO nova.compute.manager [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 8.76 seconds to build instance.
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.326 243456 DEBUG oslo_concurrency.lockutils [None req-c5e9742c-32c7-445c-baf4-f046bb1c544a 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:29 compute-0 systemd[1]: Started libpod-conmon-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope.
Feb 28 10:22:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.275290277 +0000 UTC m=+0.017839066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.37725322 +0000 UTC m=+0.119801999 container init 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.382280115 +0000 UTC m=+0.124828874 container start 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.385318593 +0000 UTC m=+0.127867362 container attach 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:22:29 compute-0 focused_leavitt[335817]: 167 167
Feb 28 10:22:29 compute-0 systemd[1]: libpod-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope: Deactivated successfully.
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.387559917 +0000 UTC m=+0.130108716 container died 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3973c938799c586d1d75d102f0da4b0db114429b12ceb7dfcd3f69947a623146-merged.mount: Deactivated successfully.
Feb 28 10:22:29 compute-0 podman[335801]: 2026-02-28 10:22:29.421534358 +0000 UTC m=+0.164083127 container remove 6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:22:29 compute-0 systemd[1]: libpod-conmon-6e3b6f5f27c52550b3ba6f32745a9deae602b2723ca93c9943c7c772cd683e93.scope: Deactivated successfully.
Feb 28 10:22:29 compute-0 ceph-mon[76304]: pgmap v1711: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.6 MiB/s wr, 68 op/s
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:22:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.584 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.586 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.587 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:22:29 compute-0 nova_compute[243452]: 2026-02-28 10:22:29.587 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:29 compute-0 podman[335840]: 2026-02-28 10:22:29.59172684 +0000 UTC m=+0.040195071 container create 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:22:29 compute-0 systemd[1]: Started libpod-conmon-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope.
Feb 28 10:22:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:29 compute-0 podman[335840]: 2026-02-28 10:22:29.572869656 +0000 UTC m=+0.021337887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:29 compute-0 podman[335840]: 2026-02-28 10:22:29.680194734 +0000 UTC m=+0.128662955 container init 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 10:22:29 compute-0 podman[335840]: 2026-02-28 10:22:29.691274664 +0000 UTC m=+0.139742895 container start 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:22:29 compute-0 podman[335840]: 2026-02-28 10:22:29.694409504 +0000 UTC m=+0.142877715 container attach 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:30 compute-0 unruffled_bhaskara[335856]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:22:30 compute-0 unruffled_bhaskara[335856]: --> All data devices are unavailable
Feb 28 10:22:30 compute-0 systemd[1]: libpod-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope: Deactivated successfully.
Feb 28 10:22:30 compute-0 podman[335840]: 2026-02-28 10:22:30.201612034 +0000 UTC m=+0.650080265 container died 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a3b2151004483a65434c4478d6f048b5faa90fcfa8b491bb7d2fb64211a19f1-merged.mount: Deactivated successfully.
Feb 28 10:22:30 compute-0 podman[335840]: 2026-02-28 10:22:30.245627564 +0000 UTC m=+0.694095785 container remove 7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:22:30 compute-0 sudo[335765]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:30 compute-0 systemd[1]: libpod-conmon-7c7d8a7e65f27bf2d495ea63d0af8132cb1fd01ad6abfd50185b04bc962a2aa0.scope: Deactivated successfully.
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:22:30 compute-0 sudo[335885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:22:30 compute-0 sudo[335885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:30 compute-0 sudo[335885]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 28 10:22:30 compute-0 sudo[335910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:22:30 compute-0 sudo[335910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:30 compute-0 ceph-mon[76304]: pgmap v1712: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:22:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.770885175 +0000 UTC m=+0.063203095 container create 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:22:30 compute-0 systemd[1]: Started libpod-conmon-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope.
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.741731714 +0000 UTC m=+0.034049684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.858563516 +0000 UTC m=+0.150881426 container init 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.869537223 +0000 UTC m=+0.161855113 container start 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:22:30 compute-0 funny_wu[335962]: 167 167
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.874274679 +0000 UTC m=+0.166592589 container attach 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:22:30 compute-0 systemd[1]: libpod-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope: Deactivated successfully.
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.876894075 +0000 UTC m=+0.169212005 container died 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f124b158eb271d4de62e4aa5cf6b489e62fc75853436be914500f15232f0be1-merged.mount: Deactivated successfully.
Feb 28 10:22:30 compute-0 podman[335946]: 2026-02-28 10:22:30.926240509 +0000 UTC m=+0.218558439 container remove 27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:30 compute-0 systemd[1]: libpod-conmon-27d770c256907dd0715f3be29786d0fdf3906cbe18909ecfbda659f071455e48.scope: Deactivated successfully.
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.083584431 +0000 UTC m=+0.040984894 container create 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:22:31 compute-0 systemd[1]: Started libpod-conmon-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope.
Feb 28 10:22:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.064275034 +0000 UTC m=+0.021675507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.177627295 +0000 UTC m=+0.135027818 container init 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.186682607 +0000 UTC m=+0.144083060 container start 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.193252256 +0000 UTC m=+0.150652799 container attach 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.264 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.288 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.355 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.355 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]: {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     "0": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "devices": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "/dev/loop3"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             ],
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_name": "ceph_lv0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_size": "21470642176",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "name": "ceph_lv0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "tags": {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_name": "ceph",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.crush_device_class": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.encrypted": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.objectstore": "bluestore",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_id": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.vdo": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.with_tpm": "0"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             },
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "vg_name": "ceph_vg0"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         }
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     ],
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     "1": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "devices": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "/dev/loop4"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             ],
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_name": "ceph_lv1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_size": "21470642176",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "name": "ceph_lv1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "tags": {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_name": "ceph",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.crush_device_class": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.encrypted": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.objectstore": "bluestore",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_id": "1",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.vdo": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.with_tpm": "0"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             },
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "vg_name": "ceph_vg1"
Feb 28 10:22:31 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         }
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     ],
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     "2": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "devices": [
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "/dev/loop5"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             ],
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_name": "ceph_lv2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_size": "21470642176",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "name": "ceph_lv2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "tags": {
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.cluster_name": "ceph",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.crush_device_class": "",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.encrypted": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.objectstore": "bluestore",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osd_id": "2",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.vdo": "0",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:                 "ceph.with_tpm": "0"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             },
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "type": "block",
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:             "vg_name": "ceph_vg2"
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:         }
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]:     ]
Feb 28 10:22:31 compute-0 unruffled_mendeleev[336001]: }
Feb 28 10:22:31 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:22:31 compute-0 systemd[1]: libpod-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope: Deactivated successfully.
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.519999097 +0000 UTC m=+0.477399570 container died 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-732c93cee66603d6d89625c60da3d87c806f97c993c21461f4385ef76534fed4-merged.mount: Deactivated successfully.
Feb 28 10:22:31 compute-0 podman[335985]: 2026-02-28 10:22:31.559165438 +0000 UTC m=+0.516565891 container remove 84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:22:31 compute-0 systemd[1]: libpod-conmon-84c20c69d94332d4388891ea04031dee1c8f1b1d6f460c052be8a01e1abe088d.scope: Deactivated successfully.
Feb 28 10:22:31 compute-0 sudo[335910]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:31 compute-0 sudo[336042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:22:31 compute-0 sudo[336042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:31 compute-0 sudo[336042]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:31 compute-0 sudo[336067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:22:31 compute-0 sudo[336067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433513059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:31 compute-0 nova_compute[243452]: 2026-02-28 10:22:31.966 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3433513059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:32 compute-0 podman[336105]: 2026-02-28 10:22:32.061825997 +0000 UTC m=+0.054681300 container create 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.076 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.080 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.081 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.085 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.085 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:22:32 compute-0 systemd[1]: Started libpod-conmon-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope.
Feb 28 10:22:32 compute-0 podman[336105]: 2026-02-28 10:22:32.03354553 +0000 UTC m=+0.026400853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:32 compute-0 podman[336105]: 2026-02-28 10:22:32.159398073 +0000 UTC m=+0.152253726 container init 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 10:22:32 compute-0 podman[336105]: 2026-02-28 10:22:32.167299121 +0000 UTC m=+0.160154414 container start 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:32 compute-0 podman[336105]: 2026-02-28 10:22:32.171188403 +0000 UTC m=+0.164043736 container attach 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:22:32 compute-0 magical_hamilton[336122]: 167 167
Feb 28 10:22:32 compute-0 systemd[1]: libpod-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope: Deactivated successfully.
Feb 28 10:22:32 compute-0 podman[336127]: 2026-02-28 10:22:32.246585699 +0000 UTC m=+0.051426865 container died 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:22:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8da13ef9a709400e21a50f23ac7326d4078667df6931786848e5b9073e9876b9-merged.mount: Deactivated successfully.
Feb 28 10:22:32 compute-0 podman[336127]: 2026-02-28 10:22:32.300440704 +0000 UTC m=+0.105281830 container remove 94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:22:32 compute-0 systemd[1]: libpod-conmon-94a5c8c4160e42e8f2b8e832f24516f17b3c2bbd4f8e04142bc691538eb464ba.scope: Deactivated successfully.
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.350 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3200MB free_disk=59.900322584435344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.495 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0bafc3af-eadf-4d97-9acf-026c531362c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 3a425770-67d6-411f-9586-1977cbc678ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:22:32 compute-0 podman[336149]: 2026-02-28 10:22:32.505219625 +0000 UTC m=+0.047873343 container create ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:22:32 compute-0 systemd[1]: Started libpod-conmon-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope.
Feb 28 10:22:32 compute-0 podman[336149]: 2026-02-28 10:22:32.482981213 +0000 UTC m=+0.025634951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:22:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:22:32 compute-0 podman[336149]: 2026-02-28 10:22:32.615391375 +0000 UTC m=+0.158045073 container init ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:22:32 compute-0 podman[336149]: 2026-02-28 10:22:32.627154004 +0000 UTC m=+0.169807702 container start ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:22:32 compute-0 podman[336149]: 2026-02-28 10:22:32.630405658 +0000 UTC m=+0.173059466 container attach ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:22:32 compute-0 nova_compute[243452]: 2026-02-28 10:22:32.764 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:32 compute-0 ceph-mon[76304]: pgmap v1713: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Feb 28 10:22:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317332672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:33 compute-0 nova_compute[243452]: 2026-02-28 10:22:33.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:33 compute-0 nova_compute[243452]: 2026-02-28 10:22:33.351 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:33 compute-0 nova_compute[243452]: 2026-02-28 10:22:33.366 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:33 compute-0 nova_compute[243452]: 2026-02-28 10:22:33.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:22:33 compute-0 nova_compute[243452]: 2026-02-28 10:22:33.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:33 compute-0 lvm[336263]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:22:33 compute-0 lvm[336266]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:22:33 compute-0 lvm[336266]: VG ceph_vg1 finished
Feb 28 10:22:33 compute-0 lvm[336263]: VG ceph_vg0 finished
Feb 28 10:22:33 compute-0 lvm[336268]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:22:33 compute-0 lvm[336268]: VG ceph_vg2 finished
Feb 28 10:22:33 compute-0 fervent_swanson[336165]: {}
Feb 28 10:22:33 compute-0 systemd[1]: libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Deactivated successfully.
Feb 28 10:22:33 compute-0 systemd[1]: libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Consumed 1.328s CPU time.
Feb 28 10:22:33 compute-0 conmon[336165]: conmon ae1f5ba2a510d4b67933 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope/container/memory.events
Feb 28 10:22:33 compute-0 podman[336149]: 2026-02-28 10:22:33.603311598 +0000 UTC m=+1.145965336 container died ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c1f91e8d74c31b75cff2b307a983d6ae134c71cee8be2f0f3c161b8e670859b-merged.mount: Deactivated successfully.
Feb 28 10:22:33 compute-0 podman[336149]: 2026-02-28 10:22:33.647142924 +0000 UTC m=+1.189796642 container remove ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:22:33 compute-0 systemd[1]: libpod-conmon-ae1f5ba2a510d4b6793328eccbc65851ff1aea2efb10cef957f8e4d6e638ac45.scope: Deactivated successfully.
Feb 28 10:22:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:33 compute-0 sudo[336067]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:22:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:22:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:33 compute-0 sudo[336282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:22:33 compute-0 sudo[336282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:22:33 compute-0 sudo[336282]: pam_unix(sudo:session): session closed for user root
Feb 28 10:22:34 compute-0 nova_compute[243452]: 2026-02-28 10:22:34.009 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2317332672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:22:34 compute-0 nova_compute[243452]: 2026-02-28 10:22:34.385 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:22:34 compute-0 nova_compute[243452]: 2026-02-28 10:22:34.386 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:22:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 166 op/s
Feb 28 10:22:35 compute-0 ceph-mon[76304]: pgmap v1714: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.0 MiB/s wr, 166 op/s
Feb 28 10:22:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 545 KiB/s wr, 172 op/s
Feb 28 10:22:36 compute-0 nova_compute[243452]: 2026-02-28 10:22:36.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.446 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.446 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.464 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:22:37 compute-0 ceph-mon[76304]: pgmap v1715: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 545 KiB/s wr, 172 op/s
Feb 28 10:22:37 compute-0 ovn_controller[146846]: 2026-02-28T10:22:37Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 10:22:37 compute-0 ovn_controller[146846]: 2026-02-28T10:22:37Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:ac:78 10.100.0.14
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.569 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.570 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.576 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.577 243456 INFO nova.compute.claims [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:22:37 compute-0 nova_compute[243452]: 2026-02-28 10:22:37.925 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 338 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 841 KiB/s wr, 165 op/s
Feb 28 10:22:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158170004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.472 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.482 243456 DEBUG nova.compute.provider_tree [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:38 compute-0 ceph-mon[76304]: pgmap v1716: 305 pgs: 305 active+clean; 338 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 841 KiB/s wr, 165 op/s
Feb 28 10:22:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/158170004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.508 243456 DEBUG nova.scheduler.client.report [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.538 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.540 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.593 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.593 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.624 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.645 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:22:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.731 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.733 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.734 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating image(s)
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.760 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.789 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.814 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.818 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.886 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.887 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.888 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.888 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.916 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:38 compute-0 nova_compute[243452]: 2026-02-28 10:22:38.920 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.184 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.258 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.362 243456 DEBUG nova.objects.instance [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.377 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.378 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Ensure instance console log exists: /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.378 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.379 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.379 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:39 compute-0 nova_compute[243452]: 2026-02-28 10:22:39.412 243456 DEBUG nova.policy [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:22:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Feb 28 10:22:40 compute-0 ceph-mon[76304]: pgmap v1717: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 206 op/s
Feb 28 10:22:40 compute-0 nova_compute[243452]: 2026-02-28 10:22:40.591 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Successfully created port: 3789f2db-7dec-44ac-93d7-2712307dc094 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:22:40 compute-0 ovn_controller[146846]: 2026-02-28T10:22:40Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 10:22:40 compute-0 ovn_controller[146846]: 2026-02-28T10:22:40Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:f2:95 10.100.0.9
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002142553397157101 of space, bias 1.0, pg target 0.6427660191471304 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024929816364085387 of space, bias 1.0, pg target 0.7478944909225617 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.540373750331308e-07 of space, bias 4.0, pg target 0.0009048448500397569 quantized to 16 (current 16)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:22:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:22:41 compute-0 nova_compute[243452]: 2026-02-28 10:22:41.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:41 compute-0 nova_compute[243452]: 2026-02-28 10:22:41.870 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Successfully updated port: 3789f2db-7dec-44ac-93d7-2712307dc094 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:22:41 compute-0 nova_compute[243452]: 2026-02-28 10:22:41.889 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:41 compute-0 nova_compute[243452]: 2026-02-28 10:22:41.890 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:41 compute-0 nova_compute[243452]: 2026-02-28 10:22:41.890 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:22:42 compute-0 nova_compute[243452]: 2026-02-28 10:22:42.000 243456 DEBUG nova.compute.manager [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-changed-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:42 compute-0 nova_compute[243452]: 2026-02-28 10:22:42.001 243456 DEBUG nova.compute.manager [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Refreshing instance network info cache due to event network-changed-3789f2db-7dec-44ac-93d7-2712307dc094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:22:42 compute-0 nova_compute[243452]: 2026-02-28 10:22:42.001 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:42 compute-0 nova_compute[243452]: 2026-02-28 10:22:42.089 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:22:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 123 op/s
Feb 28 10:22:42 compute-0 ceph-mon[76304]: pgmap v1718: 305 pgs: 305 active+clean; 389 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 123 op/s
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.276 243456 DEBUG nova.network.neutron [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.302 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.303 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance network_info: |[{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.303 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.304 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Refreshing network info cache for port 3789f2db-7dec-44ac-93d7-2712307dc094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.306 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start _get_guest_xml network_info=[{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.312 243456 WARNING nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.322 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.322 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.325 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.326 243456 DEBUG nova.virt.libvirt.host [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.327 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.328 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.329 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.330 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.330 243456 DEBUG nova.virt.hardware [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.333 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378005444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.868 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.899 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:43 compute-0 nova_compute[243452]: 2026-02-28 10:22:43.904 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3378005444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.228 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.228 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "3a425770-67d6-411f-9586-1977cbc678ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.229 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.230 243456 INFO nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Terminating instance
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.231 243456 DEBUG nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:22:44 compute-0 kernel: tap8356f577-07 (unregistering): left promiscuous mode
Feb 28 10:22:44 compute-0 NetworkManager[49805]: <info>  [1772274164.2762] device (tap8356f577-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:22:44 compute-0 ovn_controller[146846]: 2026-02-28T10:22:44Z|01082|binding|INFO|Releasing lport 8356f577-07af-4575-b9ba-e2764b155dcc from this chassis (sb_readonly=0)
Feb 28 10:22:44 compute-0 ovn_controller[146846]: 2026-02-28T10:22:44Z|01083|binding|INFO|Setting lport 8356f577-07af-4575-b9ba-e2764b155dcc down in Southbound
Feb 28 10:22:44 compute-0 ovn_controller[146846]: 2026-02-28T10:22:44Z|01084|binding|INFO|Removing iface tap8356f577-07 ovn-installed in OVS
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.300 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:ac:78 10.100.0.14'], port_security=['fa:16:3e:bc:ac:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3a425770-67d6-411f-9586-1977cbc678ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8356f577-07af-4575-b9ba-e2764b155dcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.304 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8356f577-07af-4575-b9ba-e2764b155dcc in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a unbound from our chassis
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.306 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3451a2ef-e97c-49df-813f-57c35ec0999a
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8fede8-20b7-4fee-a16a-88d2dd3f46f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Deactivated successfully.
Feb 28 10:22:44 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Consumed 12.886s CPU time.
Feb 28 10:22:44 compute-0 systemd-machined[209480]: Machine qemu-135-instance-00000069 terminated.
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[488fc8f6-875c-41f6-8e4a-740274889881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.363 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1901f27a-3a78-4189-bfc3-ad970e689bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.386 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5025843d-5721-42de-b309-7c8e5db8bc3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.400 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a71bff7-8427-4fc3-a046-5442448162ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3451a2ef-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:c2:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561733, 'reachable_time': 16789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336567, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.414 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[905f56c1-3232-4340-b471-e231723bdb9c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561746, 'tstamp': 561746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336568, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3451a2ef-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561749, 'tstamp': 561749}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336568, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.416 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3451a2ef-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3451a2ef-e0, col_values=(('external_ids', {'iface-id': '0715e649-02a0-4a60-88ea-663bf03161ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:44.423 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 421 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Feb 28 10:22:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:22:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225991063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.444 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.445 243456 DEBUG nova.virt.libvirt.vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:38Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.446 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.447 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.448 243456 DEBUG nova.objects.instance [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.465 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <uuid>1c621cf0-85bd-40e0-8c9c-467e5da2e21b</uuid>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <name>instance-0000006b</name>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-828230938</nova:name>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:22:43</nova:creationTime>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <nova:port uuid="3789f2db-7dec-44ac-93d7-2712307dc094">
Feb 28 10:22:44 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <system>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="serial">1c621cf0-85bd-40e0-8c9c-467e5da2e21b</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="uuid">1c621cf0-85bd-40e0-8c9c-467e5da2e21b</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </system>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <os>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </os>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <features>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </features>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk">
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config">
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:22:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:d7:89:88"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <target dev="tap3789f2db-7d"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/console.log" append="off"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <video>
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </video>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:22:44 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:22:44 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:22:44 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:22:44 compute-0 nova_compute[243452]: </domain>
Feb 28 10:22:44 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.466 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Preparing to wait for external event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.467 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.468 243456 DEBUG nova.virt.libvirt.vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:38Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.468 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG nova.network.os_vif_util [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG os_vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.474 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3789f2db-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.475 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3789f2db-7d, col_values=(('external_ids', {'iface-id': '3789f2db-7dec-44ac-93d7-2712307dc094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:89:88', 'vm-uuid': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.476 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 NetworkManager[49805]: <info>  [1772274164.4773] manager: (tap3789f2db-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.478 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.479 243456 INFO nova.virt.libvirt.driver [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Instance destroyed successfully.
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.479 243456 DEBUG nova.objects.instance [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 3a425770-67d6-411f-9586-1977cbc678ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.486 243456 INFO os_vif [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d')
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.494 243456 DEBUG nova.virt.libvirt.vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-0-1517409208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=105,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-xny8xf69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:25Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=3a425770-67d6-411f-9586-1977cbc678ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.494 243456 DEBUG nova.network.os_vif_util [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8356f577-07af-4575-b9ba-e2764b155dcc", "address": "fa:16:3e:bc:ac:78", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8356f577-07", "ovs_interfaceid": "8356f577-07af-4575-b9ba-e2764b155dcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.495 243456 DEBUG nova.network.os_vif_util [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.496 243456 DEBUG os_vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.499 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8356f577-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.507 243456 INFO os_vif [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:ac:78,bridge_name='br-int',has_traffic_filtering=True,id=8356f577-07af-4575-b9ba-e2764b155dcc,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8356f577-07')
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.556 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.558 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.558 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:d7:89:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.559 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Using config drive
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.589 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.780 243456 INFO nova.virt.libvirt.driver [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deleting instance files /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed_del
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.781 243456 INFO nova.virt.libvirt.driver [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deletion of /var/lib/nova/instances/3a425770-67d6-411f-9586-1977cbc678ed_del complete
Feb 28 10:22:44 compute-0 ceph-mon[76304]: pgmap v1719: 305 pgs: 305 active+clean; 421 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Feb 28 10:22:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/225991063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.988 243456 INFO nova.compute.manager [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.989 243456 DEBUG oslo.service.loopingcall [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.989 243456 DEBUG nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:22:44 compute-0 nova_compute[243452]: 2026-02-28 10:22:44.990 243456 DEBUG nova.network.neutron [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.023 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updated VIF entry in instance network info cache for port 3789f2db-7dec-44ac-93d7-2712307dc094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.023 243456 DEBUG nova.network.neutron [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [{"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.039 243456 DEBUG oslo_concurrency.lockutils [req-592d1763-8fcb-41f8-9dae-5f8974ddb545 req-29729949-ace1-4035-8b04-76ca2aa80a5e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1c621cf0-85bd-40e0-8c9c-467e5da2e21b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:22:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:22:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:22:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.681 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Creating config drive at /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.689 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnny66t7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.833 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnny66t7b" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.872 243456 DEBUG nova.storage.rbd_utils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:45 compute-0 nova_compute[243452]: 2026-02-28 10:22:45.877 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:22:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3463702608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.016 243456 DEBUG oslo_concurrency.processutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config 1c621cf0-85bd-40e0-8c9c-467e5da2e21b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.018 243456 INFO nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deleting local config drive /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b/disk.config because it was imported into RBD.
Feb 28 10:22:46 compute-0 kernel: tap3789f2db-7d: entered promiscuous mode
Feb 28 10:22:46 compute-0 NetworkManager[49805]: <info>  [1772274166.0681] manager: (tap3789f2db-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Feb 28 10:22:46 compute-0 systemd-udevd[336558]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:22:46 compute-0 ovn_controller[146846]: 2026-02-28T10:22:46Z|01085|binding|INFO|Claiming lport 3789f2db-7dec-44ac-93d7-2712307dc094 for this chassis.
Feb 28 10:22:46 compute-0 ovn_controller[146846]: 2026-02-28T10:22:46Z|01086|binding|INFO|3789f2db-7dec-44ac-93d7-2712307dc094: Claiming fa:16:3e:d7:89:88 10.100.0.10
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.078 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:89:88 10.100.0.10'], port_security=['fa:16:3e:d7:89:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3789f2db-7dec-44ac-93d7-2712307dc094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.080 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3789f2db-7dec-44ac-93d7-2712307dc094 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:22:46 compute-0 ovn_controller[146846]: 2026-02-28T10:22:46Z|01087|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 ovn-installed in OVS
Feb 28 10:22:46 compute-0 ovn_controller[146846]: 2026-02-28T10:22:46Z|01088|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 up in Southbound
Feb 28 10:22:46 compute-0 NetworkManager[49805]: <info>  [1772274166.0865] device (tap3789f2db-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.086 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:46 compute-0 NetworkManager[49805]: <info>  [1772274166.0891] device (tap3789f2db-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.108 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66538b13-a6cd-4301-8298-0b93b8245a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 systemd-machined[209480]: New machine qemu-137-instance-0000006b.
Feb 28 10:22:46 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006b.
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.141 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[436f89fe-5ea9-42c3-8fb2-55bb25a18bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.142 243456 DEBUG nova.network.neutron [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.145 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fb796d7e-552d-4dcb-819b-cf7b1eb6d187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.163 243456 INFO nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Took 1.17 seconds to deallocate network for instance.
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.180 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f58adbaf-55d3-41f8-afa1-31399e9061ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a185a71a-21b7-4204-bb49-6bd4d1c249a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336688, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.212 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94039fda-ae90-40ef-83c0-308babc9b7f4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336690, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336690, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.220 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:46.221 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.231 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.231 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.352 243456 DEBUG oslo_concurrency.processutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.404 243456 DEBUG nova.compute.manager [req-10ab324e-cc10-462e-a99a-7f40675746d0 req-46312ded-84e1-4cab-8c20-0183949abf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Received event network-vif-deleted-8356f577-07af-4575-b9ba-e2764b155dcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 417 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 908 KiB/s rd, 6.0 MiB/s wr, 167 op/s
Feb 28 10:22:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036863761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.893 243456 DEBUG oslo_concurrency.processutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.902 243456 DEBUG nova.compute.provider_tree [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.919 243456 DEBUG nova.scheduler.client.report [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:46 compute-0 ceph-mon[76304]: pgmap v1720: 305 pgs: 305 active+clean; 417 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 908 KiB/s rd, 6.0 MiB/s wr, 167 op/s
Feb 28 10:22:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3036863761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.954 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:46 compute-0 nova_compute[243452]: 2026-02-28 10:22:46.983 243456 INFO nova.scheduler.client.report [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 3a425770-67d6-411f-9586-1977cbc678ed
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.053 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274167.0523014, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.053 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Started (Lifecycle Event)
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.065 243456 DEBUG oslo_concurrency.lockutils [None req-98f638e2-574a-48a4-9526-73835ef63049 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "3a425770-67d6-411f-9586-1977cbc678ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.077 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.087 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274167.0569365, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.087 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Paused (Lifecycle Event)
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.108 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.114 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:47 compute-0 nova_compute[243452]: 2026-02-28 10:22:47.147 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:47 compute-0 podman[336756]: 2026-02-28 10:22:47.155577462 +0000 UTC m=+0.096273970 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Feb 28 10:22:47 compute-0 podman[336755]: 2026-02-28 10:22:47.157722684 +0000 UTC m=+0.101145370 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:22:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 398 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 171 op/s
Feb 28 10:22:48 compute-0 ceph-mon[76304]: pgmap v1721: 305 pgs: 305 active+clean; 398 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 171 op/s
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.643 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.643 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.644 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Processing event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.645 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.646 243456 DEBUG oslo_concurrency.lockutils [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.646 243456 DEBUG nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.647 243456 WARNING nova.compute.manager [req-0a9bf9a5-5368-4fbd-8086-cb3dd2b07145 req-88bc653c-586f-426f-90e3-012b9add6814 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received unexpected event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with vm_state building and task_state spawning.
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.648 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.652 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274168.652321, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.653 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Resumed (Lifecycle Event)
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.655 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.659 243456 INFO nova.virt.libvirt.driver [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance spawned successfully.
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.660 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.676 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.690 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.695 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.696 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.697 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.697 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.698 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.699 243456 DEBUG nova.virt.libvirt.driver [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.710 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.755 243456 INFO nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 10.02 seconds to spawn the instance on the hypervisor.
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.756 243456 DEBUG nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.812 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.813 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.814 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.814 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.815 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.817 243456 INFO nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Terminating instance
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.819 243456 DEBUG nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.841 243456 INFO nova.compute.manager [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 11.30 seconds to build instance.
Feb 28 10:22:48 compute-0 kernel: tap09ffa25b-e3 (unregistering): left promiscuous mode
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.865 243456 DEBUG oslo_concurrency.lockutils [None req-62fe5782-cd74-4d05-8d49-68132185bc09 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:48 compute-0 NetworkManager[49805]: <info>  [1772274168.8676] device (tap09ffa25b-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.872 243456 DEBUG nova.compute.manager [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.872 243456 DEBUG nova.compute.manager [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing instance network info cache due to event network-changed-09ffa25b-e3df-45c2-9db2-423ed33e2a28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.873 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Refreshing network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:22:48 compute-0 ovn_controller[146846]: 2026-02-28T10:22:48Z|01089|binding|INFO|Releasing lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 from this chassis (sb_readonly=0)
Feb 28 10:22:48 compute-0 ovn_controller[146846]: 2026-02-28T10:22:48Z|01090|binding|INFO|Setting lport 09ffa25b-e3df-45c2-9db2-423ed33e2a28 down in Southbound
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:48 compute-0 ovn_controller[146846]: 2026-02-28T10:22:48Z|01091|binding|INFO|Removing iface tap09ffa25b-e3 ovn-installed in OVS
Feb 28 10:22:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.896 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:93 10.100.0.9'], port_security=['fa:16:3e:93:cc:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0bafc3af-eadf-4d97-9acf-026c531362c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3451a2ef-e97c-49df-813f-57c35ec0999a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d9d1441-5ce1-4022-9f50-b5399f868b07 88239c96-6a1e-46d1-adfd-2f479ed23a6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6802505a-6a05-4258-9e53-19d8c7319e67, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09ffa25b-e3df-45c2-9db2-423ed33e2a28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.897 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09ffa25b-e3df-45c2-9db2-423ed33e2a28 in datapath 3451a2ef-e97c-49df-813f-57c35ec0999a unbound from our chassis
Feb 28 10:22:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.900 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3451a2ef-e97c-49df-813f-57c35ec0999a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:22:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.901 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58b96399-fc83-4b6a-934d-f3093c37279c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:48.902 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a namespace which is not needed anymore
Feb 28 10:22:48 compute-0 nova_compute[243452]: 2026-02-28 10:22:48.902 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:48 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d00000068.scope: Deactivated successfully.
Feb 28 10:22:48 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d00000068.scope: Consumed 14.703s CPU time.
Feb 28 10:22:48 compute-0 systemd-machined[209480]: Machine qemu-134-instance-00000068 terminated.
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:49 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : haproxy version is 2.8.14-c23fe91
Feb 28 10:22:49 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [NOTICE]   (334581) : path to executable is /usr/sbin/haproxy
Feb 28 10:22:49 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [WARNING]  (334581) : Exiting Master process...
Feb 28 10:22:49 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [ALERT]    (334581) : Current worker (334583) exited with code 143 (Terminated)
Feb 28 10:22:49 compute-0 neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a[334577]: [WARNING]  (334581) : All workers exited. Exiting... (0)
Feb 28 10:22:49 compute-0 systemd[1]: libpod-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope: Deactivated successfully.
Feb 28 10:22:49 compute-0 podman[336820]: 2026-02-28 10:22:49.05443367 +0000 UTC m=+0.067273543 container died b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.055 243456 INFO nova.virt.libvirt.driver [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance destroyed successfully.
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.057 243456 DEBUG nova.objects.instance [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 0bafc3af-eadf-4d97-9acf-026c531362c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.076 243456 DEBUG nova.virt.libvirt.vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1705661194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=104,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNs+nd6rPqJ/AGV55FcprVoNF73HxSzu9S0FiqOuhvN4rlizLZ9YW8wn4BFYC1ax4N+CgAyJWdOsbXuKNmW4pxXu15elRBRUjfpzdRq3EXNuL00qJwL5OuaSxnPUOdFXtw==',key_name='tempest-TestSecurityGroupsBasicOps-1541180013',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:21:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-qp97324q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:21:52Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=0bafc3af-eadf-4d97-9acf-026c531362c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.076 243456 DEBUG nova.network.os_vif_util [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.078 243456 DEBUG nova.network.os_vif_util [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.078 243456 DEBUG os_vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.081 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09ffa25b-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.090 243456 INFO os_vif [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:93,bridge_name='br-int',has_traffic_filtering=True,id=09ffa25b-e3df-45c2-9db2-423ed33e2a28,network=Network(3451a2ef-e97c-49df-813f-57c35ec0999a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09ffa25b-e3')
Feb 28 10:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4-userdata-shm.mount: Deactivated successfully.
Feb 28 10:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-aefe496d22c05c4788569ae7202c870c2d3e386af2fddcdb07f716cc705ac851-merged.mount: Deactivated successfully.
Feb 28 10:22:49 compute-0 podman[336820]: 2026-02-28 10:22:49.116774709 +0000 UTC m=+0.129614492 container cleanup b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:22:49 compute-0 systemd[1]: libpod-conmon-b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4.scope: Deactivated successfully.
Feb 28 10:22:49 compute-0 podman[336878]: 2026-02-28 10:22:49.196281994 +0000 UTC m=+0.055623227 container remove b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[742a65e3-9fac-4f4d-b140-e7ad28686d3d]: (4, ('Sat Feb 28 10:22:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a (b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4)\nb54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4\nSat Feb 28 10:22:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a (b54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4)\nb54c22f0a18da3f73396f2db013906c7d413ccdcec9b71233deaf8d0e48906b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.206 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45ec6928-31af-4973-9a6c-38297864a2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.208 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3451a2ef-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:49 compute-0 kernel: tap3451a2ef-e0: left promiscuous mode
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2638c-4e94-456e-a084-31c20874fe46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.228 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e51e820c-e4be-4266-a8e1-29535c38f48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3be867-4b23-4ba7-aead-b0c140a2c10e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[10c3eaf0-fea8-4f25-9924-2a2f8c6044a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561724, 'reachable_time': 18181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336890, 'error': None, 'target': 'ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d3451a2ef\x2de97c\x2d49df\x2d813f\x2d57c35ec0999a.mount: Deactivated successfully.
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.257 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3451a2ef-e97c-49df-813f-57c35ec0999a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:22:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:49.257 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cb531358-c855-41fc-bd78-82bd7c241be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.397 243456 INFO nova.virt.libvirt.driver [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deleting instance files /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3_del
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.398 243456 INFO nova.virt.libvirt.driver [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deletion of /var/lib/nova/instances/0bafc3af-eadf-4d97-9acf-026c531362c3_del complete
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.461 243456 INFO nova.compute.manager [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.461 243456 DEBUG oslo.service.loopingcall [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.462 243456 DEBUG nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:22:49 compute-0 nova_compute[243452]: 2026-02-28 10:22:49.462 243456 DEBUG nova.network.neutron [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.311 243456 DEBUG nova.network.neutron [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.332 243456 INFO nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Took 0.87 seconds to deallocate network for instance.
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.397 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 218 op/s
Feb 28 10:22:50 compute-0 ceph-mon[76304]: pgmap v1722: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 218 op/s
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.562 243456 DEBUG oslo_concurrency.processutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.642 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.643 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.645 243456 INFO nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Terminating instance
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.646 243456 DEBUG nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:22:50 compute-0 kernel: tap3789f2db-7d (unregistering): left promiscuous mode
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.675 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updated VIF entry in instance network info cache for port 09ffa25b-e3df-45c2-9db2-423ed33e2a28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.677 243456 DEBUG nova.network.neutron [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Updating instance_info_cache with network_info: [{"id": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "address": "fa:16:3e:93:cc:93", "network": {"id": "3451a2ef-e97c-49df-813f-57c35ec0999a", "bridge": "br-int", "label": "tempest-network-smoke--1262730095", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09ffa25b-e3", "ovs_interfaceid": "09ffa25b-e3df-45c2-9db2-423ed33e2a28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:50 compute-0 NetworkManager[49805]: <info>  [1772274170.6776] device (tap3789f2db-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 ovn_controller[146846]: 2026-02-28T10:22:50Z|01092|binding|INFO|Releasing lport 3789f2db-7dec-44ac-93d7-2712307dc094 from this chassis (sb_readonly=0)
Feb 28 10:22:50 compute-0 ovn_controller[146846]: 2026-02-28T10:22:50Z|01093|binding|INFO|Setting lport 3789f2db-7dec-44ac-93d7-2712307dc094 down in Southbound
Feb 28 10:22:50 compute-0 ovn_controller[146846]: 2026-02-28T10:22:50Z|01094|binding|INFO|Removing iface tap3789f2db-7d ovn-installed in OVS
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.696 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:89:88 10.100.0.10'], port_security=['fa:16:3e:d7:89:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1c621cf0-85bd-40e0-8c9c-467e5da2e21b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3789f2db-7dec-44ac-93d7-2712307dc094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.698 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3789f2db-7dec-44ac-93d7-2712307dc094 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.699 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.701 243456 DEBUG oslo_concurrency.lockutils [req-1a799f7f-bf5b-4754-b588-c4307bfc0777 req-d4b2a65d-d1d2-4583-98a2-0487f7f7737a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0bafc3af-eadf-4d97-9acf-026c531362c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.711 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb68ef8-bd54-4b7c-b0f1-14899aeb162f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Feb 28 10:22:50 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Consumed 2.932s CPU time.
Feb 28 10:22:50 compute-0 systemd-machined[209480]: Machine qemu-137-instance-0000006b terminated.
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.737 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed180c9-f375-48ac-86bd-813d00c0a9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.740 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[208952a6-98b9-40ad-876b-7e1a7ea757d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.765 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3894dd-fc82-4510-9cfb-ce09cc903bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[561044a7-9b97-42d3-9c34-fb49cbcb211c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336922, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.800 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[789cbbc4-a8e6-4632-9c0a-4a823942e6f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336923, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336923, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.802 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.803 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.807 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.808 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.809 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:50.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.877 243456 INFO nova.virt.libvirt.driver [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Instance destroyed successfully.
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.878 243456 DEBUG nova.objects.instance [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 1c621cf0-85bd-40e0-8c9c-467e5da2e21b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.969 243456 DEBUG nova.virt.libvirt.vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-828230938',display_name='tempest-ServersTestJSON-server-828230938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-828230938',id=107,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-qfm9iifg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:48Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=1c621cf0-85bd-40e0-8c9c-467e5da2e21b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.970 243456 DEBUG nova.network.os_vif_util [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "3789f2db-7dec-44ac-93d7-2712307dc094", "address": "fa:16:3e:d7:89:88", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3789f2db-7d", "ovs_interfaceid": "3789f2db-7dec-44ac-93d7-2712307dc094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.971 243456 DEBUG nova.network.os_vif_util [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.971 243456 DEBUG os_vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.974 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3789f2db-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:50 compute-0 nova_compute[243452]: 2026-02-28 10:22:50.979 243456 INFO os_vif [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:89:88,bridge_name='br-int',has_traffic_filtering=True,id=3789f2db-7dec-44ac-93d7-2712307dc094,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3789f2db-7d')
Feb 28 10:22:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690108448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.118 243456 DEBUG oslo_concurrency.processutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.126 243456 DEBUG nova.compute.provider_tree [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.146 243456 DEBUG nova.scheduler.client.report [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.172 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.211 243456 INFO nova.scheduler.client.report [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 0bafc3af-eadf-4d97-9acf-026c531362c3
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.257 243456 INFO nova.virt.libvirt.driver [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deleting instance files /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_del
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.258 243456 INFO nova.virt.libvirt.driver [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deletion of /var/lib/nova/instances/1c621cf0-85bd-40e0-8c9c-467e5da2e21b_del complete
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.265 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.265 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG oslo_concurrency.lockutils [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.266 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.267 243456 DEBUG nova.compute.manager [req-ff50eb72-c91d-4815-9412-1074a9974beb req-f803ed3c-5e56-4526-831e-3c5d9c596ae4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-unplugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.285 243456 DEBUG oslo_concurrency.lockutils [None req-f7cb5537-69a7-4f55-812b-141b84072d91 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.317 243456 INFO nova.compute.manager [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG oslo.service.loopingcall [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.318 243456 DEBUG nova.network.neutron [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.388 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.389 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.389 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.390 243456 WARNING nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-unplugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state deleted and task_state None.
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.391 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.391 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.392 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.392 243456 DEBUG oslo_concurrency.lockutils [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0bafc3af-eadf-4d97-9acf-026c531362c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.393 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] No waiting events found dispatching network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.393 243456 WARNING nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received unexpected event network-vif-plugged-09ffa25b-e3df-45c2-9db2-423ed33e2a28 for instance with vm_state deleted and task_state None.
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Received event network-vif-deleted-09ffa25b-e3df-45c2-9db2-423ed33e2a28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 INFO nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Neutron deleted interface 09ffa25b-e3df-45c2-9db2-423ed33e2a28; detaching it from the instance and deleting it from the info cache
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.394 243456 DEBUG nova.network.neutron [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:22:51 compute-0 nova_compute[243452]: 2026-02-28 10:22:51.398 243456 DEBUG nova.compute.manager [req-42797934-ed63-4aff-8bd8-f8c70b2464b0 req-dfe3d167-baa1-45f4-83d0-85442cb98352 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Detach interface failed, port_id=09ffa25b-e3df-45c2-9db2-423ed33e2a28, reason: Instance 0bafc3af-eadf-4d97-9acf-026c531362c3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:22:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2690108448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.282 243456 DEBUG nova.network.neutron [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.304 243456 INFO nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Took 0.99 seconds to deallocate network for instance.
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.359 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.360 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.426 243456 DEBUG oslo_concurrency.processutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Feb 28 10:22:52 compute-0 ceph-mon[76304]: pgmap v1723: 305 pgs: 305 active+clean; 324 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Feb 28 10:22:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330349262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.966 243456 DEBUG oslo_concurrency.processutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.971 243456 DEBUG nova.compute.provider_tree [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:52 compute-0 nova_compute[243452]: 2026-02-28 10:22:52.990 243456 DEBUG nova.scheduler.client.report [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.007 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.028 243456 INFO nova.scheduler.client.report [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 1c621cf0-85bd-40e0-8c9c-467e5da2e21b
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.104 243456 DEBUG oslo_concurrency.lockutils [None req-1df2a892-14fb-4e37-ab04-c398fe547cf6 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.365 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.366 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.366 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.367 243456 DEBUG oslo_concurrency.lockutils [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1c621cf0-85bd-40e0-8c9c-467e5da2e21b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.367 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] No waiting events found dispatching network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.368 243456 WARNING nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received unexpected event network-vif-plugged-3789f2db-7dec-44ac-93d7-2712307dc094 for instance with vm_state deleted and task_state None.
Feb 28 10:22:53 compute-0 nova_compute[243452]: 2026-02-28 10:22:53.368 243456 DEBUG nova.compute.manager [req-30d40c1c-11e0-4ed4-86d6-012b8bc7a47e req-6b66c654-3c01-4adc-8e79-a1934939ea94 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Received event network-vif-deleted-3789f2db-7dec-44ac-93d7-2712307dc094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:22:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1330349262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:54 compute-0 nova_compute[243452]: 2026-02-28 10:22:54.017 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 262 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Feb 28 10:22:54 compute-0 ceph-mon[76304]: pgmap v1724: 305 pgs: 305 active+clean; 262 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 230 op/s
Feb 28 10:22:55 compute-0 nova_compute[243452]: 2026-02-28 10:22:55.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:56 compute-0 ovn_controller[146846]: 2026-02-28T10:22:56Z|01095|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:22:56 compute-0 nova_compute[243452]: 2026-02-28 10:22:56.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:56 compute-0 ovn_controller[146846]: 2026-02-28T10:22:56Z|01096|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:22:56 compute-0 nova_compute[243452]: 2026-02-28 10:22:56.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Feb 28 10:22:56 compute-0 ceph-mon[76304]: pgmap v1725: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Feb 28 10:22:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.861 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:22:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 154 op/s
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.493 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.494 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.526 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:22:58 compute-0 ceph-mon[76304]: pgmap v1726: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 154 op/s
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.625 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.626 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.636 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.637 243456 INFO nova.compute.claims [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:22:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:22:58 compute-0 nova_compute[243452]: 2026-02-28 10:22:58.782 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:22:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:22:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058618406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.330 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.334 243456 DEBUG nova.compute.provider_tree [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.355 243456 DEBUG nova.scheduler.client.report [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.429 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.429 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.472 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274164.4663682, 3a425770-67d6-411f-9586-1977cbc678ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.472 243456 INFO nova.compute.manager [-] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] VM Stopped (Lifecycle Event)
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.481 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.482 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.497 243456 DEBUG nova.compute.manager [None req-35db4545-0ad0-450c-90bd-e43342cf96c7 - - - - - -] [instance: 3a425770-67d6-411f-9586-1977cbc678ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.511 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:22:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3058618406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.540 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.675 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.677 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.678 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating image(s)
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.712 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.749 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.786 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.791 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.874 243456 DEBUG nova.policy [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.880 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.882 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.883 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.883 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.924 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:22:59 compute-0 nova_compute[243452]: 2026-02-28 10:22:59.931 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 44a09c36-3876-4513-8285-1d4aedc2ec68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.236 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 44a09c36-3876-4513-8285-1d4aedc2ec68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.321 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 530 KiB/s wr, 152 op/s
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.434 243456 DEBUG nova.objects.instance [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.454 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.455 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Ensure instance console log exists: /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.455 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.456 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.457 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:00 compute-0 ceph-mon[76304]: pgmap v1727: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 530 KiB/s wr, 152 op/s
Feb 28 10:23:00 compute-0 nova_compute[243452]: 2026-02-28 10:23:00.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:01 compute-0 nova_compute[243452]: 2026-02-28 10:23:01.350 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Successfully created port: d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:23:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 504 KiB/s wr, 83 op/s
Feb 28 10:23:02 compute-0 ceph-mon[76304]: pgmap v1728: 305 pgs: 305 active+clean; 247 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1020 KiB/s rd, 504 KiB/s wr, 83 op/s
Feb 28 10:23:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:03 compute-0 nova_compute[243452]: 2026-02-28 10:23:03.867 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Successfully updated port: d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:23:03 compute-0 nova_compute[243452]: 2026-02-28 10:23:03.895 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:03 compute-0 nova_compute[243452]: 2026-02-28 10:23:03.895 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:03 compute-0 nova_compute[243452]: 2026-02-28 10:23:03.896 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.025 243456 DEBUG nova.compute.manager [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-changed-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.025 243456 DEBUG nova.compute.manager [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Refreshing instance network info cache due to event network-changed-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.026 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.054 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274169.0534127, 0bafc3af-eadf-4d97-9acf-026c531362c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.054 243456 INFO nova.compute.manager [-] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] VM Stopped (Lifecycle Event)
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.080 243456 DEBUG nova.compute.manager [None req-e8de3ef8-9958-4170-96f4-7489508016fd - - - - - -] [instance: 0bafc3af-eadf-4d97-9acf-026c531362c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:04 compute-0 nova_compute[243452]: 2026-02-28 10:23:04.400 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:23:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 266 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Feb 28 10:23:04 compute-0 ceph-mon[76304]: pgmap v1729: 305 pgs: 305 active+clean; 266 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.722 243456 DEBUG nova.network.neutron [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.751 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.752 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance network_info: |[{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.753 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.754 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Refreshing network info cache for port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.758 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start _get_guest_xml network_info=[{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.765 243456 WARNING nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.771 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.772 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.785 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.786 243456 DEBUG nova.virt.libvirt.host [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.786 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.787 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.788 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.789 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.789 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.790 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.790 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.791 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.791 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.792 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.792 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.793 243456 DEBUG nova.virt.hardware [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.798 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.876 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274170.8753576, 1c621cf0-85bd-40e0-8c9c-467e5da2e21b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.877 243456 INFO nova.compute.manager [-] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] VM Stopped (Lifecycle Event)
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.904 243456 DEBUG nova.compute.manager [None req-22f141ab-8034-415f-9f3f-70ba2ff4ed2d - - - - - -] [instance: 1c621cf0-85bd-40e0-8c9c-467e5da2e21b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:05 compute-0 nova_compute[243452]: 2026-02-28 10:23:05.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213914873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:06 compute-0 nova_compute[243452]: 2026-02-28 10:23:06.365 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:06 compute-0 nova_compute[243452]: 2026-02-28 10:23:06.394 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:06 compute-0 nova_compute[243452]: 2026-02-28 10:23:06.400 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2213914873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 10:23:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278732323' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.008 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.010 243456 DEBUG nova.virt.libvirt.vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:59Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.010 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.011 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.012 243456 DEBUG nova.objects.instance [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.029 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <uuid>44a09c36-3876-4513-8285-1d4aedc2ec68</uuid>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <name>instance-0000006c</name>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-925426756</nova:name>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:23:05</nova:creationTime>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <nova:port uuid="d79e5c2c-3bc4-4c60-842c-c52c58b15ff3">
Feb 28 10:23:07 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <system>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="serial">44a09c36-3876-4513-8285-1d4aedc2ec68</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="uuid">44a09c36-3876-4513-8285-1d4aedc2ec68</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </system>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <os>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </os>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <features>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </features>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/44a09c36-3876-4513-8285-1d4aedc2ec68_disk">
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config">
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:07 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:28:2a:42"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <target dev="tapd79e5c2c-3b"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/console.log" append="off"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <video>
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </video>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:23:07 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:23:07 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:23:07 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:23:07 compute-0 nova_compute[243452]: </domain>
Feb 28 10:23:07 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Preparing to wait for external event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.030 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.031 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.031 243456 DEBUG nova.virt.libvirt.vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:22:59Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.032 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.032 243456 DEBUG nova.network.os_vif_util [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG os_vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.033 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.034 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.038 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79e5c2c-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.039 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79e5c2c-3b, col_values=(('external_ids', {'iface-id': 'd79e5c2c-3bc4-4c60-842c-c52c58b15ff3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:2a:42', 'vm-uuid': '44a09c36-3876-4513-8285-1d4aedc2ec68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:07 compute-0 NetworkManager[49805]: <info>  [1772274187.0421] manager: (tapd79e5c2c-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.044 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.046 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.047 243456 INFO os_vif [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b')
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.114 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:28:2a:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.115 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Using config drive
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.135 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:07 compute-0 ceph-mon[76304]: pgmap v1730: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 10:23:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3278732323' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.649 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Creating config drive at /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.656 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo6b2l10g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.800 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo6b2l10g" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.837 243456 DEBUG nova.storage.rbd_utils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:07 compute-0 nova_compute[243452]: 2026-02-28 10:23:07.843 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.008 243456 DEBUG oslo_concurrency.processutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config 44a09c36-3876-4513-8285-1d4aedc2ec68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.010 243456 INFO nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deleting local config drive /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68/disk.config because it was imported into RBD.
Feb 28 10:23:08 compute-0 kernel: tapd79e5c2c-3b: entered promiscuous mode
Feb 28 10:23:08 compute-0 NetworkManager[49805]: <info>  [1772274188.0622] manager: (tapd79e5c2c-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Feb 28 10:23:08 compute-0 ovn_controller[146846]: 2026-02-28T10:23:08Z|01097|binding|INFO|Claiming lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 for this chassis.
Feb 28 10:23:08 compute-0 ovn_controller[146846]: 2026-02-28T10:23:08Z|01098|binding|INFO|d79e5c2c-3bc4-4c60-842c-c52c58b15ff3: Claiming fa:16:3e:28:2a:42 10.100.0.4
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.073 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:2a:42 10.100.0.4'], port_security=['fa:16:3e:28:2a:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '44a09c36-3876-4513-8285-1d4aedc2ec68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.074 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 ovn_controller[146846]: 2026-02-28T10:23:08Z|01099|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 ovn-installed in OVS
Feb 28 10:23:08 compute-0 ovn_controller[146846]: 2026-02-28T10:23:08Z|01100|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 up in Southbound
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.076 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.078 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.078 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:08 compute-0 systemd-udevd[337300]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:23:08 compute-0 NetworkManager[49805]: <info>  [1772274188.1004] device (tapd79e5c2c-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:23:08 compute-0 NetworkManager[49805]: <info>  [1772274188.1025] device (tapd79e5c2c-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.104 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6a9e8b-04a5-439a-bc98-2fe43b458b93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 systemd-machined[209480]: New machine qemu-138-instance-0000006c.
Feb 28 10:23:08 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006c.
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.132 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c76ae0bb-71d8-4ae0-9cbe-ff18506eb496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.136 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af9b5d8d-4887-4370-a276-91636ec89b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.171 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5db4f43c-abbd-4717-afe1-b0f653e353bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.191 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[66f44932-4d53-4540-af0b-55a3def94c7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337317, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.209 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[deaad118-b819-477b-a576-64023af03d36]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.211 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.214 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:08 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:08.215 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.443 243456 DEBUG nova.compute.manager [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.443 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG oslo_concurrency.lockutils [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:08 compute-0 nova_compute[243452]: 2026-02-28 10:23:08.444 243456 DEBUG nova.compute.manager [req-0eac1b18-1a66-47cd-a991-51539cf32a14 req-f7175931-6e15-4d4c-828f-ae0ab79d41d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Processing event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:23:08 compute-0 ceph-mon[76304]: pgmap v1731: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:23:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.143 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updated VIF entry in instance network info cache for port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.144 243456 DEBUG nova.network.neutron [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [{"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.167 243456 DEBUG oslo_concurrency.lockutils [req-e454a304-84fe-4790-af2e-7166500575b2 req-5da59a9e-a9ee-43ca-8f81-a28659a6d8dd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-44a09c36-3876-4513-8285-1d4aedc2ec68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.456 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4554343, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.457 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Started (Lifecycle Event)
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.459 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.463 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.467 243456 INFO nova.virt.libvirt.driver [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance spawned successfully.
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.467 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.496 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.502 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.502 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.503 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.503 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.504 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.505 243456 DEBUG nova.virt.libvirt.driver [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.510 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.558 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.559 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4575496, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.559 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Paused (Lifecycle Event)
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.570 243456 INFO nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 9.89 seconds to spawn the instance on the hypervisor.
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.571 243456 DEBUG nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.588 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.592 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274189.4616396, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.592 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Resumed (Lifecycle Event)
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.634 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.639 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.673 243456 INFO nova.compute.manager [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 11.08 seconds to build instance.
Feb 28 10:23:09 compute-0 nova_compute[243452]: 2026-02-28 10:23:09.716 243456 DEBUG oslo_concurrency.lockutils [None req-caf15543-6a2c-4cb7-bf55-6943403eed1b 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:23:10 compute-0 ceph-mon[76304]: pgmap v1732: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.709 243456 DEBUG nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.710 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.711 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.711 243456 DEBUG oslo_concurrency.lockutils [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.712 243456 DEBUG nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] No waiting events found dispatching network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:10 compute-0 nova_compute[243452]: 2026-02-28 10:23:10.712 243456 WARNING nova.compute.manager [req-c1fe24bd-58f7-4e20-a747-c488b01fc79e req-dbefe8ec-51ac-40c9-b2c8-ecd6a65dbbce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received unexpected event network-vif-plugged-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 for instance with vm_state active and task_state None.
Feb 28 10:23:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:23:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8152 writes, 36K keys, 8152 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8152 writes, 8152 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 6822 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s
                                           Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.4      0.65              0.11        21    0.031       0      0       0.0       0.0
                                             L6      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.7    142.1    117.6      1.32              0.39        20    0.066    103K    11K       0.0       0.0
                                            Sum      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     95.4    100.1      1.97              0.50        41    0.048    103K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   6.0    137.4    140.0      0.37              0.14        10    0.037     32K   3070       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    142.1    117.6      1.32              0.39        20    0.066    103K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.9      0.64              0.11        20    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.041, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 2.0 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 22.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000295 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1442,21.59 MB,7.10186%) FilterBlock(42,295.86 KB,0.0950412%) IndexBlock(42,524.14 KB,0.168374%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.526 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.527 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.527 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.528 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.528 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.530 243456 INFO nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Terminating instance
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.532 243456 DEBUG nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:23:12 compute-0 ceph-mon[76304]: pgmap v1733: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Feb 28 10:23:12 compute-0 kernel: tapd79e5c2c-3b (unregistering): left promiscuous mode
Feb 28 10:23:12 compute-0 NetworkManager[49805]: <info>  [1772274192.5788] device (tapd79e5c2c-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:23:12 compute-0 ovn_controller[146846]: 2026-02-28T10:23:12Z|01101|binding|INFO|Releasing lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 from this chassis (sb_readonly=0)
Feb 28 10:23:12 compute-0 ovn_controller[146846]: 2026-02-28T10:23:12Z|01102|binding|INFO|Setting lport d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 down in Southbound
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 ovn_controller[146846]: 2026-02-28T10:23:12Z|01103|binding|INFO|Removing iface tapd79e5c2c-3b ovn-installed in OVS
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.585 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:2a:42 10.100.0.4'], port_security=['fa:16:3e:28:2a:42 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '44a09c36-3876-4513-8285-1d4aedc2ec68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.596 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.598 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.600 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:12 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 28 10:23:12 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006c.scope: Consumed 4.515s CPU time.
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.620 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d819eb-8fd1-4be9-8fe6-f7d78408b46b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 systemd-machined[209480]: Machine qemu-138-instance-0000006c terminated.
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.661 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[796fe9d6-5f8b-4568-a51a-a5ca91a7f7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.666 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce8cfe3-4dc0-4e66-b646-979cc2624aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.708 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[63d1c5bf-effa-4e55-9e54-76744af69364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6a0716-8dce-46b6-92d4-29a374eb9e93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337374, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 kernel: tapd79e5c2c-3b: entered promiscuous mode
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.759 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c04373b9-bf0a-4a38-a01f-7557393513e1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337376, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337376, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.762 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:12 compute-0 kernel: tapd79e5c2c-3b (unregistering): left promiscuous mode
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 NetworkManager[49805]: <info>  [1772274192.7664] manager: (tapd79e5c2c-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.771 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.785 243456 INFO nova.virt.libvirt.driver [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Instance destroyed successfully.
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.786 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.786 243456 DEBUG nova.objects.instance [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 44a09c36-3876-4513-8285-1d4aedc2ec68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:12.786 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.805 243456 DEBUG nova.virt.libvirt.vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-925426756',display_name='tempest-ServersTestJSON-server-925426756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-925426756',id=108,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLaWmqbATX1tMQaqkwgwWynVJNR8yYFwj3L4iJiJyj4lWHU05AbI0aLcvMcXZfAOON+EkQ1bGm552xW05v6R9BOUu6oYre0g1cnU2TpdCVUO7YJAOjpRb0+YCxAujwtdhA==',key_name='tempest-key-1054399794',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-q00sjeb0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:09Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=44a09c36-3876-4513-8285-1d4aedc2ec68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.806 243456 DEBUG nova.network.os_vif_util [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "address": "fa:16:3e:28:2a:42", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79e5c2c-3b", "ovs_interfaceid": "d79e5c2c-3bc4-4c60-842c-c52c58b15ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.807 243456 DEBUG nova.network.os_vif_util [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.807 243456 DEBUG os_vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.810 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79e5c2c-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:12 compute-0 nova_compute[243452]: 2026-02-28 10:23:12.818 243456 INFO os_vif [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:2a:42,bridge_name='br-int',has_traffic_filtering=True,id=d79e5c2c-3bc4-4c60-842c-c52c58b15ff3,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79e5c2c-3b')
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.126 243456 INFO nova.virt.libvirt.driver [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deleting instance files /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68_del
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.126 243456 INFO nova.virt.libvirt.driver [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deletion of /var/lib/nova/instances/44a09c36-3876-4513-8285-1d4aedc2ec68_del complete
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.192 243456 INFO nova.compute.manager [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.192 243456 DEBUG oslo.service.loopingcall [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.193 243456 DEBUG nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.193 243456 DEBUG nova.network.neutron [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.430 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.430 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.456 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.545 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.546 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.554 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.555 243456 INFO nova.compute.claims [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:23:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:13 compute-0 nova_compute[243452]: 2026-02-28 10:23:13.722 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.225 243456 DEBUG nova.network.neutron [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.248 243456 INFO nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Took 1.05 seconds to deallocate network for instance.
Feb 28 10:23:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176746023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.288 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.296 243456 DEBUG nova.compute.provider_tree [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.320 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.323 243456 DEBUG nova.scheduler.client.report [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2176746023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.350 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.351 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.357 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.396 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.396 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.420 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:23:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 68 op/s
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.443 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.448 243456 DEBUG oslo_concurrency.processutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.488 243456 DEBUG nova.compute.manager [req-dd2317cd-ed01-46e8-8810-7f6a0c3661d5 req-ed02bac0-c7ea-4e06-b877-69b8c4ed5f25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Received event network-vif-deleted-d79e5c2c-3bc4-4c60-842c-c52c58b15ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.566 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.567 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.568 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating image(s)
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.595 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.625 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.655 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.660 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.702 243456 DEBUG nova.policy [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.752 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.753 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.754 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.754 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.783 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:14 compute-0 nova_compute[243452]: 2026-02-28 10:23:14.790 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 027ce924-b530-4917-956c-ab66555058b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929258099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.049 243456 DEBUG oslo_concurrency.processutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.056 243456 DEBUG nova.compute.provider_tree [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.062 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 027ce924-b530-4917-956c-ab66555058b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.092 243456 DEBUG nova.scheduler.client.report [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.131 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.139 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 027ce924-b530-4917-956c-ab66555058b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.171 243456 INFO nova.scheduler.client.report [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 44a09c36-3876-4513-8285-1d4aedc2ec68
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.242 243456 DEBUG oslo_concurrency.lockutils [None req-4097d3f9-ea39-46d3-a440-3c680e7454a9 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "44a09c36-3876-4513-8285-1d4aedc2ec68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.250 243456 DEBUG nova.objects.instance [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.274 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.275 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Ensure instance console log exists: /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.276 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:15 compute-0 ceph-mon[76304]: pgmap v1734: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 68 op/s
Feb 28 10:23:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1929258099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:15 compute-0 nova_compute[243452]: 2026-02-28 10:23:15.800 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Successfully created port: 415ef63e-f355-4d3a-a625-bee99de661ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:23:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 264 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 617 KiB/s wr, 104 op/s
Feb 28 10:23:16 compute-0 ceph-mon[76304]: pgmap v1735: 305 pgs: 305 active+clean; 264 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 617 KiB/s wr, 104 op/s
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.273 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Successfully updated port: 415ef63e-f355-4d3a-a625-bee99de661ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.290 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.291 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.291 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.533 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.890 243456 DEBUG nova.compute.manager [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.891 243456 DEBUG nova.compute.manager [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:17 compute-0 nova_compute[243452]: 2026-02-28 10:23:17.891 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:18 compute-0 podman[337616]: 2026-02-28 10:23:18.173107534 +0000 UTC m=+0.096000432 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 28 10:23:18 compute-0 podman[337615]: 2026-02-28 10:23:18.177218222 +0000 UTC m=+0.105416313 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:23:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 260 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 771 KiB/s wr, 114 op/s
Feb 28 10:23:18 compute-0 ceph-mon[76304]: pgmap v1736: 305 pgs: 305 active+clean; 260 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 771 KiB/s wr, 114 op/s
Feb 28 10:23:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.688 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.688 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.690 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:23:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:18.692 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.830 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.830 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.858 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.909 243456 DEBUG nova.network.neutron [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.928 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.929 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance network_info: |[{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.929 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.930 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.935 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start _get_guest_xml network_info=[{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.941 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.942 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.946 243456 WARNING nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.959 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.960 243456 INFO nova.compute.claims [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.967 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.968 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.974 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.975 243456 DEBUG nova.virt.libvirt.host [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.975 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.976 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.977 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.978 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.979 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.980 243456 DEBUG nova.virt.hardware [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:23:18 compute-0 nova_compute[243452]: 2026-02-28 10:23:18.988 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.152 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371678343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.611 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.637 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.642 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1371678343' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783250326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.739 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.744 243456 DEBUG nova.compute.provider_tree [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.768 243456 DEBUG nova.scheduler.client.report [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.791 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.838 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.839 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.858 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.875 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.954 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.956 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.956 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating image(s)
Feb 28 10:23:19 compute-0 nova_compute[243452]: 2026-02-28 10:23:19.980 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.006 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.037 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.042 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.112 243456 DEBUG nova.policy [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.123 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.124 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4134194997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.150 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.156 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.198 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.201 243456 DEBUG nova.virt.libvirt.vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:14Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.202 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.203 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.204 243456 DEBUG nova.objects.instance [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.221 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <uuid>027ce924-b530-4917-956c-ab66555058b0</uuid>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <name>instance-0000006d</name>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240</nova:name>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:23:18</nova:creationTime>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <nova:port uuid="415ef63e-f355-4d3a-a625-bee99de661ad">
Feb 28 10:23:20 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <system>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="serial">027ce924-b530-4917-956c-ab66555058b0</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="uuid">027ce924-b530-4917-956c-ab66555058b0</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </system>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <os>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </os>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <features>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </features>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/027ce924-b530-4917-956c-ab66555058b0_disk">
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/027ce924-b530-4917-956c-ab66555058b0_disk.config">
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:20 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:01:49:3b"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <target dev="tap415ef63e-f3"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/console.log" append="off"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <video>
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </video>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:23:20 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:23:20 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:23:20 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:23:20 compute-0 nova_compute[243452]: </domain>
Feb 28 10:23:20 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Preparing to wait for external event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.222 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.223 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.224 243456 DEBUG nova.virt.libvirt.vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:14Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.224 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.225 243456 DEBUG nova.network.os_vif_util [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.225 243456 DEBUG os_vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.226 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.227 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.231 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.231 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap415ef63e-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.232 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap415ef63e-f3, col_values=(('external_ids', {'iface-id': '415ef63e-f355-4d3a-a625-bee99de661ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:49:3b', 'vm-uuid': '027ce924-b530-4917-956c-ab66555058b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:20 compute-0 NetworkManager[49805]: <info>  [1772274200.2348] manager: (tap415ef63e-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.237 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.238 243456 INFO os_vif [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3')
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.396 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 292 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.460 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.498 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.499 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.500 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:01:49:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.501 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Using config drive
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.522 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.571 243456 DEBUG nova.objects.instance [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Ensure instance console log exists: /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.583 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.584 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:20 compute-0 nova_compute[243452]: 2026-02-28 10:23:20.584 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3783250326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4134194997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:20 compute-0 ceph-mon[76304]: pgmap v1737: 305 pgs: 305 active+clean; 292 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.103 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.103 243456 DEBUG nova.network.neutron [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.137 243456 DEBUG oslo_concurrency.lockutils [req-a27af774-3e24-4f30-b034-0a3ff585e354 req-8232bc22-7492-4e8b-b3e5-20c851b93df5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.387 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Creating config drive at /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.391 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6_kp8k_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.511 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Successfully created port: 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.530 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6_kp8k_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.566 243456 DEBUG nova.storage.rbd_utils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 027ce924-b530-4917-956c-ab66555058b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.571 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config 027ce924-b530-4917-956c-ab66555058b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.725 243456 DEBUG oslo_concurrency.processutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config 027ce924-b530-4917-956c-ab66555058b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.727 243456 INFO nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deleting local config drive /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0/disk.config because it was imported into RBD.
Feb 28 10:23:21 compute-0 kernel: tap415ef63e-f3: entered promiscuous mode
Feb 28 10:23:21 compute-0 NetworkManager[49805]: <info>  [1772274201.7584] manager: (tap415ef63e-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/461)
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:21 compute-0 ovn_controller[146846]: 2026-02-28T10:23:21Z|01104|binding|INFO|Claiming lport 415ef63e-f355-4d3a-a625-bee99de661ad for this chassis.
Feb 28 10:23:21 compute-0 ovn_controller[146846]: 2026-02-28T10:23:21Z|01105|binding|INFO|415ef63e-f355-4d3a-a625-bee99de661ad: Claiming fa:16:3e:01:49:3b 10.100.0.8
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.774 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:49:3b 10.100.0.8'], port_security=['fa:16:3e:01:49:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '027ce924-b530-4917-956c-ab66555058b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ef1d2e7-20e3-4f13-87f1-05e5f962f01b 2651d5ee-6558-4826-a4a8-35378d050e16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1398b86-8c3c-44c8-8c2b-3601475652eb, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=415ef63e-f355-4d3a-a625-bee99de661ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.775 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 415ef63e-f355-4d3a-a625-bee99de661ad in datapath 17bc494c-7a5d-47b4-92b7-06dd91f131ca bound to our chassis
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.776 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17bc494c-7a5d-47b4-92b7-06dd91f131ca
Feb 28 10:23:21 compute-0 systemd-machined[209480]: New machine qemu-139-instance-0000006d.
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.786 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a920a-b876-49dd-acbf-f8949122ff16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.786 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17bc494c-71 in ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17bc494c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e6337f9f-4c19-4bfc-90d0-a4670d41be68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.788 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[36689852-5386-4872-b819-abded53b4834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.795 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a1985c2f-5d3a-4a33-a69f-bddae5b39d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006d.
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:21 compute-0 ovn_controller[146846]: 2026-02-28T10:23:21Z|01106|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad ovn-installed in OVS
Feb 28 10:23:21 compute-0 ovn_controller[146846]: 2026-02-28T10:23:21Z|01107|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad up in Southbound
Feb 28 10:23:21 compute-0 nova_compute[243452]: 2026-02-28 10:23:21.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:21 compute-0 systemd-udevd[337987]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.819 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc40d14-2437-4785-bae2-7f166d0803b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 NetworkManager[49805]: <info>  [1772274201.8284] device (tap415ef63e-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:23:21 compute-0 NetworkManager[49805]: <info>  [1772274201.8291] device (tap415ef63e-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.849 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[65cf5616-cde0-4bfa-8bcc-8ae487064ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 NetworkManager[49805]: <info>  [1772274201.8565] manager: (tap17bc494c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/462)
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e716caf-4c90-4477-9375-12a8edf34960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.885 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fb19a366-e486-4a7e-b36c-d7879bfb1541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.888 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5f97c1c4-1de4-47ab-a9c1-ddde9f546814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 NetworkManager[49805]: <info>  [1772274201.9056] device (tap17bc494c-70): carrier: link connected
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.910 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e131ef-b6f0-437b-bc59-f9cb5cc7a508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.924 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e88270db-4f77-43e1-8a65-02887e6bf48b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bc494c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:31:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570919, 'reachable_time': 28744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338017, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b75dc445-888b-4f27-8d5b-4cc3bbd75766]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:31ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570919, 'tstamp': 570919}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338018, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.951 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[700ce6f5-e1e9-4d21-b20f-ab7ed1ba6d7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17bc494c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:31:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570919, 'reachable_time': 28744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338019, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:21.976 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c83b68dd-ac48-4f20-b389-91bb34d12312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[09fdd8dd-9cd0-4b73-b6c0-560bf06ecbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.032 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bc494c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17bc494c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:22 compute-0 NetworkManager[49805]: <info>  [1772274202.0364] manager: (tap17bc494c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Feb 28 10:23:22 compute-0 kernel: tap17bc494c-70: entered promiscuous mode
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.035 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.040 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17bc494c-70, col_values=(('external_ids', {'iface-id': 'c07252a3-f61e-4ad2-8106-a9d8b025e5a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:22 compute-0 ovn_controller[146846]: 2026-02-28T10:23:22Z|01108|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.057 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd48361-3998-4b2f-b301-8b70746e1952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.059 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-17bc494c-7a5d-47b4-92b7-06dd91f131ca
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/17bc494c-7a5d-47b4-92b7-06dd91f131ca.pid.haproxy
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 17bc494c-7a5d-47b4-92b7-06dd91f131ca
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:23:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:22.060 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'env', 'PROCESS_TAG=haproxy-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17bc494c-7a5d-47b4-92b7-06dd91f131ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.088 243456 DEBUG nova.compute.manager [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.088 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG oslo_concurrency.lockutils [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.089 243456 DEBUG nova.compute.manager [req-81a36192-06f2-48f9-babe-13cd6d4a846c req-cf556a9d-c122-4c40-a984-fd3751647150 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Processing event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.191 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1907961, 027ce924-b530-4917-956c-ab66555058b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.191 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Started (Lifecycle Event)
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.194 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.198 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.202 243456 INFO nova.virt.libvirt.driver [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance spawned successfully.
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.202 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.213 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.222 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.226 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.227 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.228 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.228 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.229 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.230 243456 DEBUG nova.virt.libvirt.driver [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1910398, 027ce924-b530-4917-956c-ab66555058b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.238 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Paused (Lifecycle Event)
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.257 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.261 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274202.1972303, 027ce924-b530-4917-956c-ab66555058b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Resumed (Lifecycle Event)
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.276 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.280 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.287 243456 INFO nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 7.72 seconds to spawn the instance on the hypervisor.
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.288 243456 DEBUG nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.314 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.355 243456 INFO nova.compute.manager [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 8.85 seconds to build instance.
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.372 243456 DEBUG oslo_concurrency.lockutils [None req-8754dff8-d89a-4d65-a238-fa52b29cf9e3 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:22 compute-0 podman[338093]: 2026-02-28 10:23:22.397671009 +0000 UTC m=+0.056807841 container create 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:23:22 compute-0 systemd[1]: Started libpod-conmon-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope.
Feb 28 10:23:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 130 op/s
Feb 28 10:23:22 compute-0 podman[338093]: 2026-02-28 10:23:22.365890692 +0000 UTC m=+0.025027394 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:23:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba657c3e9cfc3e8a46c61136017aeb0fbea7cdb4ea5ad38579fb28430e14c1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:22 compute-0 podman[338093]: 2026-02-28 10:23:22.484821494 +0000 UTC m=+0.143958216 container init 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:23:22 compute-0 podman[338093]: 2026-02-28 10:23:22.48985416 +0000 UTC m=+0.148990882 container start 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:23:22 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : New worker (338115) forked
Feb 28 10:23:22 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : Loading success.
Feb 28 10:23:22 compute-0 ceph-mon[76304]: pgmap v1738: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 130 op/s
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.599 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Successfully updated port: 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.618 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.618 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.619 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.851 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.988 243456 DEBUG nova.compute.manager [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-changed-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.989 243456 DEBUG nova.compute.manager [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Refreshing instance network info cache due to event network-changed-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:22 compute-0 nova_compute[243452]: 2026-02-28 10:23:22.989 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb 28 10:23:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb 28 10:23:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb 28 10:23:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.804 243456 DEBUG nova.network.neutron [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.828 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.828 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance network_info: |[{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.829 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.829 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Refreshing network info cache for port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.833 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start _get_guest_xml network_info=[{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.840 243456 WARNING nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.859 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.861 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.881 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.882 243456 DEBUG nova.virt.libvirt.host [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.883 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.884 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.885 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.886 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.886 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.887 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.887 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.888 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.888 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.889 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.889 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.890 243456 DEBUG nova.virt.hardware [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:23:23 compute-0 nova_compute[243452]: 2026-02-28 10:23:23.897 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.183 243456 DEBUG nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.184 243456 DEBUG oslo_concurrency.lockutils [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.185 243456 DEBUG nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.185 243456 WARNING nova.compute.manager [req-ba9157ec-4c0e-477e-9f58-8143d0e5623d req-4bff6b26-a1f8-47e6-9171-3fc4c472886e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received unexpected event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with vm_state active and task_state None.
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Feb 28 10:23:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76998934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.527 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:24 compute-0 ceph-mon[76304]: osdmap e252: 3 total, 3 up, 3 in
Feb 28 10:23:24 compute-0 ceph-mon[76304]: pgmap v1740: 305 pgs: 305 active+clean; 312 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Feb 28 10:23:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/76998934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.574 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:24 compute-0 nova_compute[243452]: 2026-02-28 10:23:24.579 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109741320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.114 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.117 243456 DEBUG nova.virt.libvirt.vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:19Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.117 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.118 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.120 243456 DEBUG nova.objects.instance [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.137 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <uuid>c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</uuid>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <name>instance-0000006e</name>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-426854263</nova:name>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:23:23</nova:creationTime>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <nova:port uuid="7d930aa0-d51d-4c56-9da2-7f8c1faf8b99">
Feb 28 10:23:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="serial">c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="uuid">c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk">
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config">
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3f:12:78"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <target dev="tap7d930aa0-d5"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/console.log" append="off"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:23:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:23:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:23:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:23:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:23:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.139 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Preparing to wait for external event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.139 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.140 243456 DEBUG nova.virt.libvirt.vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:19Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.141 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.141 243456 DEBUG nova.network.os_vif_util [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG os_vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.142 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.143 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.146 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d930aa0-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.147 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d930aa0-d5, col_values=(('external_ids', {'iface-id': '7d930aa0-d51d-4c56-9da2-7f8c1faf8b99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:12:78', 'vm-uuid': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 NetworkManager[49805]: <info>  [1772274205.1495] manager: (tap7d930aa0-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.154 243456 INFO os_vif [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5')
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.211 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.211 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.212 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:3f:12:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.212 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Using config drive
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.234 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb 28 10:23:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1109741320' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb 28 10:23:25 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb 28 10:23:25 compute-0 ovn_controller[146846]: 2026-02-28T10:23:25Z|01109|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 10:23:25 compute-0 ovn_controller[146846]: 2026-02-28T10:23:25Z|01110|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:23:25 compute-0 NetworkManager[49805]: <info>  [1772274205.6119] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 NetworkManager[49805]: <info>  [1772274205.6125] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Feb 28 10:23:25 compute-0 ovn_controller[146846]: 2026-02-28T10:23:25Z|01111|binding|INFO|Releasing lport c07252a3-f61e-4ad2-8106-a9d8b025e5a7 from this chassis (sb_readonly=0)
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 ovn_controller[146846]: 2026-02-28T10:23:25Z|01112|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.644 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.788 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Creating config drive at /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.793 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9frouqyp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.867 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updated VIF entry in instance network info cache for port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.870 243456 DEBUG nova.network.neutron [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [{"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.888 243456 DEBUG oslo_concurrency.lockutils [req-c064ce51-9a99-4315-a5a5-a72670ff0712 req-b23ecc9b-8bf9-46e9-b3c6-25f18653bfd2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.931 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9frouqyp" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.963 243456 DEBUG nova.storage.rbd_utils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:25 compute-0 nova_compute[243452]: 2026-02-28 10:23:25.970 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.110 243456 DEBUG nova.compute.manager [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.111 243456 DEBUG nova.compute.manager [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.112 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.112 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.113 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.125 243456 DEBUG oslo_concurrency.processutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.129 243456 INFO nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deleting local config drive /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4/disk.config because it was imported into RBD.
Feb 28 10:23:26 compute-0 kernel: tap7d930aa0-d5: entered promiscuous mode
Feb 28 10:23:26 compute-0 NetworkManager[49805]: <info>  [1772274206.1931] manager: (tap7d930aa0-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/467)
Feb 28 10:23:26 compute-0 ovn_controller[146846]: 2026-02-28T10:23:26Z|01113|binding|INFO|Claiming lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for this chassis.
Feb 28 10:23:26 compute-0 ovn_controller[146846]: 2026-02-28T10:23:26Z|01114|binding|INFO|7d930aa0-d51d-4c56-9da2-7f8c1faf8b99: Claiming fa:16:3e:3f:12:78 10.100.0.6
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.213 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:12:78 10.100.0.6'], port_security=['fa:16:3e:3f:12:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.214 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.216 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:26 compute-0 ovn_controller[146846]: 2026-02-28T10:23:26Z|01115|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 ovn-installed in OVS
Feb 28 10:23:26 compute-0 ovn_controller[146846]: 2026-02-28T10:23:26Z|01116|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 up in Southbound
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca85fa0f-25d8-429d-8d5c-fe84c7cb9b0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 systemd-udevd[338261]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:23:26 compute-0 systemd-machined[209480]: New machine qemu-140-instance-0000006e.
Feb 28 10:23:26 compute-0 NetworkManager[49805]: <info>  [1772274206.2541] device (tap7d930aa0-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:23:26 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Feb 28 10:23:26 compute-0 NetworkManager[49805]: <info>  [1772274206.2575] device (tap7d930aa0-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.263 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[21340cc7-d1ed-4596-b778-0420e4e07727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.268 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3b691b1c-4926-46eb-936c-1885babf8c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.298 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[44300fb3-dfb7-47d7-806c-1ee39122064d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.317 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[45b474b8-002d-45d3-a059-99c9be12414c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338273, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.331 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8918b9-c17b-4690-8caa-4ab5fded5a6f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338274, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338274, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.334 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.339 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:26.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Feb 28 10:23:26 compute-0 ceph-mon[76304]: osdmap e253: 3 total, 3 up, 3 in
Feb 28 10:23:26 compute-0 ceph-mon[76304]: pgmap v1742: 305 pgs: 305 active+clean; 325 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.797 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274206.797625, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.798 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Started (Lifecycle Event)
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.826 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.830 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274206.7983136, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.830 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Paused (Lifecycle Event)
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.858 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.862 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:26 compute-0 nova_compute[243452]: 2026-02-28 10:23:26.894 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb 28 10:23:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb 28 10:23:27 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb 28 10:23:27 compute-0 nova_compute[243452]: 2026-02-28 10:23:27.781 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274192.7799156, 44a09c36-3876-4513-8285-1d4aedc2ec68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:27 compute-0 nova_compute[243452]: 2026-02-28 10:23:27.782 243456 INFO nova.compute.manager [-] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] VM Stopped (Lifecycle Event)
Feb 28 10:23:27 compute-0 nova_compute[243452]: 2026-02-28 10:23:27.818 243456 DEBUG nova.compute.manager [None req-568bc9d8-1746-432a-bc2e-f913d7f213bb - - - - - -] [instance: 44a09c36-3876-4513-8285-1d4aedc2ec68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.250 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.251 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Processing event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.252 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 DEBUG oslo_concurrency.lockutils [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 DEBUG nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.253 243456 WARNING nova.compute.manager [req-7c044746-3401-43ae-8c06-b9a3e823265a req-0586b470-bcbc-417e-b2d2-27f9791cc02c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state building and task_state spawning.
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.254 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.263 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.264 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274208.2630408, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.264 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Resumed (Lifecycle Event)
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.270 243456 INFO nova.virt.libvirt.driver [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance spawned successfully.
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.270 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.299 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.300 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.301 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.301 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.302 243456 DEBUG nova.virt.libvirt.driver [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.368 243456 INFO nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 8.41 seconds to spawn the instance on the hypervisor.
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.368 243456 DEBUG nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.441 243456 INFO nova.compute.manager [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 9.53 seconds to build instance.
Feb 28 10:23:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 267 op/s
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.461 243456 DEBUG oslo_concurrency.lockutils [None req-f7a19d5b-4207-4190-b59b-582a9352fe53 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:28 compute-0 ceph-mon[76304]: osdmap e254: 3 total, 3 up, 3 in
Feb 28 10:23:28 compute-0 ceph-mon[76304]: pgmap v1744: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.0 MiB/s wr, 267 op/s
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.679 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.680 243456 DEBUG nova.network.neutron [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:28 compute-0 nova_compute[243452]: 2026-02-28 10:23:28.697 243456 DEBUG oslo_concurrency.lockutils [req-3718eff9-190d-485b-a3e5-a9d84865cbdb req-4f01d3fe-1a72-4bf6-8e4d-0c09a2a93167 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:29 compute-0 nova_compute[243452]: 2026-02-28 10:23:29.032 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:23:29
Feb 28 10:23:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:23:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:23:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms']
Feb 28 10:23:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:23:29 compute-0 sshd-session[338318]: Invalid user solana from 45.148.10.240 port 57638
Feb 28 10:23:29 compute-0 nova_compute[243452]: 2026-02-28 10:23:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:29 compute-0 nova_compute[243452]: 2026-02-28 10:23:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:29 compute-0 sshd-session[338318]: Connection closed by invalid user solana 45.148.10.240 port 57638 [preauth]
Feb 28 10:23:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb 28 10:23:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb 28 10:23:29 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.1 MiB/s wr, 399 op/s
Feb 28 10:23:30 compute-0 ceph-mon[76304]: osdmap e255: 3 total, 3 up, 3 in
Feb 28 10:23:30 compute-0 ceph-mon[76304]: pgmap v1746: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 1.1 MiB/s wr, 399 op/s
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:23:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.706 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.708 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:30 compute-0 nova_compute[243452]: 2026-02-28 10:23:30.708 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:23:31 compute-0 sshd-session[338320]: Received disconnect from 103.217.144.161 port 38562:11: Bye Bye [preauth]
Feb 28 10:23:31 compute-0 sshd-session[338320]: Disconnected from authenticating user root 103.217.144.161 port 38562 [preauth]
Feb 28 10:23:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 266 op/s
Feb 28 10:23:32 compute-0 ceph-mon[76304]: pgmap v1747: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 29 KiB/s wr, 266 op/s
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.960 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.984 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.985 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.986 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.986 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:23:32 compute-0 nova_compute[243452]: 2026-02-28 10:23:32.987 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.019 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.020 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.020 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.021 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.021 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846847650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.605 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/846847650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb 28 10:23:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.691 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.692 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.699 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.700 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.704 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.705 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.747 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.748 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.773 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.851 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.852 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.857 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:23:33 compute-0 nova_compute[243452]: 2026-02-28 10:23:33.858 243456 INFO nova.compute.claims [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:23:33 compute-0 sudo[338345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:23:33 compute-0 sudo[338345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:33 compute-0 sudo[338345]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:33 compute-0 sudo[338370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:23:33 compute-0 sudo[338370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:33 compute-0 ovn_controller[146846]: 2026-02-28T10:23:33Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:49:3b 10.100.0.8
Feb 28 10:23:33 compute-0 ovn_controller[146846]: 2026-02-28T10:23:33Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:49:3b 10.100.0.8
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.016 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.017 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3192MB free_disk=59.90021583996713GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.017 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.039 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:34 compute-0 sudo[338370]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 26 KiB/s wr, 187 op/s
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:23:34 compute-0 sudo[338445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:23:34 compute-0 sudo[338445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:34 compute-0 sudo[338445]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479048948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.565 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.573 243456 DEBUG nova.compute.provider_tree [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.593 243456 DEBUG nova.scheduler.client.report [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:34 compute-0 sudo[338470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:23:34 compute-0 sudo[338470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.629 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.630 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.634 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:34 compute-0 ceph-mon[76304]: osdmap e256: 3 total, 3 up, 3 in
Feb 28 10:23:34 compute-0 ceph-mon[76304]: pgmap v1749: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 26 KiB/s wr, 187 op/s
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:23:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3479048948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.737 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.737 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.757 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.761 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 027ce924-b530-4917-956c-ab66555058b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 287f71c6-9068-476b-81e7-da5069ee831f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.762 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.763 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.788 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.857 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.876968863 +0000 UTC m=+0.062486144 container create dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.894 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.896 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.897 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating image(s)
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.918 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:34 compute-0 systemd[1]: Started libpod-conmon-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope.
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.848818841 +0000 UTC m=+0.034336132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:34 compute-0 nova_compute[243452]: 2026-02-28 10:23:34.953 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.976460745 +0000 UTC m=+0.161978076 container init dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.98666767 +0000 UTC m=+0.172184921 container start dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.992316163 +0000 UTC m=+0.177833494 container attach dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:23:34 compute-0 bold_chatelet[338544]: 167 167
Feb 28 10:23:34 compute-0 systemd[1]: libpod-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope: Deactivated successfully.
Feb 28 10:23:34 compute-0 podman[338509]: 2026-02-28 10:23:34.994516696 +0000 UTC m=+0.180033977 container died dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.010 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.017 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad993d6cb4ae4baaa8e7875fd6a24ebcea24ce07ebc9713469f0bc33934587c-merged.mount: Deactivated successfully.
Feb 28 10:23:35 compute-0 podman[338509]: 2026-02-28 10:23:35.051569983 +0000 UTC m=+0.237087234 container remove dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:23:35 compute-0 systemd[1]: libpod-conmon-dafee47a345f12dc46a90dfc083fd1e6f0ba904c99e7af891958f328ad5c656f.scope: Deactivated successfully.
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.064 243456 DEBUG nova.policy [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.108 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.109 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.110 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.110 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.140 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.149 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 287f71c6-9068-476b-81e7-da5069ee831f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.202330425 +0000 UTC m=+0.037732881 container create 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:23:35 compute-0 systemd[1]: Started libpod-conmon-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope.
Feb 28 10:23:35 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.1852036 +0000 UTC m=+0.020606076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.290819909 +0000 UTC m=+0.126222415 container init 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.299212001 +0000 UTC m=+0.134614467 container start 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.307290844 +0000 UTC m=+0.142693300 container attach 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.428 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 287f71c6-9068-476b-81e7-da5069ee831f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935840045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.483 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.490 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.522 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.542 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.575 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.575 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.580 243456 DEBUG nova.objects.instance [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.596 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.597 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Ensure instance console log exists: /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.598 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.598 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.599 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/935840045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:35 compute-0 unruffled_montalcini[338677]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:23:35 compute-0 unruffled_montalcini[338677]: --> All data devices are unavailable
Feb 28 10:23:35 compute-0 systemd[1]: libpod-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope: Deactivated successfully.
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.763809291 +0000 UTC m=+0.599211747 container died 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-448be31a2eef4d91d759b42b71ff6ce105a549b01171a8e93e70c28b66af66b3-merged.mount: Deactivated successfully.
Feb 28 10:23:35 compute-0 podman[338645]: 2026-02-28 10:23:35.805542375 +0000 UTC m=+0.640944831 container remove 769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:23:35 compute-0 nova_compute[243452]: 2026-02-28 10:23:35.811 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Successfully created port: caccf6a2-afd7-48b7-b262-bb7a3178d25c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:23:35 compute-0 systemd[1]: libpod-conmon-769cd698b594d46667c6490134084fa37f0a39c2dd730f83d44c549d2c794b82.scope: Deactivated successfully.
Feb 28 10:23:35 compute-0 sudo[338470]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:35 compute-0 sudo[338787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:23:35 compute-0 sudo[338787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:35 compute-0 sudo[338787]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:35 compute-0 sudo[338812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:23:35 compute-0 sudo[338812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.194262575 +0000 UTC m=+0.047582524 container create 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:23:36 compute-0 systemd[1]: Started libpod-conmon-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope.
Feb 28 10:23:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.270971859 +0000 UTC m=+0.124291768 container init 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.176870693 +0000 UTC m=+0.030190582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.280261727 +0000 UTC m=+0.133581586 container start 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.284100548 +0000 UTC m=+0.137420417 container attach 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:23:36 compute-0 magical_wilbur[338865]: 167 167
Feb 28 10:23:36 compute-0 systemd[1]: libpod-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope: Deactivated successfully.
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.285554 +0000 UTC m=+0.138873919 container died 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 28 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c9d92d27e23debae8679b44e4517355e5ef78bcab6f263399df7c2e0696d69b-merged.mount: Deactivated successfully.
Feb 28 10:23:36 compute-0 podman[338849]: 2026-02-28 10:23:36.328688435 +0000 UTC m=+0.182008304 container remove 65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:23:36 compute-0 systemd[1]: libpod-conmon-65fca6aa369eac8f5e231215837f9f1dd6fd72cfc17ffc2f7c604c53036f3807.scope: Deactivated successfully.
Feb 28 10:23:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 345 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 228 op/s
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.525195857 +0000 UTC m=+0.062792413 container create b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:23:36 compute-0 systemd[1]: Started libpod-conmon-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope.
Feb 28 10:23:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.498488526 +0000 UTC m=+0.036085152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.619049026 +0000 UTC m=+0.156645622 container init b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.626850351 +0000 UTC m=+0.164446907 container start b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.63061567 +0000 UTC m=+0.168212246 container attach b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.675 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Successfully updated port: caccf6a2-afd7-48b7-b262-bb7a3178d25c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.692 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.693 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.693 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:23:36 compute-0 ceph-mon[76304]: pgmap v1750: 305 pgs: 305 active+clean; 345 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 228 op/s
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.860 243456 DEBUG nova.compute.manager [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-changed-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.861 243456 DEBUG nova.compute.manager [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Refreshing instance network info cache due to event network-changed-caccf6a2-afd7-48b7-b262-bb7a3178d25c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.861 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:36 compute-0 happy_liskov[338906]: {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     "0": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "devices": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "/dev/loop3"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             ],
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_name": "ceph_lv0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_size": "21470642176",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "name": "ceph_lv0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "tags": {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_name": "ceph",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.crush_device_class": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.encrypted": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.objectstore": "bluestore",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_id": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.vdo": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.with_tpm": "0"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             },
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "vg_name": "ceph_vg0"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         }
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     ],
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     "1": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "devices": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "/dev/loop4"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             ],
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_name": "ceph_lv1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_size": "21470642176",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "name": "ceph_lv1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "tags": {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_name": "ceph",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.crush_device_class": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.encrypted": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.objectstore": "bluestore",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_id": "1",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.vdo": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.with_tpm": "0"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             },
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "vg_name": "ceph_vg1"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         }
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     ],
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     "2": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "devices": [
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "/dev/loop5"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             ],
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_name": "ceph_lv2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_size": "21470642176",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "name": "ceph_lv2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "tags": {
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.cluster_name": "ceph",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.crush_device_class": "",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.encrypted": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.objectstore": "bluestore",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osd_id": "2",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.vdo": "0",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:                 "ceph.with_tpm": "0"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             },
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "type": "block",
Feb 28 10:23:36 compute-0 happy_liskov[338906]:             "vg_name": "ceph_vg2"
Feb 28 10:23:36 compute-0 happy_liskov[338906]:         }
Feb 28 10:23:36 compute-0 happy_liskov[338906]:     ]
Feb 28 10:23:36 compute-0 happy_liskov[338906]: }
Feb 28 10:23:36 compute-0 systemd[1]: libpod-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope: Deactivated successfully.
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.943743738 +0000 UTC m=+0.481340294 container died b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:23:36 compute-0 nova_compute[243452]: 2026-02-28 10:23:36.952 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8321f76a524bc7e4c9d0ffe81cf6e7efd57603aac09effa73453294dd9ac3b6f-merged.mount: Deactivated successfully.
Feb 28 10:23:36 compute-0 podman[338889]: 2026-02-28 10:23:36.997078547 +0000 UTC m=+0.534675103 container remove b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:23:37 compute-0 systemd[1]: libpod-conmon-b9b7141a63b7b04dd82b12d0f3abe349daecb8dcb3ff51deac10220716f8bc0e.scope: Deactivated successfully.
Feb 28 10:23:37 compute-0 sudo[338812]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:37 compute-0 sudo[338928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:23:37 compute-0 sudo[338928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:37 compute-0 sudo[338928]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:37 compute-0 sudo[338953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:23:37 compute-0 sudo[338953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.529436762 +0000 UTC m=+0.069848167 container create be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 10:23:37 compute-0 systemd[1]: Started libpod-conmon-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope.
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.497923793 +0000 UTC m=+0.038335218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.627912835 +0000 UTC m=+0.168324250 container init be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.639835549 +0000 UTC m=+0.180246974 container start be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.643439243 +0000 UTC m=+0.183850698 container attach be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:23:37 compute-0 nervous_poitras[339007]: 167 167
Feb 28 10:23:37 compute-0 systemd[1]: libpod-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope: Deactivated successfully.
Feb 28 10:23:37 compute-0 conmon[339007]: conmon be3494a5f590ce0063e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope/container/memory.events
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.64716343 +0000 UTC m=+0.187574845 container died be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee0f9542274fae20aa7371279d0cb5c4315750a159557935074f32c7fa1b498a-merged.mount: Deactivated successfully.
Feb 28 10:23:37 compute-0 podman[338990]: 2026-02-28 10:23:37.683147699 +0000 UTC m=+0.223559134 container remove be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_poitras, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:23:37 compute-0 systemd[1]: libpod-conmon-be3494a5f590ce0063e698c0db7fd3b644cd5e4367a5205221723d6acf4964cd.scope: Deactivated successfully.
Feb 28 10:23:37 compute-0 podman[339029]: 2026-02-28 10:23:37.845611188 +0000 UTC m=+0.051662442 container create e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:23:37 compute-0 systemd[1]: Started libpod-conmon-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope.
Feb 28 10:23:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:23:37 compute-0 podman[339029]: 2026-02-28 10:23:37.827203327 +0000 UTC m=+0.033254531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:23:37 compute-0 podman[339029]: 2026-02-28 10:23:37.941587478 +0000 UTC m=+0.147638652 container init e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:23:37 compute-0 podman[339029]: 2026-02-28 10:23:37.955518771 +0000 UTC m=+0.161569925 container start e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:23:37 compute-0 podman[339029]: 2026-02-28 10:23:37.959766303 +0000 UTC m=+0.165817487 container attach e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.011 243456 DEBUG nova.network.neutron [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.034 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.035 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance network_info: |[{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.036 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.037 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Refreshing network info cache for port caccf6a2-afd7-48b7-b262-bb7a3178d25c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start _get_guest_xml network_info=[{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.049 243456 WARNING nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.055 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.058 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.068 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.070 243456 DEBUG nova.virt.libvirt.host [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.071 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.071 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.072 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.073 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.074 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.074 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.075 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.075 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.076 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.076 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.077 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.078 243456 DEBUG nova.virt.hardware [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.083 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 404 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 227 op/s
Feb 28 10:23:38 compute-0 ceph-mon[76304]: pgmap v1751: 305 pgs: 305 active+clean; 404 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 227 op/s
Feb 28 10:23:38 compute-0 lvm[339145]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:23:38 compute-0 lvm[339144]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:23:38 compute-0 lvm[339145]: VG ceph_vg1 finished
Feb 28 10:23:38 compute-0 lvm[339144]: VG ceph_vg0 finished
Feb 28 10:23:38 compute-0 lvm[339147]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:23:38 compute-0 lvm[339147]: VG ceph_vg2 finished
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806269009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.635 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.660 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:38 compute-0 musing_perlman[339046]: {}
Feb 28 10:23:38 compute-0 nova_compute[243452]: 2026-02-28 10:23:38.668 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb 28 10:23:38 compute-0 systemd[1]: libpod-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Deactivated successfully.
Feb 28 10:23:38 compute-0 systemd[1]: libpod-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Consumed 1.046s CPU time.
Feb 28 10:23:38 compute-0 podman[339029]: 2026-02-28 10:23:38.694401787 +0000 UTC m=+0.900452941 container died e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb 28 10:23:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb 28 10:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c88b5fc105fb998fb67e968eda9ac1698ee081ebb683e2ce0acc91bd317661b-merged.mount: Deactivated successfully.
Feb 28 10:23:38 compute-0 podman[339029]: 2026-02-28 10:23:38.77347756 +0000 UTC m=+0.979528714 container remove e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:23:38 compute-0 systemd[1]: libpod-conmon-e78f309341baed0162dc81ef21d7ad24b3a0f8cb04e5774e3a67ef1678e71ed1.scope: Deactivated successfully.
Feb 28 10:23:38 compute-0 sudo[338953]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:23:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:23:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:38 compute-0 sudo[339202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:23:38 compute-0 sudo[339202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:23:38 compute-0 sudo[339202]: pam_unix(sudo:session): session closed for user root
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.037 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23880949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.243 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.246 243456 DEBUG nova.virt.libvirt.vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:34Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.246 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.248 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.250 243456 DEBUG nova.objects.instance [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.274 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <uuid>287f71c6-9068-476b-81e7-da5069ee831f</uuid>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <name>instance-0000006f</name>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-426854263</nova:name>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:23:38</nova:creationTime>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <nova:port uuid="caccf6a2-afd7-48b7-b262-bb7a3178d25c">
Feb 28 10:23:39 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <system>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="serial">287f71c6-9068-476b-81e7-da5069ee831f</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="uuid">287f71c6-9068-476b-81e7-da5069ee831f</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </system>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <os>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </os>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <features>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </features>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/287f71c6-9068-476b-81e7-da5069ee831f_disk">
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/287f71c6-9068-476b-81e7-da5069ee831f_disk.config">
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:39 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:08:b6:0f"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <target dev="tapcaccf6a2-af"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/console.log" append="off"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <video>
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </video>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:23:39 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:23:39 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:23:39 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:23:39 compute-0 nova_compute[243452]: </domain>
Feb 28 10:23:39 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.275 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Preparing to wait for external event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.276 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.277 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.278 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.280 243456 DEBUG nova.virt.libvirt.vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-membe
r'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:34Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.280 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.282 243456 DEBUG nova.network.os_vif_util [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.283 243456 DEBUG os_vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.285 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.286 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaccf6a2-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.293 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcaccf6a2-af, col_values=(('external_ids', {'iface-id': 'caccf6a2-afd7-48b7-b262-bb7a3178d25c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:b6:0f', 'vm-uuid': '287f71c6-9068-476b-81e7-da5069ee831f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:39 compute-0 NetworkManager[49805]: <info>  [1772274219.2974] manager: (tapcaccf6a2-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.295 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.304 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.305 243456 INFO os_vif [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af')
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.358 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.359 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.359 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:08:b6:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.360 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Using config drive
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.391 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3806269009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:39 compute-0 ceph-mon[76304]: osdmap e257: 3 total, 3 up, 3 in
Feb 28 10:23:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:23:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/23880949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.570 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.944 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Creating config drive at /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config
Feb 28 10:23:39 compute-0 nova_compute[243452]: 2026-02-28 10:23:39.948 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpti1l9q65 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.084 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpti1l9q65" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.122 243456 DEBUG nova.storage.rbd_utils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 287f71c6-9068-476b-81e7-da5069ee831f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.126 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config 287f71c6-9068-476b-81e7-da5069ee831f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:12:78 10.100.0.6
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:12:78 10.100.0.6
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.265 243456 DEBUG oslo_concurrency.processutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config 287f71c6-9068-476b-81e7-da5069ee831f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.266 243456 INFO nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deleting local config drive /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f/disk.config because it was imported into RBD.
Feb 28 10:23:40 compute-0 kernel: tapcaccf6a2-af: entered promiscuous mode
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|01117|binding|INFO|Claiming lport caccf6a2-afd7-48b7-b262-bb7a3178d25c for this chassis.
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|01118|binding|INFO|caccf6a2-afd7-48b7-b262-bb7a3178d25c: Claiming fa:16:3e:08:b6:0f 10.100.0.3
Feb 28 10:23:40 compute-0 systemd-udevd[339142]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:40 compute-0 NetworkManager[49805]: <info>  [1772274220.3007] manager: (tapcaccf6a2-af): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.308 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:0f 10.100.0.3'], port_security=['fa:16:3e:08:b6:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '287f71c6-9068-476b-81e7-da5069ee831f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=caccf6a2-afd7-48b7-b262-bb7a3178d25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|01119|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c ovn-installed in OVS
Feb 28 10:23:40 compute-0 ovn_controller[146846]: 2026-02-28T10:23:40Z|01120|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c up in Southbound
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.311 156681 INFO neutron.agent.ovn.metadata.agent [-] Port caccf6a2-afd7-48b7-b262-bb7a3178d25c in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.316 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:40 compute-0 NetworkManager[49805]: <info>  [1772274220.3206] device (tapcaccf6a2-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:23:40 compute-0 NetworkManager[49805]: <info>  [1772274220.3232] device (tapcaccf6a2-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.339 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[886d0297-a90c-4a31-9607-7ad7b7e7167f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 systemd-machined[209480]: New machine qemu-141-instance-0000006f.
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.355 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updated VIF entry in instance network info cache for port caccf6a2-afd7-48b7-b262-bb7a3178d25c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.356 243456 DEBUG nova.network.neutron [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [{"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:40 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-0000006f.
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.375 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9be823-b867-496b-9571-5e7f60473cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.376 243456 DEBUG oslo_concurrency.lockutils [req-ea2c648c-f5e9-4329-b772-b8b9c1af270d req-cedc1a47-0635-4898-8a31-69642dcdab88 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-287f71c6-9068-476b-81e7-da5069ee831f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.378 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05c0af0e-eb22-497b-a479-a901e19930fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.402 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[33dd5142-de2f-4db3-8ddb-5fbe4df6c7f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23bdddf5-c206-4fd9-a83d-0145bf604361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339312, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.429 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c57fa506-8838-403b-b6cb-9a24e44ec30a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339313, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339313, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.431 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.436 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.436 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.437 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:40.438 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 417 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 818 KiB/s rd, 7.6 MiB/s wr, 187 op/s
Feb 28 10:23:40 compute-0 ceph-mon[76304]: pgmap v1753: 305 pgs: 305 active+clean; 417 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 818 KiB/s rd, 7.6 MiB/s wr, 187 op/s
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.738 243456 DEBUG nova.compute.manager [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG oslo_concurrency.lockutils [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.739 243456 DEBUG nova.compute.manager [req-53cd4346-0bfa-48db-8e9f-d1e4baab5ffb req-9ec8a643-990d-4896-aaac-34836f903fb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Processing event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.991 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.9909859, 287f71c6-9068-476b-81e7-da5069ee831f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.992 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Started (Lifecycle Event)
Feb 28 10:23:40 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.995 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:40.999 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.003 243456 INFO nova.virt.libvirt.driver [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance spawned successfully.
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.004 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024429714104513985 of space, bias 1.0, pg target 0.7328914231354196 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024929289132533243 of space, bias 1.0, pg target 0.7478786739759973 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.388228013846623e-07 of space, bias 4.0, pg target 0.0008865873616615948 quantized to 16 (current 16)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:23:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.030 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.036 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.040 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.040 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.041 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.041 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.042 243456 DEBUG nova.virt.libvirt.driver [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.066 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.066 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.991178, 287f71c6-9068-476b-81e7-da5069ee831f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.067 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Paused (Lifecycle Event)
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.092 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.097 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274220.999, 287f71c6-9068-476b-81e7-da5069ee831f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.097 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Resumed (Lifecycle Event)
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.105 243456 INFO nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 6.21 seconds to spawn the instance on the hypervisor.
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.105 243456 DEBUG nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.114 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.117 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.139 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.177 243456 INFO nova.compute.manager [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 7.36 seconds to build instance.
Feb 28 10:23:41 compute-0 nova_compute[243452]: 2026-02-28 10:23:41.193 243456 DEBUG oslo_concurrency.lockutils [None req-7c0b0774-0d59-45a9-bc9c-0b109b89949c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Feb 28 10:23:42 compute-0 ceph-mon[76304]: pgmap v1754: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.948 243456 DEBUG nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.949 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.949 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.950 243456 DEBUG oslo_concurrency.lockutils [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.951 243456 DEBUG nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:42 compute-0 nova_compute[243452]: 2026-02-28 10:23:42.951 243456 WARNING nova.compute.manager [req-1e690c7f-59c7-40a1-b61f-8c510f237e56 req-09ea45cc-985c-4fb1-a514-abdc2b7efcd7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received unexpected event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with vm_state active and task_state None.
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.634 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.635 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.636 243456 INFO nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Terminating instance
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.637 243456 DEBUG nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:23:43 compute-0 kernel: tapcaccf6a2-af (unregistering): left promiscuous mode
Feb 28 10:23:43 compute-0 NetworkManager[49805]: <info>  [1772274223.6798] device (tapcaccf6a2-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 ovn_controller[146846]: 2026-02-28T10:23:43Z|01121|binding|INFO|Releasing lport caccf6a2-afd7-48b7-b262-bb7a3178d25c from this chassis (sb_readonly=0)
Feb 28 10:23:43 compute-0 ovn_controller[146846]: 2026-02-28T10:23:43Z|01122|binding|INFO|Setting lport caccf6a2-afd7-48b7-b262-bb7a3178d25c down in Southbound
Feb 28 10:23:43 compute-0 ovn_controller[146846]: 2026-02-28T10:23:43Z|01123|binding|INFO|Removing iface tapcaccf6a2-af ovn-installed in OVS
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.703 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:0f 10.100.0.3'], port_security=['fa:16:3e:08:b6:0f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '287f71c6-9068-476b-81e7-da5069ee831f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=caccf6a2-afd7-48b7-b262-bb7a3178d25c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.705 156681 INFO neutron.agent.ovn.metadata.agent [-] Port caccf6a2-afd7-48b7-b262-bb7a3178d25c in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.718793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223718826, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2027, "num_deletes": 257, "total_data_size": 3076728, "memory_usage": 3117272, "flush_reason": "Manual Compaction"}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.720 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d49c28fa-d0a7-46b5-97f0-23d5951e5a22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223732032, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3020885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35417, "largest_seqno": 37443, "table_properties": {"data_size": 3011826, "index_size": 5553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19289, "raw_average_key_size": 20, "raw_value_size": 2993322, "raw_average_value_size": 3131, "num_data_blocks": 245, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274031, "oldest_key_time": 1772274031, "file_creation_time": 1772274223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13319 microseconds, and 4726 cpu microseconds.
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:23:43 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Feb 28 10:23:43 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006f.scope: Consumed 3.347s CPU time.
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.732107) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3020885 bytes OK
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.732130) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736260) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736289) EVENT_LOG_v1 {"time_micros": 1772274223736281, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736315) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3068114, prev total WAL file size 3068114, number of live WAL files 2.
Feb 28 10:23:43 compute-0 systemd-machined[209480]: Machine qemu-141-instance-0000006f terminated.
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.737044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323531' seq:72057594037927935, type:22 .. '6C6F676D0031353033' seq:0, type:0; will stop at (end)
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2950KB)], [77(9523KB)]
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223737090, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12772448, "oldest_snapshot_seqno": -1}
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.741 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eba1a2bf-5f43-4815-964f-2516798addb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.745 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ca645480-7fc0-46b3-ae1b-3d820faf6e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.768 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d9e78c-c270-4670-b24d-49a1e270240c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71597038-8ffb-4074-9cc4-ebfc28f98e36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339369, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6548 keys, 12639637 bytes, temperature: kUnknown
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223784919, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12639637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12591031, "index_size": 31150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 164528, "raw_average_key_size": 25, "raw_value_size": 12469170, "raw_average_value_size": 1904, "num_data_blocks": 1268, "num_entries": 6548, "num_filter_entries": 6548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.785195) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12639637 bytes
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.786612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.6 rd, 263.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 7078, records dropped: 530 output_compression: NoCompression
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.786628) EVENT_LOG_v1 {"time_micros": 1772274223786620, "job": 44, "event": "compaction_finished", "compaction_time_micros": 47908, "compaction_time_cpu_micros": 24696, "output_level": 6, "num_output_files": 1, "total_output_size": 12639637, "num_input_records": 7078, "num_output_records": 6548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223786927, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274223788346, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.736961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:43.788570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c3965-3e98-44ce-9159-f8133fad299f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339370, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339370, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.798 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.805 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.805 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.806 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:43.806 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.870 243456 INFO nova.virt.libvirt.driver [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Instance destroyed successfully.
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.870 243456 DEBUG nova.objects.instance [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 287f71c6-9068-476b-81e7-da5069ee831f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.885 243456 DEBUG nova.virt.libvirt.vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=111,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-20wtorxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_
ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:41Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=287f71c6-9068-476b-81e7-da5069ee831f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.885 243456 DEBUG nova.network.os_vif_util [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "address": "fa:16:3e:08:b6:0f", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcaccf6a2-af", "ovs_interfaceid": "caccf6a2-afd7-48b7-b262-bb7a3178d25c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.886 243456 DEBUG nova.network.os_vif_util [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.886 243456 DEBUG os_vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaccf6a2-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:43 compute-0 nova_compute[243452]: 2026-02-28 10:23:43.894 243456 INFO os_vif [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:0f,bridge_name='br-int',has_traffic_filtering=True,id=caccf6a2-afd7-48b7-b262-bb7a3178d25c,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcaccf6a2-af')
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.320 243456 INFO nova.virt.libvirt.driver [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deleting instance files /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f_del
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.321 243456 INFO nova.virt.libvirt.driver [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deletion of /var/lib/nova/instances/287f71c6-9068-476b-81e7-da5069ee831f_del complete
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.380 243456 INFO nova.compute.manager [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 0.74 seconds to destroy the instance on the hypervisor.
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.380 243456 DEBUG oslo.service.loopingcall [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.381 243456 DEBUG nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:23:44 compute-0 nova_compute[243452]: 2026-02-28 10:23:44.381 243456 DEBUG nova.network.neutron [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:23:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 256 op/s
Feb 28 10:23:44 compute-0 ceph-mon[76304]: pgmap v1755: 305 pgs: 305 active+clean; 430 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 256 op/s
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.110 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.111 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.111 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-unplugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "287f71c6-9068-476b-81e7-da5069ee831f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.112 243456 DEBUG oslo_concurrency.lockutils [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.113 243456 DEBUG nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] No waiting events found dispatching network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:45 compute-0 nova_compute[243452]: 2026-02-28 10:23:45.113 243456 WARNING nova.compute.manager [req-848543a1-8f65-4544-b1c5-b4453ae241ff req-79dc5173-87aa-41a8-afff-e559e1c4f770 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received unexpected event network-vif-plugged-caccf6a2-afd7-48b7-b262-bb7a3178d25c for instance with vm_state active and task_state deleting.
Feb 28 10:23:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:23:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:23:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:23:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:23:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:23:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2259821455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.004 243456 DEBUG nova.network.neutron [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.033 243456 INFO nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Took 1.65 seconds to deallocate network for instance.
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.080 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.081 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.146 243456 DEBUG nova.compute.manager [req-f649b28a-0b9c-409b-9ac6-961d1f40dcb5 req-f5227968-cda9-4847-9f9c-0b1879628298 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Received event network-vif-deleted-caccf6a2-afd7-48b7-b262-bb7a3178d25c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.205 243456 DEBUG oslo_concurrency.processutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 429 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.2 MiB/s wr, 239 op/s
Feb 28 10:23:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083545794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:46 compute-0 ceph-mon[76304]: pgmap v1756: 305 pgs: 305 active+clean; 429 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.2 MiB/s wr, 239 op/s
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.799 243456 DEBUG oslo_concurrency.processutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.808 243456 DEBUG nova.compute.provider_tree [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.840 243456 DEBUG nova.scheduler.client.report [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.880 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.882 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.883 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.884 243456 INFO nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Terminating instance
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.885 243456 DEBUG nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:23:46 compute-0 kernel: tap415ef63e-f3 (unregistering): left promiscuous mode
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.936 243456 INFO nova.scheduler.client.report [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 287f71c6-9068-476b-81e7-da5069ee831f
Feb 28 10:23:46 compute-0 NetworkManager[49805]: <info>  [1772274226.9436] device (tap415ef63e-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:23:46 compute-0 ovn_controller[146846]: 2026-02-28T10:23:46Z|01124|binding|INFO|Releasing lport 415ef63e-f355-4d3a-a625-bee99de661ad from this chassis (sb_readonly=0)
Feb 28 10:23:46 compute-0 ovn_controller[146846]: 2026-02-28T10:23:46Z|01125|binding|INFO|Setting lport 415ef63e-f355-4d3a-a625-bee99de661ad down in Southbound
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:46 compute-0 ovn_controller[146846]: 2026-02-28T10:23:46Z|01126|binding|INFO|Removing iface tap415ef63e-f3 ovn-installed in OVS
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.965 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:49:3b 10.100.0.8'], port_security=['fa:16:3e:01:49:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '027ce924-b530-4917-956c-ab66555058b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ef1d2e7-20e3-4f13-87f1-05e5f962f01b 2651d5ee-6558-4826-a4a8-35378d050e16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1398b86-8c3c-44c8-8c2b-3601475652eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=415ef63e-f355-4d3a-a625-bee99de661ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.967 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 415ef63e-f355-4d3a-a625-bee99de661ad in datapath 17bc494c-7a5d-47b4-92b7-06dd91f131ca unbound from our chassis
Feb 28 10:23:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.968 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17bc494c-7a5d-47b4-92b7-06dd91f131ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:23:46 compute-0 nova_compute[243452]: 2026-02-28 10:23:46.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.969 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2be31753-85ab-475a-a57f-c53ea3157709]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:46.969 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca namespace which is not needed anymore
Feb 28 10:23:47 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Feb 28 10:23:47 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006d.scope: Consumed 12.305s CPU time.
Feb 28 10:23:47 compute-0 systemd-machined[209480]: Machine qemu-139-instance-0000006d terminated.
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.015 243456 DEBUG oslo_concurrency.lockutils [None req-36489e0d-9039-4cb1-99e1-b7a346dbde27 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "287f71c6-9068-476b-81e7-da5069ee831f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.119 243456 INFO nova.virt.libvirt.driver [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Instance destroyed successfully.
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.119 243456 DEBUG nova.objects.instance [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 027ce924-b530-4917-956c-ab66555058b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.136 243456 DEBUG nova.virt.libvirt.vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-603155240',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=109,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9wU8YY0rYLr+FY6zo9NLT62DKW4Etx3PelXGTWlh9GbKMYWy7V2liaoU6eFO4swsQU7hFfqtc0g/T8jGlbB4RbgIdTi8pbxGSqSx76HlaAvQoV4NMlMuhy55HuiLv7yw==',key_name='tempest-TestSecurityGroupsBasicOps-1207522769',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-ag0oqgtp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:22Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=027ce924-b530-4917-956c-ab66555058b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.137 243456 DEBUG nova.network.os_vif_util [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.138 243456 DEBUG nova.network.os_vif_util [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.139 243456 DEBUG os_vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:23:47 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : haproxy version is 2.8.14-c23fe91
Feb 28 10:23:47 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [NOTICE]   (338113) : path to executable is /usr/sbin/haproxy
Feb 28 10:23:47 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [WARNING]  (338113) : Exiting Master process...
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [ALERT]    (338113) : Current worker (338115) exited with code 143 (Terminated)
Feb 28 10:23:47 compute-0 neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca[338109]: [WARNING]  (338113) : All workers exited. Exiting... (0)
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.142 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap415ef63e-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:47 compute-0 systemd[1]: libpod-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope: Deactivated successfully.
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:47 compute-0 podman[339446]: 2026-02-28 10:23:47.152193879 +0000 UTC m=+0.067765787 container died 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.152 243456 INFO os_vif [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:49:3b,bridge_name='br-int',has_traffic_filtering=True,id=415ef63e-f355-4d3a-a625-bee99de661ad,network=Network(17bc494c-7a5d-47b4-92b7-06dd91f131ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap415ef63e-f3')
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.206 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.207 243456 DEBUG oslo_concurrency.lockutils [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.208 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.208 243456 DEBUG nova.compute.manager [req-a135118d-718a-4072-8db3-10055611bb9d req-99d0b2bf-1749-4aca-b3e7-5485473bcf57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-unplugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba657c3e9cfc3e8a46c61136017aeb0fbea7cdb4ea5ad38579fb28430e14c1d-merged.mount: Deactivated successfully.
Feb 28 10:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df-userdata-shm.mount: Deactivated successfully.
Feb 28 10:23:47 compute-0 podman[339446]: 2026-02-28 10:23:47.242555267 +0000 UTC m=+0.158127165 container cleanup 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:23:47 compute-0 systemd[1]: libpod-conmon-6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df.scope: Deactivated successfully.
Feb 28 10:23:47 compute-0 podman[339503]: 2026-02-28 10:23:47.314466483 +0000 UTC m=+0.053565597 container remove 6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba5b8f7-acb9-48bf-b990-5cf4875f3797]: (4, ('Sat Feb 28 10:23:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca (6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df)\n6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df\nSat Feb 28 10:23:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca (6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df)\n6b4baf745cd0d9428cc2857a44cde2842bd2ec2f310ed9e7fc439cd7e20533df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.323 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42c0b989-a549-41a1-8c7e-2328881d8cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.325 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17bc494c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:47 compute-0 kernel: tap17bc494c-70: left promiscuous mode
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.335 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21e46b6e-55c4-4026-9912-df01b6fd195c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.357 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04124746-b45b-4c7c-a524-d59d58419e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.358 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a01b9-f84a-4097-972f-9c3ea88a6353]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[080919f1-aa1c-493a-9e59-6dc155bb73b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570913, 'reachable_time': 39527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339519, 'error': None, 'target': 'ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d17bc494c\x2d7a5d\x2d47b4\x2d92b7\x2d06dd91f131ca.mount: Deactivated successfully.
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.379 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17bc494c-7a5d-47b4-92b7-06dd91f131ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:23:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:47.380 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[8f711cb3-b74d-4e0d-98ef-ff4cde7775be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.496 243456 INFO nova.virt.libvirt.driver [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deleting instance files /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0_del
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.497 243456 INFO nova.virt.libvirt.driver [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deletion of /var/lib/nova/instances/027ce924-b530-4917-956c-ab66555058b0_del complete
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 INFO nova.compute.manager [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 0.69 seconds to destroy the instance on the hypervisor.
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 DEBUG oslo.service.loopingcall [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.575 243456 DEBUG nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:23:47 compute-0 nova_compute[243452]: 2026-02-28 10:23:47.576 243456 DEBUG nova.network.neutron [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:23:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4083545794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.359 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.360 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.361 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.362 243456 INFO nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Terminating instance
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.363 243456 DEBUG nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:23:48 compute-0 kernel: tap7d930aa0-d5 (unregistering): left promiscuous mode
Feb 28 10:23:48 compute-0 NetworkManager[49805]: <info>  [1772274228.4154] device (tap7d930aa0-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:23:48 compute-0 ovn_controller[146846]: 2026-02-28T10:23:48Z|01127|binding|INFO|Releasing lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 from this chassis (sb_readonly=0)
Feb 28 10:23:48 compute-0 ovn_controller[146846]: 2026-02-28T10:23:48Z|01128|binding|INFO|Setting lport 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 down in Southbound
Feb 28 10:23:48 compute-0 ovn_controller[146846]: 2026-02-28T10:23:48Z|01129|binding|INFO|Removing iface tap7d930aa0-d5 ovn-installed in OVS
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.423 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.431 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:12:78 10.100.0.6'], port_security=['fa:16:3e:3f:12:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.432 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.434 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 28 10:23:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 354 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Feb 28 10:23:48 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 12.274s CPU time.
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.454 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d3d1b2-c378-49d5-b065-074a62a6ef89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 systemd-machined[209480]: Machine qemu-140-instance-0000006e terminated.
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.473 243456 DEBUG nova.compute.manager [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.473 243456 DEBUG nova.compute.manager [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing instance network info cache due to event network-changed-415ef63e-f355-4d3a-a625-bee99de661ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.474 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Refreshing network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.484 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[27a5f58a-c400-4183-8f74-1a50b1085339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.489 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[81c29694-abde-492d-927c-025c4674a865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 podman[339525]: 2026-02-28 10:23:48.506485888 +0000 UTC m=+0.059598641 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.518 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[3580e880-dea3-4876-b944-ccd40326fcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 podman[339520]: 2026-02-28 10:23:48.539764618 +0000 UTC m=+0.094859769 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.538 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b665ad14-475d-4e69-b6c1-e47d9e220b38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339568, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.554 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de678895-58b0-460f-adc4-ec5c68d03a25]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339570, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339570, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.557 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:48.566 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.603 243456 INFO nova.virt.libvirt.driver [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Instance destroyed successfully.
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.604 243456 DEBUG nova.objects.instance [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.634 243456 DEBUG nova.virt.libvirt.vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-426854263',display_name='tempest-ServersTestJSON-server-426854263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-426854263',id=110,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-6jiu7deq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_
ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:23:28Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.634 243456 DEBUG nova.network.os_vif_util [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "address": "fa:16:3e:3f:12:78", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d930aa0-d5", "ovs_interfaceid": "7d930aa0-d51d-4c56-9da2-7f8c1faf8b99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.635 243456 DEBUG nova.network.os_vif_util [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.635 243456 DEBUG os_vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.637 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d930aa0-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.641 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.644 243456 INFO os_vif [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:12:78,bridge_name='br-int',has_traffic_filtering=True,id=7d930aa0-d51d-4c56-9da2-7f8c1faf8b99,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d930aa0-d5')
Feb 28 10:23:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:48 compute-0 ceph-mon[76304]: pgmap v1757: 305 pgs: 305 active+clean; 354 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 205 op/s
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.908 243456 INFO nova.virt.libvirt.driver [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deleting instance files /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_del
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.909 243456 INFO nova.virt.libvirt.driver [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deletion of /var/lib/nova/instances/c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4_del complete
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.965 243456 INFO nova.compute.manager [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 0.60 seconds to destroy the instance on the hypervisor.
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG oslo.service.loopingcall [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:23:48 compute-0 nova_compute[243452]: 2026-02-28 10:23:48.967 243456 DEBUG nova.network.neutron [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.516 243456 DEBUG nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.517 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "027ce924-b530-4917-956c-ab66555058b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.517 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 DEBUG oslo_concurrency.lockutils [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 DEBUG nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] No waiting events found dispatching network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.518 243456 WARNING nova.compute.manager [req-1d0512f1-f663-444c-aa3d-435698a253e9 req-dcbcf5c2-4f25-4f02-aadc-36a28e2cb4b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received unexpected event network-vif-plugged-415ef63e-f355-4d3a-a625-bee99de661ad for instance with vm_state active and task_state deleting.
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.677 243456 DEBUG nova.network.neutron [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.679 243456 DEBUG nova.network.neutron [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.708 243456 INFO nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Took 0.74 seconds to deallocate network for instance.
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.718 243456 INFO nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] Took 2.14 seconds to deallocate network for instance.
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.811 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.811 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.844 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updated VIF entry in instance network info cache for port 415ef63e-f355-4d3a-a625-bee99de661ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.845 243456 DEBUG nova.network.neutron [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [{"id": "415ef63e-f355-4d3a-a625-bee99de661ad", "address": "fa:16:3e:01:49:3b", "network": {"id": "17bc494c-7a5d-47b4-92b7-06dd91f131ca", "bridge": "br-int", "label": "tempest-network-smoke--669117396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap415ef63e-f3", "ovs_interfaceid": "415ef63e-f355-4d3a-a625-bee99de661ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.891 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:49 compute-0 nova_compute[243452]: 2026-02-28 10:23:49.892 243456 DEBUG oslo_concurrency.processutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.045 243456 DEBUG oslo_concurrency.lockutils [req-ed4a5ac2-72de-4f8d-ad95-97f2632a564a req-28be3677-2090-41ea-b867-8f9d512929db 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-027ce924-b530-4917-956c-ab66555058b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139069359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.445 243456 DEBUG oslo_concurrency.processutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.451 243456 DEBUG nova.compute.provider_tree [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 265 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Feb 28 10:23:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4139069359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.498 243456 DEBUG nova.scheduler.client.report [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.521094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230521165, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 327, "num_deletes": 251, "total_data_size": 130205, "memory_usage": 136312, "flush_reason": "Manual Compaction"}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230524670, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 129028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37444, "largest_seqno": 37770, "table_properties": {"data_size": 126938, "index_size": 254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5276, "raw_average_key_size": 18, "raw_value_size": 122872, "raw_average_value_size": 429, "num_data_blocks": 12, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274224, "oldest_key_time": 1772274224, "file_creation_time": 1772274230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 3607 microseconds, and 1156 cpu microseconds.
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.524717) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 129028 bytes OK
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.524734) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526286) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526302) EVENT_LOG_v1 {"time_micros": 1772274230526297, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526320) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 127929, prev total WAL file size 127929, number of live WAL files 2.
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(126KB)], [80(12MB)]
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230526863, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12768665, "oldest_snapshot_seqno": -1}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6325 keys, 11164506 bytes, temperature: kUnknown
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230583182, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11164506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11118662, "index_size": 28902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 160585, "raw_average_key_size": 25, "raw_value_size": 11001969, "raw_average_value_size": 1739, "num_data_blocks": 1165, "num_entries": 6325, "num_filter_entries": 6325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274230, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.583499) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11164506 bytes
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.584984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.4 rd, 197.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.1 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(185.5) write-amplify(86.5) OK, records in: 6834, records dropped: 509 output_compression: NoCompression
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.585014) EVENT_LOG_v1 {"time_micros": 1772274230585000, "job": 46, "event": "compaction_finished", "compaction_time_micros": 56406, "compaction_time_cpu_micros": 34816, "output_level": 6, "num_output_files": 1, "total_output_size": 11164506, "num_input_records": 6834, "num_output_records": 6325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230585214, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274230586860, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.526614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.586997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:23:50.587019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.719 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.723 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.765 243456 INFO nova.scheduler.client.report [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4
Feb 28 10:23:50 compute-0 nova_compute[243452]: 2026-02-28 10:23:50.789 243456 DEBUG oslo_concurrency.processutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.153 243456 DEBUG oslo_concurrency.lockutils [None req-11dd0051-ba56-484e-8afc-fc9382073f2c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.201 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.202 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.202 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.203 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.203 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 WARNING nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-unplugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state deleted and task_state None.
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.204 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.205 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.205 243456 DEBUG oslo_concurrency.lockutils [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.206 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] No waiting events found dispatching network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.207 243456 WARNING nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received unexpected event network-vif-plugged-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 for instance with vm_state deleted and task_state None.
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.207 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Received event network-vif-deleted-415ef63e-f355-4d3a-a625-bee99de661ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.208 243456 INFO nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Neutron deleted interface 415ef63e-f355-4d3a-a625-bee99de661ad; detaching it from the instance and deleting it from the info cache
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.208 243456 DEBUG nova.network.neutron [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.235 243456 DEBUG nova.compute.manager [req-3fb60387-2833-4a3b-9817-10b8844adaa9 req-37541e0f-c7ea-4c9b-9d76-74a34cc3750a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 027ce924-b530-4917-956c-ab66555058b0] Detach interface failed, port_id=415ef63e-f355-4d3a-a625-bee99de661ad, reason: Instance 027ce924-b530-4917-956c-ab66555058b0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:23:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397732749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.419 243456 DEBUG oslo_concurrency.processutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.426 243456 DEBUG nova.compute.provider_tree [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.445 243456 DEBUG nova.scheduler.client.report [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.474 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.517 243456 INFO nova.scheduler.client.report [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 027ce924-b530-4917-956c-ab66555058b0
Feb 28 10:23:51 compute-0 ceph-mon[76304]: pgmap v1758: 305 pgs: 305 active+clean; 265 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Feb 28 10:23:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3397732749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.608 243456 DEBUG oslo_concurrency.lockutils [None req-d8752ad8-a9b1-4259-a768-a394f679901e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "027ce924-b530-4917-956c-ab66555058b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:51 compute-0 nova_compute[243452]: 2026-02-28 10:23:51.681 243456 DEBUG nova.compute.manager [req-cc62da77-a5d3-48ab-bf0b-7688e55ab0a4 req-e5a4d357-2c71-49a9-b878-70a92999b43d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Received event network-vif-deleted-7d930aa0-d51d-4c56-9da2-7f8c1faf8b99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 187 op/s
Feb 28 10:23:52 compute-0 ceph-mon[76304]: pgmap v1759: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 187 op/s
Feb 28 10:23:53 compute-0 nova_compute[243452]: 2026-02-28 10:23:53.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 10:23:54 compute-0 ceph-mon[76304]: pgmap v1760: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.614 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.615 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.636 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.718 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.719 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.729 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.730 243456 INFO nova.compute.claims [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:23:54 compute-0 nova_compute[243452]: 2026-02-28 10:23:54.874 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:23:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660935122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.427 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.436 243456 DEBUG nova.compute.provider_tree [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.456 243456 DEBUG nova.scheduler.client.report [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.484 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.485 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.545 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.546 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:23:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3660935122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.570 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.590 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.682 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.685 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.687 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating image(s)
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.721 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.752 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.782 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.787 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.842 243456 DEBUG nova.policy [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.878 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.880 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.881 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.881 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.908 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:55 compute-0 nova_compute[243452]: 2026-02-28 10:23:55.913 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.201 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.261 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.337 243456 DEBUG nova.objects.instance [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.356 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.356 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Ensure instance console log exists: /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.357 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:56 compute-0 nova_compute[243452]: 2026-02-28 10:23:56.407 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Successfully created port: bf4d947d-49ab-4681-beda-1384a9a5c61b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:23:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 10:23:56 compute-0 ceph-mon[76304]: pgmap v1761: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 35 KiB/s wr, 99 op/s
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.114 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Successfully updated port: bf4d947d-49ab-4681-beda-1384a9a5c61b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.139 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.240 243456 DEBUG nova.compute.manager [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-changed-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.240 243456 DEBUG nova.compute.manager [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Refreshing instance network info cache due to event network-changed-bf4d947d-49ab-4681-beda-1384a9a5c61b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.241 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.317 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:23:57 compute-0 ovn_controller[146846]: 2026-02-28T10:23:57Z|01130|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:57 compute-0 ovn_controller[146846]: 2026-02-28T10:23:57Z|01131|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:23:57 compute-0 nova_compute[243452]: 2026-02-28 10:23:57.534 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.862 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:23:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.142 243456 DEBUG nova.network.neutron [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.160 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance network_info: |[{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.161 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Refreshing network info cache for port bf4d947d-49ab-4681-beda-1384a9a5c61b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.165 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start _get_guest_xml network_info=[{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.171 243456 WARNING nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.178 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.179 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.189 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.190 243456 DEBUG nova.virt.libvirt.host [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.191 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.191 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.192 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.193 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.194 243456 DEBUG nova.virt.hardware [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.198 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 251 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 718 KiB/s wr, 71 op/s
Feb 28 10:23:58 compute-0 ceph-mon[76304]: pgmap v1762: 305 pgs: 305 active+clean; 251 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 718 KiB/s wr, 71 op/s
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:23:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972460946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.818 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.855 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.863 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.906 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274223.8678675, 287f71c6-9068-476b-81e7-da5069ee831f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.908 243456 INFO nova.compute.manager [-] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] VM Stopped (Lifecycle Event)
Feb 28 10:23:58 compute-0 nova_compute[243452]: 2026-02-28 10:23:58.939 243456 DEBUG nova.compute.manager [None req-051dd1fe-2a93-45c3-aa29-598995f8b201 - - - - - -] [instance: 287f71c6-9068-476b-81e7-da5069ee831f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:23:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255167679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.447 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.449 243456 DEBUG nova.virt.libvirt.vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:55Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.450 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.451 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.453 243456 DEBUG nova.objects.instance [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.478 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <uuid>bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</uuid>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <name>instance-00000070</name>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-1318487154</nova:name>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:23:58</nova:creationTime>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <nova:port uuid="bf4d947d-49ab-4681-beda-1384a9a5c61b">
Feb 28 10:23:59 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="serial">bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="uuid">bf3b0f76-98db-4eb3-8e3c-ab7b525cd129</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk">
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config">
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:23:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:fc:bc:6a"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <target dev="tapbf4d947d-49"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/console.log" append="off"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:23:59 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:23:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:23:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:23:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:23:59 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.480 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Preparing to wait for external event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.481 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.482 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.482 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.484 243456 DEBUG nova.virt.libvirt.vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:23:55Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.484 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.485 243456 DEBUG nova.network.os_vif_util [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.487 243456 DEBUG os_vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.488 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.489 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.489 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.498 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf4d947d-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.499 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf4d947d-49, col_values=(('external_ids', {'iface-id': 'bf4d947d-49ab-4681-beda-1384a9a5c61b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:bc:6a', 'vm-uuid': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:59 compute-0 NetworkManager[49805]: <info>  [1772274239.5024] manager: (tapbf4d947d-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.508 243456 INFO os_vif [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49')
Feb 28 10:23:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1972460946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1255167679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.590 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.590 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.591 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:fc:bc:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.591 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Using config drive
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.619 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.909 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updated VIF entry in instance network info cache for port bf4d947d-49ab-4681-beda-1384a9a5c61b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.910 243456 DEBUG nova.network.neutron [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [{"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:23:59 compute-0 nova_compute[243452]: 2026-02-28 10:23:59.936 243456 DEBUG oslo_concurrency.lockutils [req-7e209b65-7475-42a0-a525-b8b76ad255f1 req-64123ad4-b74d-41d6-9570-2f076e0a16ce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.415 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Creating config drive at /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.420 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9_7ry731 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Feb 28 10:24:00 compute-0 ceph-mon[76304]: pgmap v1763: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.577 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9_7ry731" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.625 243456 DEBUG nova.storage.rbd_utils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.630 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.773 243456 DEBUG oslo_concurrency.processutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.774 243456 INFO nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deleting local config drive /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129/disk.config because it was imported into RBD.
Feb 28 10:24:00 compute-0 kernel: tapbf4d947d-49: entered promiscuous mode
Feb 28 10:24:00 compute-0 NetworkManager[49805]: <info>  [1772274240.8208] manager: (tapbf4d947d-49): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Feb 28 10:24:00 compute-0 ovn_controller[146846]: 2026-02-28T10:24:00Z|01132|binding|INFO|Claiming lport bf4d947d-49ab-4681-beda-1384a9a5c61b for this chassis.
Feb 28 10:24:00 compute-0 ovn_controller[146846]: 2026-02-28T10:24:00Z|01133|binding|INFO|bf4d947d-49ab-4681-beda-1384a9a5c61b: Claiming fa:16:3e:fc:bc:6a 10.100.0.7
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.832 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:bc:6a 10.100.0.7'], port_security=['fa:16:3e:fc:bc:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bf4d947d-49ab-4681-beda-1384a9a5c61b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.833 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bf4d947d-49ab-4681-beda-1384a9a5c61b in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.834 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:00 compute-0 ovn_controller[146846]: 2026-02-28T10:24:00Z|01134|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b ovn-installed in OVS
Feb 28 10:24:00 compute-0 ovn_controller[146846]: 2026-02-28T10:24:00Z|01135|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b up in Southbound
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1134ae-9b5c-464e-bd1f-53f65f65fd38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 systemd-machined[209480]: New machine qemu-142-instance-00000070.
Feb 28 10:24:00 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000070.
Feb 28 10:24:00 compute-0 systemd-udevd[339973]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.886 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0852f699-1619-4883-b230-93a09a460a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.890 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ea57b2-2d29-4cc6-a1a4-82edcce90df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 NetworkManager[49805]: <info>  [1772274240.8969] device (tapbf4d947d-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:24:00 compute-0 NetworkManager[49805]: <info>  [1772274240.8975] device (tapbf4d947d-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.914 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bd90e1-7b7f-410f-b07b-93c26e91f86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.927 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6209bf0f-1325-4b9f-9c0c-790feebfb45c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339980, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.948 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f5437c4d-1971-4d98-9ece-3ff8cd030bb6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339984, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339984, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.951 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.954 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.956 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:00.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.958 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.960 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:00.960 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.227 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274241.22599, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.228 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Started (Lifecycle Event)
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.254 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.261 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274241.2261636, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.261 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Paused (Lifecycle Event)
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.286 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:24:01 compute-0 nova_compute[243452]: 2026-02-28 10:24:01.321 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:24:02 compute-0 nova_compute[243452]: 2026-02-28 10:24:02.116 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274227.115624, 027ce924-b530-4917-956c-ab66555058b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:02 compute-0 nova_compute[243452]: 2026-02-28 10:24:02.116 243456 INFO nova.compute.manager [-] [instance: 027ce924-b530-4917-956c-ab66555058b0] VM Stopped (Lifecycle Event)
Feb 28 10:24:02 compute-0 nova_compute[243452]: 2026-02-28 10:24:02.140 243456 DEBUG nova.compute.manager [None req-2067aee3-e77a-4d6d-88e5-ca258b2566db - - - - - -] [instance: 027ce924-b530-4917-956c-ab66555058b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Feb 28 10:24:02 compute-0 ceph-mon[76304]: pgmap v1764: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.074 243456 DEBUG nova.compute.manager [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.074 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.075 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.075 243456 DEBUG oslo_concurrency.lockutils [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.076 243456 DEBUG nova.compute.manager [req-c18696f0-dc41-48da-ae5e-4b548143af5e req-b0408efa-37d8-4be4-8ccc-b7f2c6ce04e3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Processing event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.077 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.083 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274243.0829666, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.083 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Resumed (Lifecycle Event)
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.087 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.091 243456 INFO nova.virt.libvirt.driver [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance spawned successfully.
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.092 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.109 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.118 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.127 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.128 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.129 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.130 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.131 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.132 243456 DEBUG nova.virt.libvirt.driver [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.139 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.182 243456 INFO nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 7.50 seconds to spawn the instance on the hypervisor.
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.183 243456 DEBUG nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.256 243456 INFO nova.compute.manager [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 8.56 seconds to build instance.
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.276 243456 DEBUG oslo_concurrency.lockutils [None req-ce6bff3b-29b7-4c7b-80c1-d2d60c61b94f 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.602 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274228.6016119, c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.603 243456 INFO nova.compute.manager [-] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] VM Stopped (Lifecycle Event)
Feb 28 10:24:03 compute-0 nova_compute[243452]: 2026-02-28 10:24:03.629 243456 DEBUG nova.compute.manager [None req-11772288-6a13-4928-af58-e83dff88a8e5 - - - - - -] [instance: c6e0d94e-1817-4c1d-b1d0-744cfd5b72d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.720304) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243720349, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 362, "num_deletes": 250, "total_data_size": 189790, "memory_usage": 196432, "flush_reason": "Manual Compaction"}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243724053, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 187401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37771, "largest_seqno": 38132, "table_properties": {"data_size": 185216, "index_size": 349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6057, "raw_average_key_size": 20, "raw_value_size": 180852, "raw_average_value_size": 604, "num_data_blocks": 16, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274231, "oldest_key_time": 1772274231, "file_creation_time": 1772274243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3879 microseconds, and 1962 cpu microseconds.
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.724178) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 187401 bytes OK
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.724209) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726029) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726062) EVENT_LOG_v1 {"time_micros": 1772274243726051, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 187390, prev total WAL file size 187390, number of live WAL files 2.
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(183KB)], [83(10MB)]
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243726919, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11351907, "oldest_snapshot_seqno": -1}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6117 keys, 8012493 bytes, temperature: kUnknown
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243790619, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8012493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7972864, "index_size": 23263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156457, "raw_average_key_size": 25, "raw_value_size": 7864583, "raw_average_value_size": 1285, "num_data_blocks": 929, "num_entries": 6117, "num_filter_entries": 6117, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.790943) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8012493 bytes
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.792145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.9 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.6 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(103.3) write-amplify(42.8) OK, records in: 6624, records dropped: 507 output_compression: NoCompression
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.792171) EVENT_LOG_v1 {"time_micros": 1772274243792159, "job": 48, "event": "compaction_finished", "compaction_time_micros": 63805, "compaction_time_cpu_micros": 36212, "output_level": 6, "num_output_files": 1, "total_output_size": 8012493, "num_input_records": 6624, "num_output_records": 6117, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243792334, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274243793528, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.726680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:03 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:24:03.793639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:24:04 compute-0 nova_compute[243452]: 2026-02-28 10:24:04.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:24:04 compute-0 nova_compute[243452]: 2026-02-28 10:24:04.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:04 compute-0 ceph-mon[76304]: pgmap v1765: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.156 243456 DEBUG nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.157 243456 DEBUG oslo_concurrency.lockutils [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.158 243456 DEBUG nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.158 243456 WARNING nova.compute.manager [req-2390defe-8f89-40de-b1ee-b0075e0fa7e4 req-e83c2266-7f76-4de4-957f-5ad17201ed21 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received unexpected event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with vm_state active and task_state None.
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.707 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.709 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.710 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.712 243456 INFO nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Terminating instance
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.713 243456 DEBUG nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:24:05 compute-0 kernel: tapbf4d947d-49 (unregistering): left promiscuous mode
Feb 28 10:24:05 compute-0 NetworkManager[49805]: <info>  [1772274245.7588] device (tapbf4d947d-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:24:05 compute-0 ovn_controller[146846]: 2026-02-28T10:24:05Z|01136|binding|INFO|Releasing lport bf4d947d-49ab-4681-beda-1384a9a5c61b from this chassis (sb_readonly=0)
Feb 28 10:24:05 compute-0 ovn_controller[146846]: 2026-02-28T10:24:05Z|01137|binding|INFO|Setting lport bf4d947d-49ab-4681-beda-1384a9a5c61b down in Southbound
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 ovn_controller[146846]: 2026-02-28T10:24:05Z|01138|binding|INFO|Removing iface tapbf4d947d-49 ovn-installed in OVS
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.778 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:bc:6a 10.100.0.7'], port_security=['fa:16:3e:fc:bc:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf3b0f76-98db-4eb3-8e3c-ab7b525cd129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=bf4d947d-49ab-4681-beda-1384a9a5c61b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.780 156681 INFO neutron.agent.ovn.metadata.agent [-] Port bf4d947d-49ab-4681-beda-1384a9a5c61b in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.781 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.794 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c18dae5-0664-47e3-8cc4-fc96b008adc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 28 10:24:05 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000070.scope: Consumed 3.085s CPU time.
Feb 28 10:24:05 compute-0 systemd-machined[209480]: Machine qemu-142-instance-00000070 terminated.
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9e26ac81-b46e-4aaf-a376-c0a222cf4450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.825 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[09922337-29e3-4e64-a879-ff3780009804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.847 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d227ad52-77c8-4d1e-90fe-6706d90f6702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.864 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[586e8d95-285b-4558-9715-993670ae4047]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340039, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d261842-e58a-4717-947a-dff33c1bfafb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340040, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340040, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.889 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.895 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:05.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.957 243456 INFO nova.virt.libvirt.driver [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Instance destroyed successfully.
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.958 243456 DEBUG nova.objects.instance [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.974 243456 DEBUG nova.virt.libvirt.vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:23:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1318487154',display_name='tempest-ServersTestJSON-server-1318487154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1318487154',id=112,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-puzx8x3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:04Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=bf3b0f76-98db-4eb3-8e3c-ab7b525cd129,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.975 243456 DEBUG nova.network.os_vif_util [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "address": "fa:16:3e:fc:bc:6a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf4d947d-49", "ovs_interfaceid": "bf4d947d-49ab-4681-beda-1384a9a5c61b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.977 243456 DEBUG nova.network.os_vif_util [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.977 243456 DEBUG os_vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.982 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf4d947d-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:05 compute-0 nova_compute[243452]: 2026-02-28 10:24:05.989 243456 INFO os_vif [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:bc:6a,bridge_name='br-int',has_traffic_filtering=True,id=bf4d947d-49ab-4681-beda-1384a9a5c61b,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf4d947d-49')
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.300 243456 INFO nova.virt.libvirt.driver [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deleting instance files /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_del
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.302 243456 INFO nova.virt.libvirt.driver [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deletion of /var/lib/nova/instances/bf3b0f76-98db-4eb3-8e3c-ab7b525cd129_del complete
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.358 243456 INFO nova.compute.manager [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.359 243456 DEBUG oslo.service.loopingcall [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.359 243456 DEBUG nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:24:06 compute-0 nova_compute[243452]: 2026-02-28 10:24:06.360 243456 DEBUG nova.network.neutron [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:24:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 10:24:06 compute-0 ceph-mon[76304]: pgmap v1766: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.426 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-unplugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.427 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG oslo_concurrency.lockutils [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 DEBUG nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] No waiting events found dispatching network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.428 243456 WARNING nova.compute.manager [req-70cbcc6f-da5d-4d84-a654-f931c2f50ba1 req-f1e8a8df-4e86-4212-8be9-9796c10edcc0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received unexpected event network-vif-plugged-bf4d947d-49ab-4681-beda-1384a9a5c61b for instance with vm_state active and task_state deleting.
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.664 243456 DEBUG nova.network.neutron [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.683 243456 INFO nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Took 1.32 seconds to deallocate network for instance.
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.723 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.724 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:07 compute-0 nova_compute[243452]: 2026-02-28 10:24:07.789 243456 DEBUG oslo_concurrency.processutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104028041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.443 243456 DEBUG oslo_concurrency.processutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.450 243456 DEBUG nova.compute.provider_tree [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 260 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 28 10:24:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/104028041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.475 243456 DEBUG nova.scheduler.client.report [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.518 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.569 243456 INFO nova.scheduler.client.report [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance bf3b0f76-98db-4eb3-8e3c-ab7b525cd129
Feb 28 10:24:08 compute-0 nova_compute[243452]: 2026-02-28 10:24:08.636 243456 DEBUG oslo_concurrency.lockutils [None req-b435d848-b132-4bab-89c4-c0f28b407e4c 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "bf3b0f76-98db-4eb3-8e3c-ab7b525cd129" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:09 compute-0 nova_compute[243452]: 2026-02-28 10:24:09.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:09 compute-0 ceph-mon[76304]: pgmap v1767: 305 pgs: 305 active+clean; 260 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 28 10:24:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Feb 28 10:24:10 compute-0 nova_compute[243452]: 2026-02-28 10:24:10.687 243456 DEBUG nova.compute.manager [req-3626c2c9-d809-47f5-989b-c65d43e82340 req-d0223bdf-db5a-40be-bf71-afea8bbe6e06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Received event network-vif-deleted-bf4d947d-49ab-4681-beda-1384a9a5c61b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:10 compute-0 nova_compute[243452]: 2026-02-28 10:24:10.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.345 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.363 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.427 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.427 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.437 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.437 243456 INFO nova.compute.claims [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:24:11 compute-0 ceph-mon[76304]: pgmap v1768: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.582 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.927 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.927 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:11 compute-0 nova_compute[243452]: 2026-02-28 10:24:11.945 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.007 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030360493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.134 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.142 243456 DEBUG nova.compute.provider_tree [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.165 243456 DEBUG nova.scheduler.client.report [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.198 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.199 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.202 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.210 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.210 243456 INFO nova.compute.claims [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.274 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.275 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.317 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.350 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.437 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.482 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.485 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.486 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating image(s)
Feb 28 10:24:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1030360493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:12 compute-0 ceph-mon[76304]: pgmap v1769: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.517 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.539 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.560 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.563 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.607 243456 DEBUG nova.policy [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '30797c1e587b4532a2e148d0cdcd9c51', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.662 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.664 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.664 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.665 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.690 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.694 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:12 compute-0 nova_compute[243452]: 2026-02-28 10:24:12.931 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330502058' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.014 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] resizing rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.047 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.053 243456 DEBUG nova.compute.provider_tree [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.095 243456 DEBUG nova.scheduler.client.report [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.103 243456 DEBUG nova.objects.instance [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'migration_context' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.116 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.116 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Ensure instance console log exists: /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.117 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.118 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.119 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.171 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.172 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.196 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.217 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.310 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.313 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.313 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating image(s)
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.343 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.376 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.413 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.419 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2330502058' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.538 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.539 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.540 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.540 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.567 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.572 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.777 243456 DEBUG nova.policy [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.842 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:13 compute-0 nova_compute[243452]: 2026-02-28 10:24:13.912 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.006 243456 DEBUG nova.objects.instance [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.022 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.022 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Ensure instance console log exists: /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.023 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:14 compute-0 nova_compute[243452]: 2026-02-28 10:24:14.073 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Successfully created port: c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:24:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 10:24:14 compute-0 ceph-mon[76304]: pgmap v1770: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.465 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Successfully updated port: c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.483 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.483 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquired lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.484 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.530 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Successfully created port: cc564e14-816b-4b92-877a-db0b2ddd0285 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.656 243456 DEBUG nova.compute.manager [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-changed-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.657 243456 DEBUG nova.compute.manager [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Refreshing instance network info cache due to event network-changed-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.657 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.751 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:24:15 compute-0 nova_compute[243452]: 2026-02-28 10:24:15.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 287 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Feb 28 10:24:16 compute-0 ceph-mon[76304]: pgmap v1771: 305 pgs: 305 active+clean; 287 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Feb 28 10:24:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:24:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 34K writes, 134K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 40.78 MB, 0.07 MB/s
                                           Interval WAL: 10K writes, 4124 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:24:16 compute-0 nova_compute[243452]: 2026-02-28 10:24:16.973 243456 DEBUG nova.network.neutron [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:16 compute-0 nova_compute[243452]: 2026-02-28 10:24:16.998 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Releasing lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:16 compute-0 nova_compute[243452]: 2026-02-28 10:24:16.999 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance network_info: |[{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.000 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Successfully updated port: cc564e14-816b-4b92-877a-db0b2ddd0285 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.001 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.001 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Refreshing network info cache for port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.006 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start _get_guest_xml network_info=[{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.011 243456 WARNING nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.018 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.018 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.host [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.022 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.023 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.024 243456 DEBUG nova.virt.hardware [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.027 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.072 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.248 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:24:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075688166' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.620 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.645 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.651 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4075688166' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.857 243456 DEBUG nova.compute.manager [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.858 243456 DEBUG nova.compute.manager [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:24:17 compute-0 nova_compute[243452]: 2026-02-28 10:24:17.858 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306797905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.153 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.154 243456 DEBUG nova.virt.libvirt.vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-ServersTestJSON-server-1172761758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:12Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.154 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.155 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.156 243456 DEBUG nova.objects.instance [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.192 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <uuid>08a4cd5e-f711-44d2-b17e-c1941be22e85</uuid>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <name>instance-00000071</name>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:name>tempest-ServersTestJSON-server-1172761758</nova:name>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:24:17</nova:creationTime>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:user uuid="30797c1e587b4532a2e148d0cdcd9c51">tempest-ServersTestJSON-973249707-project-member</nova:user>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:project uuid="1c3cb5cdfa53405bb0387af43e804bd1">tempest-ServersTestJSON-973249707</nova:project>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <nova:port uuid="c6f8ec31-2557-49a7-9d41-fc0bc3a16a37">
Feb 28 10:24:18 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="serial">08a4cd5e-f711-44d2-b17e-c1941be22e85</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="uuid">08a4cd5e-f711-44d2-b17e-c1941be22e85</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/08a4cd5e-f711-44d2-b17e-c1941be22e85_disk">
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config">
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:98:3b:3a"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <target dev="tapc6f8ec31-25"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/console.log" append="off"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:24:18 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:24:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:24:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:24:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:24:18 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Preparing to wait for external event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.193 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.194 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.195 243456 DEBUG nova.virt.libvirt.vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-ServersTestJSON-server-1172761758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:12Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.195 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.196 243456 DEBUG nova.network.os_vif_util [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.196 243456 DEBUG os_vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.206 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6f8ec31-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.207 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6f8ec31-25, col_values=(('external_ids', {'iface-id': 'c6f8ec31-2557-49a7-9d41-fc0bc3a16a37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:3b:3a', 'vm-uuid': '08a4cd5e-f711-44d2-b17e-c1941be22e85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:18 compute-0 NetworkManager[49805]: <info>  [1772274258.2095] manager: (tapc6f8ec31-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.215 243456 INFO os_vif [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25')
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.283 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.284 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.284 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] No VIF found with MAC fa:16:3e:98:3b:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.286 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Using config drive
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.313 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.329 243456 DEBUG nova.network.neutron [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.359 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.360 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance network_info: |[{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.361 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.361 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.364 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start _get_guest_xml network_info=[{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.370 243456 WARNING nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.376 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.376 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.384 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.385 243456 DEBUG nova.virt.libvirt.host [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.385 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.386 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.387 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.388 243456 DEBUG nova.virt.hardware [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.391 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.432 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updated VIF entry in instance network info cache for port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.433 243456 DEBUG nova.network.neutron [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [{"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.449 243456 DEBUG oslo_concurrency.lockutils [req-b96105e5-dcc2-4e9a-af68-901d8f16938a req-690a6c37-2c62-4076-ad8f-5848751026cf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-08a4cd5e-f711-44d2-b17e-c1941be22e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 325 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 803 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.665 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Creating config drive at /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config
Feb 28 10:24:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2306797905' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:18 compute-0 ceph-mon[76304]: pgmap v1772: 305 pgs: 305 active+clean; 325 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 803 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.671 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq6ttu7xm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.819 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq6ttu7xm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.856 243456 DEBUG nova.storage.rbd_utils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] rbd image 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.862 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389415991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.955 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.977 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:18 compute-0 nova_compute[243452]: 2026-02-28 10:24:18.982 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.012 243456 DEBUG oslo_concurrency.processutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config 08a4cd5e-f711-44d2-b17e-c1941be22e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.012 243456 INFO nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deleting local config drive /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85/disk.config because it was imported into RBD.
Feb 28 10:24:19 compute-0 kernel: tapc6f8ec31-25: entered promiscuous mode
Feb 28 10:24:19 compute-0 NetworkManager[49805]: <info>  [1772274259.0604] manager: (tapc6f8ec31-25): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 ovn_controller[146846]: 2026-02-28T10:24:19Z|01139|binding|INFO|Claiming lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for this chassis.
Feb 28 10:24:19 compute-0 ovn_controller[146846]: 2026-02-28T10:24:19Z|01140|binding|INFO|c6f8ec31-2557-49a7-9d41-fc0bc3a16a37: Claiming fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 10:24:19 compute-0 ovn_controller[146846]: 2026-02-28T10:24:19Z|01141|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 ovn-installed in OVS
Feb 28 10:24:19 compute-0 ovn_controller[146846]: 2026-02-28T10:24:19Z|01142|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 up in Southbound
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.071 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:3b:3a 10.100.0.3'], port_security=['fa:16:3e:98:3b:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08a4cd5e-f711-44d2-b17e-c1941be22e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.074 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b bound to our chassis
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.077 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e4957d-a8ea-4310-9617-2c35c3edd25e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 systemd-udevd[340666]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:24:19 compute-0 systemd-machined[209480]: New machine qemu-143-instance-00000071.
Feb 28 10:24:19 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000071.
Feb 28 10:24:19 compute-0 NetworkManager[49805]: <info>  [1772274259.1141] device (tapc6f8ec31-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:24:19 compute-0 NetworkManager[49805]: <info>  [1772274259.1148] device (tapc6f8ec31-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.125 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e38e74-8458-480a-8c55-64d68c745b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.128 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e83279e5-a4f8-4040-ac15-cc3ccffc71b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 podman[340640]: 2026-02-28 10:24:19.144107676 +0000 UTC m=+0.081391001 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.156 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[669a6933-b316-42ba-8685-42d80e8d65ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.173 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[464efda9-128b-4afd-a742-69592030530f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340714, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.186 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d58fa1b-7de8-4c78-8b8f-922adf0f837d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340719, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340719, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.188 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.191 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.191 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:19.192 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:19 compute-0 podman[340637]: 2026-02-28 10:24:19.203939663 +0000 UTC m=+0.141174886 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260223)
Feb 28 10:24:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606098825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.529 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.531 243456 DEBUG nova.virt.libvirt.vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:13Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.532 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.534 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.536 243456 DEBUG nova.objects.instance [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.553 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <uuid>a1bf329d-ed65-4cbc-99cb-e49716d1b24d</uuid>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <name>instance-00000072</name>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982</nova:name>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:24:18</nova:creationTime>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <nova:port uuid="cc564e14-816b-4b92-877a-db0b2ddd0285">
Feb 28 10:24:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="serial">a1bf329d-ed65-4cbc-99cb-e49716d1b24d</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="uuid">a1bf329d-ed65-4cbc-99cb-e49716d1b24d</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk">
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config">
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:cc:45:f7"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <target dev="tapcc564e14-81"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/console.log" append="off"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:24:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:24:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:24:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:24:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:24:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.557 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Preparing to wait for external event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.558 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.559 243456 DEBUG nova.virt.libvirt.vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:13Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.559 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.560 243456 DEBUG nova.network.os_vif_util [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.560 243456 DEBUG os_vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.561 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.561 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.562 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.564 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc564e14-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.565 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc564e14-81, col_values=(('external_ids', {'iface-id': 'cc564e14-816b-4b92-877a-db0b2ddd0285', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:45:f7', 'vm-uuid': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 NetworkManager[49805]: <info>  [1772274259.5671] manager: (tapcc564e14-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.571 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.572 243456 INFO os_vif [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81')
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.621 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:cc:45:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.623 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Using config drive
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.642 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2389415991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3606098825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.945 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274259.945472, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.946 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Started (Lifecycle Event)
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.967 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.974 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274259.9456332, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.974 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Paused (Lifecycle Event)
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.995 243456 DEBUG nova.compute.manager [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.996 243456 DEBUG oslo_concurrency.lockutils [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.997 243456 DEBUG nova.compute.manager [req-5af582db-6873-4ce7-970c-2ebe15c3fe6d req-cd96bed2-fa77-48f6-8665-d8ecb273131d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Processing event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:24:19 compute-0 nova_compute[243452]: 2026-02-28 10:24:19.997 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.001 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.003 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.005 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance spawned successfully.
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.006 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.008 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274260.0008912, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.008 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Resumed (Lifecycle Event)
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.033 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.038 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.039 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.039 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.040 243456 DEBUG nova.virt.libvirt.driver [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.046 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.079 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.133 243456 INFO nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 7.65 seconds to spawn the instance on the hypervisor.
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.133 243456 DEBUG nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.198 243456 INFO nova.compute.manager [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 8.79 seconds to build instance.
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.214 243456 DEBUG oslo_concurrency.lockutils [None req-e8c76b4e-8ae1-46be-9ac2-7bf20850dcdd 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.275 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.276 243456 DEBUG nova.network.neutron [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.282 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Creating config drive at /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.286 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeqjokvvf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.327 243456 DEBUG oslo_concurrency.lockutils [req-4c0a22ed-1623-4290-a42b-0f9202d48b80 req-5e348653-d604-4baa-b814-29f893d1dff8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.384 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.385 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.434 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeqjokvvf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.473 243456 DEBUG nova.storage.rbd_utils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.479 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.635 243456 DEBUG oslo_concurrency.processutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config a1bf329d-ed65-4cbc-99cb-e49716d1b24d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.637 243456 INFO nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deleting local config drive /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d/disk.config because it was imported into RBD.
Feb 28 10:24:20 compute-0 ceph-mon[76304]: pgmap v1773: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Feb 28 10:24:20 compute-0 NetworkManager[49805]: <info>  [1772274260.7021] manager: (tapcc564e14-81): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Feb 28 10:24:20 compute-0 systemd-udevd[340688]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:24:20 compute-0 kernel: tapcc564e14-81: entered promiscuous mode
Feb 28 10:24:20 compute-0 ovn_controller[146846]: 2026-02-28T10:24:20Z|01143|binding|INFO|Claiming lport cc564e14-816b-4b92-877a-db0b2ddd0285 for this chassis.
Feb 28 10:24:20 compute-0 ovn_controller[146846]: 2026-02-28T10:24:20Z|01144|binding|INFO|cc564e14-816b-4b92-877a-db0b2ddd0285: Claiming fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 10:24:20 compute-0 NetworkManager[49805]: <info>  [1772274260.7146] device (tapcc564e14-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:20 compute-0 NetworkManager[49805]: <info>  [1772274260.7150] device (tapcc564e14-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.717 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:45:f7 10.100.0.10'], port_security=['fa:16:3e:cc:45:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b899b7a-4953-43a9-ae1c-b9897419d094 ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cc564e14-816b-4b92-877a-db0b2ddd0285) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.719 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cc564e14-816b-4b92-877a-db0b2ddd0285 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c bound to our chassis
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.720 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.728 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecd4de0-a56d-401a-a0e8-bc097b7a01ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.729 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ee12748-b1 in ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.733 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ee12748-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4315042d-99c0-4090-9f42-8e211b0cc30d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb0b879-7c90-4a32-bb15-374e3c4ef6b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 systemd-machined[209480]: New machine qemu-144-instance-00000072.
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.744 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1b13acc8-6c71-4e89-b045-eb3846a3e20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_controller[146846]: 2026-02-28T10:24:20Z|01145|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 ovn-installed in OVS
Feb 28 10:24:20 compute-0 ovn_controller[146846]: 2026-02-28T10:24:20Z|01146|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 up in Southbound
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:20 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000072.
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3c26c8bc-1b4a-4d4f-93af-d0090d11cdaf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.789 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1c7eee-6fb8-434c-aeb0-3d64904421d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 NetworkManager[49805]: <info>  [1772274260.7965] manager: (tap9ee12748-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.797 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42fa24d5-4910-4b3a-a1e2-ccd17e5f5229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[464404cb-ac0c-4861-bf7c-338f13735124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e801d697-cd6d-4846-80dc-81ff9263e1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 NetworkManager[49805]: <info>  [1772274260.8588] device (tap9ee12748-b0): carrier: link connected
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.865 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b38dcb64-fa94-4acf-a8a5-94c829e429df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.886 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a2a77de0-73c2-414b-92b2-c84398f896fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340870, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.902 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b06d8999-49bf-44e3-abe2-705d9b2ea764]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:c4ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576814, 'tstamp': 576814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340871, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.918 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49faae44-643e-4abc-9bb9-392d457d0c27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340872, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:20.949 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6537c8-79bd-4074-888d-c1270323149c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.955 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274245.9551432, bf3b0f76-98db-4eb3-8e3c-ab7b525cd129 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.957 243456 INFO nova.compute.manager [-] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] VM Stopped (Lifecycle Event)
Feb 28 10:24:20 compute-0 nova_compute[243452]: 2026-02-28 10:24:20.982 243456 DEBUG nova.compute.manager [None req-9ca06cca-9ff8-4fec-81da-028b9dfe4020 - - - - - -] [instance: bf3b0f76-98db-4eb3-8e3c-ab7b525cd129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[01b0e9ca-71c7-4376-b28f-851b2d461146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.012 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:21 compute-0 NetworkManager[49805]: <info>  [1772274261.0148] manager: (tap9ee12748-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Feb 28 10:24:21 compute-0 kernel: tap9ee12748-b0: entered promiscuous mode
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.017 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:21 compute-0 ovn_controller[146846]: 2026-02-28T10:24:21Z|01147|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.019 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a61e7b0f-4cde-4ed7-a99d-aebfda2379d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.021 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/9ee12748-b368-477a-aacb-62375ce0b51c.pid.haproxy
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:24:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:21.022 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'env', 'PROCESS_TAG=haproxy-9ee12748-b368-477a-aacb-62375ce0b51c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ee12748-b368-477a-aacb-62375ce0b51c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:24:21 compute-0 podman[340904]: 2026-02-28 10:24:21.388136265 +0000 UTC m=+0.049487689 container create a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 28 10:24:21 compute-0 systemd[1]: Started libpod-conmon-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope.
Feb 28 10:24:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efddfc5a8b916af41ee280cb5601c71a26e54fba8c82938781e38e62b83cab90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:21 compute-0 podman[340904]: 2026-02-28 10:24:21.359951412 +0000 UTC m=+0.021302696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:24:21 compute-0 podman[340904]: 2026-02-28 10:24:21.459003121 +0000 UTC m=+0.120354415 container init a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:24:21 compute-0 podman[340904]: 2026-02-28 10:24:21.463588233 +0000 UTC m=+0.124939527 container start a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:24:21 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : New worker (340967) forked
Feb 28 10:24:21 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : Loading success.
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.521 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274261.5207772, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.521 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Started (Lifecycle Event)
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.550 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.553 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274261.5212047, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.553 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Paused (Lifecycle Event)
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.576 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.578 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:24:21 compute-0 nova_compute[243452]: 2026-02-28 10:24:21.607 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.082 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.083 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 WARNING nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state active and task_state None.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.084 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.085 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Processing event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.086 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.087 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.087 243456 DEBUG oslo_concurrency.lockutils [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.088 243456 DEBUG nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.088 243456 WARNING nova.compute.manager [req-6dc3ba5c-eb66-41ce-ad43-658b5d9694ca req-a6dab470-ab4e-43dc-a2ea-bde44d360ebe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received unexpected event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with vm_state building and task_state spawning.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.089 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.093 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274262.093343, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.094 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Resumed (Lifecycle Event)
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.096 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.099 243456 INFO nova.virt.libvirt.driver [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance spawned successfully.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.100 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.126 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.132 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.137 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.138 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.139 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.139 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.140 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.140 243456 DEBUG nova.virt.libvirt.driver [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.172 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.209 243456 INFO nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 8.90 seconds to spawn the instance on the hypervisor.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.210 243456 DEBUG nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.278 243456 INFO nova.compute.manager [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 10.28 seconds to build instance.
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.298 243456 DEBUG oslo_concurrency.lockutils [None req-8c0214cc-8e4e-41b7-a7b8-5a00194e3ebf f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.517 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.518 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.518 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.522 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.522 243456 DEBUG nova.objects.instance [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'flavor' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:22 compute-0 nova_compute[243452]: 2026-02-28 10:24:22.547 243456 DEBUG nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:24:22 compute-0 ceph-mon[76304]: pgmap v1774: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 10:24:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:24:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.6 total, 600.0 interval
                                           Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9816 writes, 37K keys, 9816 commit groups, 1.0 writes per commit group, ingest: 40.46 MB, 0.07 MB/s
                                           Interval WAL: 9816 writes, 3930 syncs, 2.50 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:24:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:24 compute-0 nova_compute[243452]: 2026-02-28 10:24:24.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 10:24:24 compute-0 ceph-mon[76304]: pgmap v1775: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Feb 28 10:24:24 compute-0 nova_compute[243452]: 2026-02-28 10:24:24.566 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:25 compute-0 nova_compute[243452]: 2026-02-28 10:24:25.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:26 compute-0 nova_compute[243452]: 2026-02-28 10:24:26.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Feb 28 10:24:26 compute-0 ceph-mon[76304]: pgmap v1776: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Feb 28 10:24:27 compute-0 nova_compute[243452]: 2026-02-28 10:24:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:27 compute-0 ovn_controller[146846]: 2026-02-28T10:24:27Z|01148|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:27 compute-0 ovn_controller[146846]: 2026-02-28T10:24:27Z|01149|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:24:27 compute-0 NetworkManager[49805]: <info>  [1772274267.6843] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Feb 28 10:24:27 compute-0 NetworkManager[49805]: <info>  [1772274267.6852] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Feb 28 10:24:27 compute-0 ovn_controller[146846]: 2026-02-28T10:24:27Z|01150|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:27 compute-0 ovn_controller[146846]: 2026-02-28T10:24:27Z|01151|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:24:27 compute-0 nova_compute[243452]: 2026-02-28 10:24:27.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:27 compute-0 nova_compute[243452]: 2026-02-28 10:24:27.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:28 compute-0 nova_compute[243452]: 2026-02-28 10:24:28.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 151 op/s
Feb 28 10:24:28 compute-0 ceph-mon[76304]: pgmap v1777: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 151 op/s
Feb 28 10:24:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:24:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 28K writes, 112K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.78 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7128 writes, 28K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 30.63 MB, 0.05 MB/s
                                           Interval WAL: 7128 writes, 2882 syncs, 2.47 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:24:29
Feb 28 10:24:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:24:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:24:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups']
Feb 28 10:24:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG nova.compute.manager [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG nova.compute.manager [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.464 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.465 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:24:29 compute-0 nova_compute[243452]: 2026-02-28 10:24:29.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:30 compute-0 nova_compute[243452]: 2026-02-28 10:24:30.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:24:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:30.388 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Feb 28 10:24:30 compute-0 ceph-mon[76304]: pgmap v1778: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:24:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:24:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.531 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.532 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.532 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.533 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.627 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.628 243456 DEBUG nova.network.neutron [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:31 compute-0 nova_compute[243452]: 2026-02-28 10:24:31.649 243456 DEBUG oslo_concurrency.lockutils [req-d180d92f-17c9-4f67-9449-b08af764b441 req-77daf279-5822-4224-905f-770b6a793b9e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:31 compute-0 ovn_controller[146846]: 2026-02-28T10:24:31Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 10:24:31 compute-0 ovn_controller[146846]: 2026-02-28T10:24:31Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:3b:3a 10.100.0.3
Feb 28 10:24:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 155 op/s
Feb 28 10:24:32 compute-0 ceph-mon[76304]: pgmap v1779: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 155 op/s
Feb 28 10:24:32 compute-0 nova_compute[243452]: 2026-02-28 10:24:32.594 243456 DEBUG nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:24:32 compute-0 nova_compute[243452]: 2026-02-28 10:24:32.677 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [{"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:32 compute-0 nova_compute[243452]: 2026-02-28 10:24:32.697 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-0d4ce277-1bbb-4926-a7ee-30f5df57fff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:32 compute-0 nova_compute[243452]: 2026-02-28 10:24:32.697 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.336 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.337 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.337 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:33 compute-0 sshd-session[340978]: Received disconnect from 103.67.78.202 port 58324:11: Bye Bye [preauth]
Feb 28 10:24:33 compute-0 sshd-session[340978]: Disconnected from authenticating user root 103.67.78.202 port 58324 [preauth]
Feb 28 10:24:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696210564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:33 compute-0 nova_compute[243452]: 2026-02-28 10:24:33.937 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2696210564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.055 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.056 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.061 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.062 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.067 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.068 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:34 compute-0 ovn_controller[146846]: 2026-02-28T10:24:34Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 10:24:34 compute-0 ovn_controller[146846]: 2026-02-28T10:24:34Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:45:f7 10.100.0.10
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.253 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3121MB free_disk=59.886876408942044GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.254 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 08a4cd5e-f711-44d2-b17e-c1941be22e85 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a1bf329d-ed65-4cbc-99cb-e49716d1b24d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.361 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.381 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.381 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.400 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.438 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:24:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 141 op/s
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.514 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:34 compute-0 kernel: tapc6f8ec31-25 (unregistering): left promiscuous mode
Feb 28 10:24:34 compute-0 NetworkManager[49805]: <info>  [1772274274.9174] device (tapc6f8ec31-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:24:34 compute-0 ovn_controller[146846]: 2026-02-28T10:24:34Z|01152|binding|INFO|Releasing lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 from this chassis (sb_readonly=0)
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:34 compute-0 ovn_controller[146846]: 2026-02-28T10:24:34Z|01153|binding|INFO|Setting lport c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 down in Southbound
Feb 28 10:24:34 compute-0 ovn_controller[146846]: 2026-02-28T10:24:34Z|01154|binding|INFO|Removing iface tapc6f8ec31-25 ovn-installed in OVS
Feb 28 10:24:34 compute-0 nova_compute[243452]: 2026-02-28 10:24:34.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.931 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:3b:3a 10.100.0.3'], port_security=['fa:16:3e:98:3b:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08a4cd5e-f711-44d2-b17e-c1941be22e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.935 156681 INFO neutron.agent.ovn.metadata.agent [-] Port c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.939 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ec4804c-4a13-485a-9300-db6edf74473b
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.958 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0ac927-be9e-4bfd-8864-1efd68ee58d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:34 compute-0 ceph-mon[76304]: pgmap v1780: 305 pgs: 305 active+clean; 337 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 141 op/s
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.987 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[25c41995-767b-404d-b5bf-5aa0768d9f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:34 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 28 10:24:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:34.991 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[225a52a1-90e7-4ee1-bb6d-df583fac4a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:34 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000071.scope: Consumed 12.805s CPU time.
Feb 28 10:24:34 compute-0 systemd-machined[209480]: Machine qemu-143-instance-00000071 terminated.
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.015 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[567b1769-2b61-401d-ad96-f64db1f36455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.034 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e52479-390e-401e-9d19-adca178a387f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ec4804c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:71:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565509, 'reachable_time': 21702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341034, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8574ec-a36d-4472-9aa8-716e81633f8b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565520, 'tstamp': 565520}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341035, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ec4804c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565523, 'tstamp': 565523}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341035, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.051 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.057 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ec4804c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.059 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ec4804c-40, col_values=(('external_ids', {'iface-id': 'e8c59be7-5922-4be3-b143-f702a4828758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:35.059 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970285244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.087 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.094 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.112 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.137 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.138 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.162 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.610 243456 INFO nova.virt.libvirt.driver [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance shutdown successfully after 13 seconds.
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance destroyed successfully.
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.619 243456 DEBUG nova.objects.instance [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.642 243456 DEBUG nova.compute.manager [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:35 compute-0 nova_compute[243452]: 2026-02-28 10:24:35.716 243456 DEBUG oslo_concurrency.lockutils [None req-a9e428e4-4d79-40cc-a7e1-5c57b1ecc77e 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2970285244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 379 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.651 243456 DEBUG nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.652 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.652 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 DEBUG oslo_concurrency.lockutils [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 DEBUG nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:36 compute-0 nova_compute[243452]: 2026-02-28 10:24:36.653 243456 WARNING nova.compute.manager [req-25260319-06fb-4718-ba35-2fcab9fc1793 req-64c08a41-0a5c-431c-bf8d-3276e81de865 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-unplugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state stopped and task_state None.
Feb 28 10:24:36 compute-0 ceph-mon[76304]: pgmap v1781: 305 pgs: 305 active+clean; 379 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Feb 28 10:24:37 compute-0 ovn_controller[146846]: 2026-02-28T10:24:37Z|01155|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:37 compute-0 ovn_controller[146846]: 2026-02-28T10:24:37Z|01156|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:24:37 compute-0 nova_compute[243452]: 2026-02-28 10:24:37.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.311 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.312 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.312 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.313 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.313 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.314 243456 INFO nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Terminating instance
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.315 243456 DEBUG nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.321 243456 INFO nova.virt.libvirt.driver [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Instance destroyed successfully.
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.322 243456 DEBUG nova.objects.instance [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 08a4cd5e-f711-44d2-b17e-c1941be22e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.339 243456 DEBUG nova.virt.libvirt.vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1172761758',display_name='tempest-Íñstáñcé-1386512012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1172761758',id=113,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-0100k6jb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',own
er_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:36Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=08a4cd5e-f711-44d2-b17e-c1941be22e85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.339 243456 DEBUG nova.network.os_vif_util [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "address": "fa:16:3e:98:3b:3a", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6f8ec31-25", "ovs_interfaceid": "c6f8ec31-2557-49a7-9d41-fc0bc3a16a37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.340 243456 DEBUG nova.network.os_vif_util [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.340 243456 DEBUG os_vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.342 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.342 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6f8ec31-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.348 243456 INFO os_vif [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:3b:3a,bridge_name='br-int',has_traffic_filtering=True,id=c6f8ec31-2557-49a7-9d41-fc0bc3a16a37,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6f8ec31-25')
Feb 28 10:24:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 391 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Feb 28 10:24:38 compute-0 ceph-mon[76304]: pgmap v1782: 305 pgs: 305 active+clean; 391 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.633 243456 INFO nova.virt.libvirt.driver [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deleting instance files /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85_del
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.634 243456 INFO nova.virt.libvirt.driver [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deletion of /var/lib/nova/instances/08a4cd5e-f711-44d2-b17e-c1941be22e85_del complete
Feb 28 10:24:38 compute-0 ovn_controller[146846]: 2026-02-28T10:24:38Z|01157|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:38 compute-0 ovn_controller[146846]: 2026-02-28T10:24:38Z|01158|binding|INFO|Releasing lport e8c59be7-5922-4be3-b143-f702a4828758 from this chassis (sb_readonly=0)
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.685 243456 INFO nova.compute.manager [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 0.37 seconds to destroy the instance on the hypervisor.
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG oslo.service.loopingcall [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.686 243456 DEBUG nova.network.neutron [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:24:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.837 243456 DEBUG nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.837 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG oslo_concurrency.lockutils [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 DEBUG nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] No waiting events found dispatching network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:38 compute-0 nova_compute[243452]: 2026-02-28 10:24:38.838 243456 WARNING nova.compute.manager [req-bce335b3-6e01-4c81-884e-bb906065772b req-21a3c35f-8435-4fbe-ba55-3a5cb2d8d980 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received unexpected event network-vif-plugged-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 for instance with vm_state stopped and task_state deleting.
Feb 28 10:24:38 compute-0 sudo[341070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:24:38 compute-0 sudo[341070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:38 compute-0 sudo[341070]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:39 compute-0 sudo[341095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:24:39 compute-0 sudo[341095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:39 compute-0 sudo[341095]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:24:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:24:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:24:39 compute-0 sudo[341151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:24:39 compute-0 sudo[341151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:39 compute-0 sudo[341151]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:39 compute-0 sudo[341176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:24:39 compute-0 sudo[341176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.823 243456 DEBUG nova.network.neutron [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.843 243456 INFO nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Took 1.16 seconds to deallocate network for instance.
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.893 243456 DEBUG nova.compute.manager [req-d5c130c9-c0e0-4854-ae24-ece0e3e6f26f req-f79b957a-711e-4431-8d76-2feb02b28690 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Received event network-vif-deleted-c6f8ec31-2557-49a7-9d41-fc0bc3a16a37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:39 compute-0 nova_compute[243452]: 2026-02-28 10:24:39.899 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:39.999167384 +0000 UTC m=+0.052374112 container create e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.004 243456 DEBUG oslo_concurrency.processutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:40 compute-0 sshd-session[341049]: Received disconnect from 103.67.78.202 port 58326:11: Bye Bye [preauth]
Feb 28 10:24:40 compute-0 sshd-session[341049]: Disconnected from authenticating user root 103.67.78.202 port 58326 [preauth]
Feb 28 10:24:40 compute-0 systemd[1]: Started libpod-conmon-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope.
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:39.971763513 +0000 UTC m=+0.024970271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:40.088329098 +0000 UTC m=+0.141535896 container init e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:40.094596729 +0000 UTC m=+0.147803457 container start e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:40.099019417 +0000 UTC m=+0.152226135 container attach e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:24:40 compute-0 systemd[1]: libpod-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope: Deactivated successfully.
Feb 28 10:24:40 compute-0 conmon[341230]: conmon e34002af4abbcc209ad8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope/container/memory.events
Feb 28 10:24:40 compute-0 elastic_hofstadter[341230]: 167 167
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:40.104782033 +0000 UTC m=+0.157988741 container died e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4cd07d4b5426adccfeb537d21adccd22d47a2dfd9fd269aad24062bcf03164d-merged.mount: Deactivated successfully.
Feb 28 10:24:40 compute-0 podman[341212]: 2026-02-28 10:24:40.153333104 +0000 UTC m=+0.206539792 container remove e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hofstadter, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 28 10:24:40 compute-0 systemd[1]: libpod-conmon-e34002af4abbcc209ad8d1c2197652698157e55b0e3ebecc5d00bc59f3d1577e.scope: Deactivated successfully.
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.304740504 +0000 UTC m=+0.036928506 container create 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:24:40 compute-0 systemd[1]: Started libpod-conmon-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope.
Feb 28 10:24:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.289032751 +0000 UTC m=+0.021220763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.390788128 +0000 UTC m=+0.122976150 container init 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.398325996 +0000 UTC m=+0.130514038 container start 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.402029492 +0000 UTC m=+0.134217524 container attach 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 28 10:24:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 338 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Feb 28 10:24:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750158308' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.558 243456 DEBUG oslo_concurrency.processutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.566 243456 DEBUG nova.compute.provider_tree [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.589 243456 DEBUG nova.scheduler.client.report [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.620 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.642 243456 INFO nova.scheduler.client.report [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 08a4cd5e-f711-44d2-b17e-c1941be22e85
Feb 28 10:24:40 compute-0 ceph-mon[76304]: pgmap v1783: 305 pgs: 305 active+clean; 338 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Feb 28 10:24:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/750158308' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:40 compute-0 nova_compute[243452]: 2026-02-28 10:24:40.708 243456 DEBUG oslo_concurrency.lockutils [None req-5fd7f2e3-af13-4895-9d03-48deae8945a0 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "08a4cd5e-f711-44d2-b17e-c1941be22e85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:40 compute-0 upbeat_leavitt[341289]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:24:40 compute-0 upbeat_leavitt[341289]: --> All data devices are unavailable
Feb 28 10:24:40 compute-0 systemd[1]: libpod-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope: Deactivated successfully.
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.83603625 +0000 UTC m=+0.568224322 container died 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-589dba552164f7f764ece3ab8ca01114d8e6e3b4931765664cc7bd86b8820b23-merged.mount: Deactivated successfully.
Feb 28 10:24:40 compute-0 podman[341273]: 2026-02-28 10:24:40.878984789 +0000 UTC m=+0.611172821 container remove 3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:24:40 compute-0 systemd[1]: libpod-conmon-3b011bc23fde6a63ef69d0c4dffdc3805602120e5da99941f7ce2aba17a2ef5e.scope: Deactivated successfully.
Feb 28 10:24:40 compute-0 sudo[341176]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:40 compute-0 sudo[341323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:24:40 compute-0 sudo[341323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:40 compute-0 sudo[341323]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018071058773471251 of space, bias 1.0, pg target 0.5421317632041376 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00249289725762509 of space, bias 1.0, pg target 0.747869177287527 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.377670962743603e-07 of space, bias 4.0, pg target 0.0008853205155292325 quantized to 16 (current 16)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:24:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:24:41 compute-0 sudo[341348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:24:41 compute-0 sudo[341348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.372694479 +0000 UTC m=+0.061246068 container create 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:24:41 compute-0 systemd[1]: Started libpod-conmon-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope.
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.348966455 +0000 UTC m=+0.037518064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.486833984 +0000 UTC m=+0.175385623 container init 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.49396114 +0000 UTC m=+0.182512749 container start 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:24:41 compute-0 nice_elgamal[341404]: 167 167
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.499670424 +0000 UTC m=+0.188222023 container attach 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 28 10:24:41 compute-0 systemd[1]: libpod-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope: Deactivated successfully.
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.500652583 +0000 UTC m=+0.189204182 container died 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd66e5a32188e322eff9e4ead8a1cc3c850418ba36536f11f190e197909075a4-merged.mount: Deactivated successfully.
Feb 28 10:24:41 compute-0 podman[341387]: 2026-02-28 10:24:41.546430114 +0000 UTC m=+0.234981683 container remove 4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:24:41 compute-0 systemd[1]: libpod-conmon-4e609b74b0ea5c38977c59b77c9b8ef57f7343b6d1c6a20b539ff4d860a5bdcb.scope: Deactivated successfully.
Feb 28 10:24:41 compute-0 podman[341428]: 2026-02-28 10:24:41.733115841 +0000 UTC m=+0.051495338 container create a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:24:41 compute-0 systemd[1]: Started libpod-conmon-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope.
Feb 28 10:24:41 compute-0 podman[341428]: 2026-02-28 10:24:41.711288451 +0000 UTC m=+0.029667897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:41 compute-0 podman[341428]: 2026-02-28 10:24:41.845201847 +0000 UTC m=+0.163581343 container init a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:24:41 compute-0 podman[341428]: 2026-02-28 10:24:41.859392656 +0000 UTC m=+0.177772072 container start a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:24:41 compute-0 podman[341428]: 2026-02-28 10:24:41.862533477 +0000 UTC m=+0.180912933 container attach a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:24:42 compute-0 zen_knuth[341444]: {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     "0": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "devices": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "/dev/loop3"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             ],
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_name": "ceph_lv0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_size": "21470642176",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "name": "ceph_lv0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "tags": {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_name": "ceph",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.crush_device_class": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.encrypted": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.objectstore": "bluestore",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_id": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.vdo": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.with_tpm": "0"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             },
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "vg_name": "ceph_vg0"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         }
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     ],
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     "1": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "devices": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "/dev/loop4"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             ],
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_name": "ceph_lv1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_size": "21470642176",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "name": "ceph_lv1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "tags": {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_name": "ceph",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.crush_device_class": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.encrypted": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.objectstore": "bluestore",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_id": "1",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.vdo": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.with_tpm": "0"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             },
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "vg_name": "ceph_vg1"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         }
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     ],
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     "2": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "devices": [
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "/dev/loop5"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             ],
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_name": "ceph_lv2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_size": "21470642176",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "name": "ceph_lv2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "tags": {
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.cluster_name": "ceph",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.crush_device_class": "",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.encrypted": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.objectstore": "bluestore",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osd_id": "2",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.vdo": "0",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:                 "ceph.with_tpm": "0"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             },
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "type": "block",
Feb 28 10:24:42 compute-0 zen_knuth[341444]:             "vg_name": "ceph_vg2"
Feb 28 10:24:42 compute-0 zen_knuth[341444]:         }
Feb 28 10:24:42 compute-0 zen_knuth[341444]:     ]
Feb 28 10:24:42 compute-0 zen_knuth[341444]: }
Feb 28 10:24:42 compute-0 systemd[1]: libpod-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope: Deactivated successfully.
Feb 28 10:24:42 compute-0 podman[341428]: 2026-02-28 10:24:42.228536001 +0000 UTC m=+0.546915447 container died a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d213ca4875453ce62d8fcad99b673b7c688443e046c35013c2048728ed3fc5f-merged.mount: Deactivated successfully.
Feb 28 10:24:42 compute-0 podman[341428]: 2026-02-28 10:24:42.275803735 +0000 UTC m=+0.594183151 container remove a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:24:42 compute-0 systemd[1]: libpod-conmon-a0f7d0f21d17fe3edca585f55d396295b6c9d959cef9073a12dce915c8ebc1d1.scope: Deactivated successfully.
Feb 28 10:24:42 compute-0 sudo[341348]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:42 compute-0 sudo[341465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:24:42 compute-0 sudo[341465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:42 compute-0 sudo[341465]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:42 compute-0 sudo[341490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:24:42 compute-0 sudo[341490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Feb 28 10:24:42 compute-0 ceph-mon[76304]: pgmap v1784: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.702570354 +0000 UTC m=+0.034692233 container create c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:24:42 compute-0 systemd[1]: Started libpod-conmon-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope.
Feb 28 10:24:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.68754815 +0000 UTC m=+0.019670029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.790691877 +0000 UTC m=+0.122813796 container init c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.797756331 +0000 UTC m=+0.129878200 container start c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:24:42 compute-0 systemd[1]: libpod-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope: Deactivated successfully.
Feb 28 10:24:42 compute-0 interesting_sinoussi[341544]: 167 167
Feb 28 10:24:42 compute-0 conmon[341544]: conmon c3aaaa94c8858e2a905a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope/container/memory.events
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.803890828 +0000 UTC m=+0.136012707 container attach c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.805423872 +0000 UTC m=+0.137545781 container died c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d73f0d84920df7779fe02584012893928cf8da0cc715b73c38742931a0f4560-merged.mount: Deactivated successfully.
Feb 28 10:24:42 compute-0 podman[341527]: 2026-02-28 10:24:42.838091225 +0000 UTC m=+0.170213084 container remove c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_sinoussi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:24:42 compute-0 systemd[1]: libpod-conmon-c3aaaa94c8858e2a905a8f05b277cfc81e8a54550fb9337cbd7c1db08d76a1b7.scope: Deactivated successfully.
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.938 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.941 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.942 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.944 243456 INFO nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Terminating instance
Feb 28 10:24:42 compute-0 nova_compute[243452]: 2026-02-28 10:24:42.946 243456 DEBUG nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:24:42 compute-0 podman[341567]: 2026-02-28 10:24:42.986844169 +0000 UTC m=+0.042168958 container create b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:24:42 compute-0 kernel: tap53819bfb-eb (unregistering): left promiscuous mode
Feb 28 10:24:42 compute-0 NetworkManager[49805]: <info>  [1772274282.9946] device (tap53819bfb-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:24:43 compute-0 ovn_controller[146846]: 2026-02-28T10:24:43Z|01159|binding|INFO|Releasing lport 53819bfb-ebe3-4956-8f91-805dd04b5954 from this chassis (sb_readonly=0)
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 ovn_controller[146846]: 2026-02-28T10:24:43Z|01160|binding|INFO|Setting lport 53819bfb-ebe3-4956-8f91-805dd04b5954 down in Southbound
Feb 28 10:24:43 compute-0 ovn_controller[146846]: 2026-02-28T10:24:43Z|01161|binding|INFO|Removing iface tap53819bfb-eb ovn-installed in OVS
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.014 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.019 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:f2:95 10.100.0.9'], port_security=['fa:16:3e:e5:f2:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0d4ce277-1bbb-4926-a7ee-30f5df57fff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ec4804c-4a13-485a-9300-db6edf74473b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c3cb5cdfa53405bb0387af43e804bd1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78ef92c2-0e51-43a6-8965-35e013bc82f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=777b78e8-68a6-45c7-be27-6698a20a9a96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=53819bfb-ebe3-4956-8f91-805dd04b5954) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.021 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 53819bfb-ebe3-4956-8f91-805dd04b5954 in datapath 7ec4804c-4a13-485a-9300-db6edf74473b unbound from our chassis
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.023 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ec4804c-4a13-485a-9300-db6edf74473b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0b81a0-4349-4514-a175-aaa1be9b0b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.025 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b namespace which is not needed anymore
Feb 28 10:24:43 compute-0 systemd[1]: Started libpod-conmon-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope.
Feb 28 10:24:43 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Feb 28 10:24:43 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Consumed 16.844s CPU time.
Feb 28 10:24:43 compute-0 systemd-machined[209480]: Machine qemu-136-instance-0000006a terminated.
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:42.966129001 +0000 UTC m=+0.021453840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:24:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:43.08597405 +0000 UTC m=+0.141298859 container init b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:43.092413026 +0000 UTC m=+0.147737825 container start b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:43.09602293 +0000 UTC m=+0.151347739 container attach b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 28 10:24:43 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : haproxy version is 2.8.14-c23fe91
Feb 28 10:24:43 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [NOTICE]   (335699) : path to executable is /usr/sbin/haproxy
Feb 28 10:24:43 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [WARNING]  (335699) : Exiting Master process...
Feb 28 10:24:43 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [ALERT]    (335699) : Current worker (335701) exited with code 143 (Terminated)
Feb 28 10:24:43 compute-0 neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b[335695]: [WARNING]  (335699) : All workers exited. Exiting... (0)
Feb 28 10:24:43 compute-0 systemd[1]: libpod-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope: Deactivated successfully.
Feb 28 10:24:43 compute-0 podman[341613]: 2026-02-28 10:24:43.155209758 +0000 UTC m=+0.039690466 container died bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334-userdata-shm.mount: Deactivated successfully.
Feb 28 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f86c5f2ef6c8614823a87cf49a92bf5ddc50b5f72de34d838a1adf35e0b322e0-merged.mount: Deactivated successfully.
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.188 243456 INFO nova.virt.libvirt.driver [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Instance destroyed successfully.
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.189 243456 DEBUG nova.objects.instance [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lazy-loading 'resources' on Instance uuid 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:43 compute-0 podman[341613]: 2026-02-28 10:24:43.197834189 +0000 UTC m=+0.082314887 container cleanup bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:24:43 compute-0 systemd[1]: libpod-conmon-bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334.scope: Deactivated successfully.
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.203 243456 DEBUG nova.virt.libvirt.vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:22:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-168997413',display_name='tempest-₡-168997413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--168997413',id=106,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1c3cb5cdfa53405bb0387af43e804bd1',ramdisk_id='',reservation_id='r-9boikcvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-973249707',owner_user_name='tempest-ServersTestJSON-973249707-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:22:29Z,user_data=None,user_id='30797c1e587b4532a2e148d0cdcd9c51',uuid=0d4ce277-1bbb-4926-a7ee-30f5df57fff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.204 243456 DEBUG nova.network.os_vif_util [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converting VIF {"id": "53819bfb-ebe3-4956-8f91-805dd04b5954", "address": "fa:16:3e:e5:f2:95", "network": {"id": "7ec4804c-4a13-485a-9300-db6edf74473b", "bridge": "br-int", "label": "tempest-ServersTestJSON-650682166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1c3cb5cdfa53405bb0387af43e804bd1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53819bfb-eb", "ovs_interfaceid": "53819bfb-ebe3-4956-8f91-805dd04b5954", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.204 243456 DEBUG nova.network.os_vif_util [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.205 243456 DEBUG os_vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.207 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53819bfb-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.212 243456 INFO os_vif [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:f2:95,bridge_name='br-int',has_traffic_filtering=True,id=53819bfb-ebe3-4956-8f91-805dd04b5954,network=Network(7ec4804c-4a13-485a-9300-db6edf74473b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53819bfb-eb')
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.229 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.230 243456 DEBUG oslo_concurrency.lockutils [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.231 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.231 243456 DEBUG nova.compute.manager [req-1e90a592-2ccd-4917-84ec-2641772177ec req-f922827f-79e1-40e3-8c3c-fe49b6f1acc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-unplugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:24:43 compute-0 podman[341648]: 2026-02-28 10:24:43.271865405 +0000 UTC m=+0.043461755 container remove bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8250a1-e532-4d9d-bf3a-8c69dbcaf983]: (4, ('Sat Feb 28 10:24:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b (bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334)\nbf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334\nSat Feb 28 10:24:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b (bf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334)\nbf046cfa737a54af46b559834997a8b8f786ab030ece8a94ccfb83edd80db334\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df207e36-f3ac-4345-9c76-2a8229889fec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.283 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ec4804c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.285 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 kernel: tap7ec4804c-40: left promiscuous mode
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb779f15-5f25-4550-8402-1ce5a4e4dbee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.316 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[abc4cfc9-8cd2-47c3-b5c5-702e9bfa9f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.319 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[57328262-c130-452a-ba74-73f1047584b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62fa9b5a-2a26-45c5-a93f-e3b100fc9b50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565502, 'reachable_time': 31298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341691, 'error': None, 'target': 'ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.336 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ec4804c-4a13-485a-9300-db6edf74473b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:24:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:43.336 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[125f5c25-0012-4d9b-bc1e-a22df0311a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.474 243456 INFO nova.virt.libvirt.driver [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deleting instance files /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_del
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.475 243456 INFO nova.virt.libvirt.driver [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deletion of /var/lib/nova/instances/0d4ce277-1bbb-4926-a7ee-30f5df57fff9_del complete
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 INFO nova.compute.manager [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 DEBUG oslo.service.loopingcall [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.523 243456 DEBUG nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:24:43 compute-0 nova_compute[243452]: 2026-02-28 10:24:43.524 243456 DEBUG nova.network.neutron [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:24:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ec4804c\x2d4a13\x2d485a\x2d9300\x2ddb6edf74473b.mount: Deactivated successfully.
Feb 28 10:24:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:43 compute-0 lvm[341753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:24:43 compute-0 lvm[341753]: VG ceph_vg0 finished
Feb 28 10:24:43 compute-0 lvm[341755]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:24:43 compute-0 lvm[341755]: VG ceph_vg1 finished
Feb 28 10:24:43 compute-0 lvm[341756]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:24:43 compute-0 lvm[341756]: VG ceph_vg2 finished
Feb 28 10:24:43 compute-0 crazy_bose[341586]: {}
Feb 28 10:24:43 compute-0 systemd[1]: libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Deactivated successfully.
Feb 28 10:24:43 compute-0 systemd[1]: libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Consumed 1.117s CPU time.
Feb 28 10:24:43 compute-0 conmon[341586]: conmon b0db68dc84f47bbee464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope/container/memory.events
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:43.929273851 +0000 UTC m=+0.984598680 container died b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-794a6f7c52ee2536e8e13066d593eee767db713f3a2cfc88dea2cfb0d64131fb-merged.mount: Deactivated successfully.
Feb 28 10:24:43 compute-0 podman[341567]: 2026-02-28 10:24:43.971648144 +0000 UTC m=+1.026972933 container remove b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:24:43 compute-0 systemd[1]: libpod-conmon-b0db68dc84f47bbee464348ec3b9624040acc9979905c65bba7420f8db80e31b.scope: Deactivated successfully.
Feb 28 10:24:44 compute-0 sudo[341490]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:24:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:24:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:44 compute-0 sudo[341771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:24:44 compute-0 sudo[341771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:24:44 compute-0 sudo[341771]: pam_unix(sudo:session): session closed for user root
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.383 243456 DEBUG nova.network.neutron [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.405 243456 INFO nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Took 0.88 seconds to deallocate network for instance.
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.465 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.465 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 535 KiB/s rd, 3.2 MiB/s wr, 142 op/s
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.521 243456 DEBUG nova.compute.manager [req-4e55942d-7be3-4fe5-a4aa-4b5198f1b6bd req-3c897c44-ff5a-4e98-8d9b-ee9a834fe229 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-deleted-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:44 compute-0 nova_compute[243452]: 2026-02-28 10:24:44.549 243456 DEBUG oslo_concurrency.processutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:24:45 compute-0 ceph-mon[76304]: pgmap v1785: 305 pgs: 305 active+clean; 312 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 535 KiB/s rd, 3.2 MiB/s wr, 142 op/s
Feb 28 10:24:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713305397' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.119 243456 DEBUG oslo_concurrency.processutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.127 243456 DEBUG nova.compute.provider_tree [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.146 243456 DEBUG nova.scheduler.client.report [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.173 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.212 243456 INFO nova.scheduler.client.report [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Deleted allocations for instance 0d4ce277-1bbb-4926-a7ee-30f5df57fff9
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.340 243456 DEBUG oslo_concurrency.lockutils [None req-a2b4e782-53b9-4695-b84d-9a76ddb634ac 30797c1e587b4532a2e148d0cdcd9c51 1c3cb5cdfa53405bb0387af43e804bd1 - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.366 243456 DEBUG nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.367 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG oslo_concurrency.lockutils [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0d4ce277-1bbb-4926-a7ee-30f5df57fff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.368 243456 DEBUG nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] No waiting events found dispatching network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:24:45 compute-0 nova_compute[243452]: 2026-02-28 10:24:45.369 243456 WARNING nova.compute.manager [req-9190e274-79f1-493b-b070-acdad3ed88ae req-c231f7b4-f95a-4e41-a1c9-563c9b0bb0a7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Received unexpected event network-vif-plugged-53819bfb-ebe3-4956-8f91-805dd04b5954 for instance with vm_state deleted and task_state None.
Feb 28 10:24:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:24:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:24:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:24:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:24:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/713305397' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:24:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1903058916' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:24:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 271 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 3.2 MiB/s wr, 167 op/s
Feb 28 10:24:47 compute-0 ceph-mon[76304]: pgmap v1786: 305 pgs: 305 active+clean; 271 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 3.2 MiB/s wr, 167 op/s
Feb 28 10:24:48 compute-0 nova_compute[243452]: 2026-02-28 10:24:48.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 721 KiB/s wr, 65 op/s
Feb 28 10:24:48 compute-0 ceph-mon[76304]: pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 721 KiB/s wr, 65 op/s
Feb 28 10:24:48 compute-0 ovn_controller[146846]: 2026-02-28T10:24:48Z|01162|binding|INFO|Releasing lport 42d54653-4cb7-4be2-99ee-96f5681da7be from this chassis (sb_readonly=0)
Feb 28 10:24:48 compute-0 nova_compute[243452]: 2026-02-28 10:24:48.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:49 compute-0 nova_compute[243452]: 2026-02-28 10:24:49.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:49 compute-0 nova_compute[243452]: 2026-02-28 10:24:49.969 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:49 compute-0 nova_compute[243452]: 2026-02-28 10:24:49.970 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:49 compute-0 nova_compute[243452]: 2026-02-28 10:24:49.992 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.083 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.084 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.091 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.091 243456 INFO nova.compute.claims [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:24:50 compute-0 podman[341819]: 2026-02-28 10:24:50.123521057 +0000 UTC m=+0.060547599 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:24:50 compute-0 podman[341818]: 2026-02-28 10:24:50.16035708 +0000 UTC m=+0.094694394 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.169 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274275.167945, 08a4cd5e-f711-44d2-b17e-c1941be22e85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.169 243456 INFO nova.compute.manager [-] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] VM Stopped (Lifecycle Event)
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.192 243456 DEBUG nova.compute.manager [None req-249befa2-a20b-41fd-b58f-8e4cf21e93c3 - - - - - -] [instance: 08a4cd5e-f711-44d2-b17e-c1941be22e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.250 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 28 10:24:50 compute-0 ceph-mon[76304]: pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 28 10:24:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489913355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.805 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.815 243456 DEBUG nova.compute.provider_tree [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.836 243456 DEBUG nova.scheduler.client.report [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.872 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.873 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.938 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.939 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.969 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:24:50 compute-0 nova_compute[243452]: 2026-02-28 10:24:50.991 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.100 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.102 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.104 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating image(s)
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.139 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.173 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.197 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.201 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.284 243456 DEBUG nova.policy [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.291 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.292 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.293 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.293 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.333 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.339 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1489913355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.584 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.654 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.740 243456 DEBUG nova.objects.instance [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.758 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Ensure instance console log exists: /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.759 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:51 compute-0 nova_compute[243452]: 2026-02-28 10:24:51.760 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1019 KiB/s wr, 32 op/s
Feb 28 10:24:52 compute-0 ceph-mon[76304]: pgmap v1789: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1019 KiB/s wr, 32 op/s
Feb 28 10:24:52 compute-0 nova_compute[243452]: 2026-02-28 10:24:52.935 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Successfully created port: 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:24:53 compute-0 nova_compute[243452]: 2026-02-28 10:24:53.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:54 compute-0 nova_compute[243452]: 2026-02-28 10:24:54.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1008 KiB/s wr, 29 op/s
Feb 28 10:24:54 compute-0 ceph-mon[76304]: pgmap v1790: 305 pgs: 305 active+clean; 252 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1008 KiB/s wr, 29 op/s
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.615 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Successfully updated port: 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.632 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.738 243456 DEBUG nova.compute.manager [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.739 243456 DEBUG nova.compute.manager [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.739 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.833 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.834 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.852 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:24:55 compute-0 nova_compute[243452]: 2026-02-28 10:24:55.860 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.087 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.088 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.098 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.098 243456 INFO nova.compute.claims [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.408 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:24:56 compute-0 ceph-mon[76304]: pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:24:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:24:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925535493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.951 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.958 243456 DEBUG nova.compute.provider_tree [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:24:56 compute-0 nova_compute[243452]: 2026-02-28 10:24:56.979 243456 DEBUG nova.scheduler.client.report [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.003 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.005 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.064 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.079 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.099 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.200 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.201 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.202 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating image(s)
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.228 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.254 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.276 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.280 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.373 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.375 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.376 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.376 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.410 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.415 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.536 243456 DEBUG nova.network.neutron [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.566 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.567 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance network_info: |[{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.568 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.569 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:24:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3925535493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.575 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start _get_guest_xml network_info=[{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.584 243456 WARNING nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.605 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.606 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.610 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.611 243456 DEBUG nova.virt.libvirt.host [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.611 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.612 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.612 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.613 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.614 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.615 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.615 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.616 243456 DEBUG nova.virt.hardware [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.621 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.670 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.749 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] resizing rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.842 243456 DEBUG nova.objects.instance [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.855 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.855 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ensure instance console log exists: /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.856 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.857 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.861 243456 WARNING nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:24:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.863 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.864 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:24:57.864 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.866 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.866 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.869 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.869 243456 DEBUG nova.virt.libvirt.host [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.870 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.871 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.872 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.872 243456 DEBUG nova.virt.hardware [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:24:57 compute-0 nova_compute[243452]: 2026-02-28 10:24:57.874 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086382063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.183 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274283.182444, 0d4ce277-1bbb-4926-a7ee-30f5df57fff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.184 243456 INFO nova.compute.manager [-] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] VM Stopped (Lifecycle Event)
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.207 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.233 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.237 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.271 243456 DEBUG nova.compute.manager [None req-18a51129-9050-4a72-9df6-da7f63e0198b - - - - - -] [instance: 0d4ce277-1bbb-4926-a7ee-30f5df57fff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:24:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702629668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.439 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.465 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.469 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:24:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3086382063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/702629668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:58 compute-0 ceph-mon[76304]: pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:24:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:24:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547109125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.800 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.802 243456 DEBUG nova.virt.libvirt.vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:51Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.802 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.803 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.804 243456 DEBUG nova.objects.instance [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <uuid>9f873b2a-d04c-475f-941d-397e5a9bc81a</uuid>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <name>instance-00000073</name>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442</nova:name>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:24:57</nova:creationTime>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <nova:port uuid="8b15ec49-ee59-41b3-b0e8-b6779ab7bde7">
Feb 28 10:24:58 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <system>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="serial">9f873b2a-d04c-475f-941d-397e5a9bc81a</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="uuid">9f873b2a-d04c-475f-941d-397e5a9bc81a</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </system>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <os>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </os>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <features>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </features>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9f873b2a-d04c-475f-941d-397e5a9bc81a_disk">
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config">
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:58 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:51:ab:a9"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <target dev="tap8b15ec49-ee"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/console.log" append="off"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <video>
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </video>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:24:58 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:24:58 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:24:58 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:24:58 compute-0 nova_compute[243452]: </domain>
Feb 28 10:24:58 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Preparing to wait for external event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.818 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.819 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG nova.virt.libvirt.vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:24:51Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.820 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG nova.network.os_vif_util [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG os_vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.822 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.822 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b15ec49-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.825 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b15ec49-ee, col_values=(('external_ids', {'iface-id': '8b15ec49-ee59-41b3-b0e8-b6779ab7bde7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:ab:a9', 'vm-uuid': '9f873b2a-d04c-475f-941d-397e5a9bc81a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:58 compute-0 NetworkManager[49805]: <info>  [1772274298.8282] manager: (tap8b15ec49-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.835 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.835 243456 INFO os_vif [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee')
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:51:ab:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.904 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Using config drive
Feb 28 10:24:58 compute-0 nova_compute[243452]: 2026-02-28 10:24:58.923 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:24:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230596899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.032 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.034 243456 DEBUG nova.objects.instance [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.049 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <uuid>0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</uuid>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <name>instance-00000074</name>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV254Test-server-434009731</nova:name>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:24:57</nova:creationTime>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:user uuid="df63289bf60946e2a983ee2fa57352b1">tempest-ServerShowV254Test-1359990056-project-member</nova:user>
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <nova:project uuid="ff0879b364b142e782530e413eb35f55">tempest-ServerShowV254Test-1359990056</nova:project>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="serial">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="uuid">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk">
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config">
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:24:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log" append="off"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:24:59 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:24:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:24:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:24:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:24:59 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.085 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.086 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.086 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Using config drive
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.112 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.412 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating config drive at /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.415 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpahb1kf1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.571 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpahb1kf1y" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2547109125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1230596899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.597 243456 DEBUG nova.storage.rbd_utils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.601 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.684 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Creating config drive at /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.689 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5are5zi1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.761 243456 DEBUG oslo_concurrency.processutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.762 243456 INFO nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting local config drive /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config because it was imported into RBD.
Feb 28 10:24:59 compute-0 systemd-machined[209480]: New machine qemu-145-instance-00000074.
Feb 28 10:24:59 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.832 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5are5zi1" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.867 243456 DEBUG nova.storage.rbd_utils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:24:59 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.871 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:24:59.999 243456 DEBUG oslo_concurrency.processutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config 9f873b2a-d04c-475f-941d-397e5a9bc81a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.001 243456 INFO nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deleting local config drive /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a/disk.config because it was imported into RBD.
Feb 28 10:25:00 compute-0 kernel: tap8b15ec49-ee: entered promiscuous mode
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:00 compute-0 ovn_controller[146846]: 2026-02-28T10:25:00Z|01163|binding|INFO|Claiming lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for this chassis.
Feb 28 10:25:00 compute-0 ovn_controller[146846]: 2026-02-28T10:25:00Z|01164|binding|INFO|8b15ec49-ee59-41b3-b0e8-b6779ab7bde7: Claiming fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 10:25:00 compute-0 NetworkManager[49805]: <info>  [1772274300.0540] manager: (tap8b15ec49-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/481)
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.058 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:ab:a9 10.100.0.5'], port_security=['fa:16:3e:51:ab:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f873b2a-d04c-475f-941d-397e5a9bc81a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.061 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c bound to our chassis
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.063 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 10:25:00 compute-0 ovn_controller[146846]: 2026-02-28T10:25:00Z|01165|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 ovn-installed in OVS
Feb 28 10:25:00 compute-0 ovn_controller[146846]: 2026-02-28T10:25:00Z|01166|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 up in Southbound
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.067 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:00 compute-0 systemd-machined[209480]: New machine qemu-146-instance-00000073.
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.079 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04b5462a-d218-46b2-ab27-d704534df6ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Feb 28 10:25:00 compute-0 systemd-udevd[342544]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:25:00 compute-0 NetworkManager[49805]: <info>  [1772274300.1113] device (tap8b15ec49-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:25:00 compute-0 NetworkManager[49805]: <info>  [1772274300.1117] device (tap8b15ec49-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.112 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95e36bc7-9035-4bae-bb7f-c525f1426f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.118 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[904b94f5-bd07-4dc7-91f4-0314806a977f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.151 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef3e36c-8102-43da-a52a-d76cb59a4eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.170 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[922fbba9-528e-4e54-944b-43cd57dfa7a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342563, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.187 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e80f999b-d779-4be2-8ebd-a18b2df7f1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576826, 'tstamp': 576826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342564, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576829, 'tstamp': 576829}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342564, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.189 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.193 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:00.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.240 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.2397754, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.242 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Resumed (Lifecycle Event)
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.247 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.248 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.253 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance spawned successfully.
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.253 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.277 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.280 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.299 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.300 243456 DEBUG nova.network.neutron [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.304 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.305 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.305 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.306 243456 DEBUG nova.virt.libvirt.driver [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.311 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.312 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.2420087, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.312 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Started (Lifecycle Event)
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.315 243456 DEBUG oslo_concurrency.lockutils [req-bf773168-26d0-47db-834f-0de41d5d5880 req-9df9af04-1591-498b-810e-4b06c3b8b722 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.346 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.350 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.372 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.383 243456 INFO nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 3.18 seconds to spawn the instance on the hypervisor.
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.384 243456 DEBUG nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.409 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.408746, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.409 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Started (Lifecycle Event)
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.436 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.442 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274300.4095762, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.442 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Paused (Lifecycle Event)
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.455 243456 INFO nova.compute.manager [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 4.45 seconds to build instance.
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.466 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.470 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.473 243456 DEBUG oslo_concurrency.lockutils [None req-cbfa3027-0f09-47eb-958b-18d3d2b1d44a df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 302 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.8 MiB/s wr, 51 op/s
Feb 28 10:25:00 compute-0 nova_compute[243452]: 2026-02-28 10:25:00.492 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:25:00 compute-0 ceph-mon[76304]: pgmap v1793: 305 pgs: 305 active+clean; 302 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.8 MiB/s wr, 51 op/s
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.417 243456 INFO nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Rebuilding instance
Feb 28 10:25:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Feb 28 10:25:02 compute-0 ceph-mon[76304]: pgmap v1794: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 95 op/s
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.701 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.719 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.764 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.776 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.787 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'resources' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.798 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.810 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:25:02 compute-0 nova_compute[243452]: 2026-02-28 10:25:02.815 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:25:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:03 compute-0 nova_compute[243452]: 2026-02-28 10:25:03.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:04 compute-0 nova_compute[243452]: 2026-02-28 10:25:04.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Feb 28 10:25:04 compute-0 ceph-mon[76304]: pgmap v1795: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Feb 28 10:25:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.537 243456 DEBUG nova.compute.manager [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.538 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.539 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.539 243456 DEBUG oslo_concurrency.lockutils [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.540 243456 DEBUG nova.compute.manager [req-cc04eafe-03c6-437f-a2c3-58ef71f93430 req-9cbf737b-3bc1-41fb-8262-9adda3103f3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Processing event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.541 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.554 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274306.553912, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.555 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Resumed (Lifecycle Event)
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.558 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.563 243456 INFO nova.virt.libvirt.driver [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance spawned successfully.
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.564 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:25:06 compute-0 ceph-mon[76304]: pgmap v1796: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 136 op/s
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.583 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.594 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.601 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.602 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.603 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.604 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.605 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.606 243456 DEBUG nova.virt.libvirt.driver [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.650 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.693 243456 INFO nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 15.59 seconds to spawn the instance on the hypervisor.
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.694 243456 DEBUG nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.781 243456 INFO nova.compute.manager [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 16.73 seconds to build instance.
Feb 28 10:25:06 compute-0 nova_compute[243452]: 2026-02-28 10:25:06.804 243456 DEBUG oslo_concurrency.lockutils [None req-5b2533dc-1be3-4a46-b852-7cf949a5a43e f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 10:25:08 compute-0 ceph-mon[76304]: pgmap v1797: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 10:25:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.791 243456 DEBUG nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.792 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG oslo_concurrency.lockutils [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.793 243456 DEBUG nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.794 243456 WARNING nova.compute.manager [req-eb25e7d7-6d34-405d-bddb-81c42a0838d8 req-d3c0b308-24f0-4b37-9ae7-049fe5c85ad1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received unexpected event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with vm_state active and task_state None.
Feb 28 10:25:08 compute-0 nova_compute[243452]: 2026-02-28 10:25:08.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:09 compute-0 nova_compute[243452]: 2026-02-28 10:25:09.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.318 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:35:99 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b6ad741-74ba-4342-9c4e-7da3bec25db6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d3cc0106-9bf3-4ca8-8469-0e9c9bf5b98a) old=Port_Binding(mac=['fa:16:3e:f2:35:99 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2c1432a-6bfa-4126-b876-01e6f5677734', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.320 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d3cc0106-9bf3-4ca8-8469-0e9c9bf5b98a in datapath d2c1432a-6bfa-4126-b876-01e6f5677734 updated
Feb 28 10:25:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.322 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2c1432a-6bfa-4126-b876-01e6f5677734, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:25:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:10.324 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1987b6e-0d04-4a1b-b2cd-a5d6cd39f052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 28 10:25:10 compute-0 ceph-mon[76304]: pgmap v1798: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 28 10:25:11 compute-0 nova_compute[243452]: 2026-02-28 10:25:11.356 243456 DEBUG nova.compute.manager [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:11 compute-0 nova_compute[243452]: 2026-02-28 10:25:11.357 243456 DEBUG nova.compute.manager [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:25:11 compute-0 nova_compute[243452]: 2026-02-28 10:25:11.358 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:11 compute-0 nova_compute[243452]: 2026-02-28 10:25:11.359 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:11 compute-0 nova_compute[243452]: 2026-02-28 10:25:11.360 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:25:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Feb 28 10:25:12 compute-0 ceph-mon[76304]: pgmap v1799: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Feb 28 10:25:12 compute-0 nova_compute[243452]: 2026-02-28 10:25:12.654 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:25:12 compute-0 nova_compute[243452]: 2026-02-28 10:25:12.655 243456 DEBUG nova.network.neutron [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:12 compute-0 nova_compute[243452]: 2026-02-28 10:25:12.674 243456 DEBUG oslo_concurrency.lockutils [req-69f99b8a-cc0c-4e22-9f49-7db9896f5f7e req-3a38f6f8-94ec-4349-b10f-9974e5d902ff 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:12 compute-0 nova_compute[243452]: 2026-02-28 10:25:12.863 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.487 243456 DEBUG nova.compute.manager [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.487 243456 DEBUG nova.compute.manager [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing instance network info cache due to event network-changed-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.488 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.488 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.489 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Refreshing network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:25:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:13 compute-0 nova_compute[243452]: 2026-02-28 10:25:13.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:14 compute-0 nova_compute[243452]: 2026-02-28 10:25:14.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 993 KiB/s wr, 147 op/s
Feb 28 10:25:14 compute-0 ceph-mon[76304]: pgmap v1800: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 993 KiB/s wr, 147 op/s
Feb 28 10:25:15 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 28 10:25:15 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 11.325s CPU time.
Feb 28 10:25:15 compute-0 systemd-machined[209480]: Machine qemu-145-instance-00000074 terminated.
Feb 28 10:25:15 compute-0 nova_compute[243452]: 2026-02-28 10:25:15.877 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance shutdown successfully after 13 seconds.
Feb 28 10:25:15 compute-0 nova_compute[243452]: 2026-02-28 10:25:15.887 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.
Feb 28 10:25:15 compute-0 nova_compute[243452]: 2026-02-28 10:25:15.893 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.086 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updated VIF entry in instance network info cache for port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.087 243456 DEBUG nova.network.neutron [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [{"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.104 243456 DEBUG oslo_concurrency.lockutils [req-8107f0f4-8dc0-4c6c-9e82-6f240b3bdade req-6831aaff-a725-4260-8382-9626679e3ba3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9f873b2a-d04c-475f-941d-397e5a9bc81a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.226 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting instance files /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.227 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deletion of /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del complete
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.404 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.405 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating image(s)
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.428 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.465 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.496 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.501 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:16 compute-0 ceph-mon[76304]: pgmap v1801: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.579 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.581 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.582 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.582 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "d8b42e29e77cfa759998bf1aaf01e8a4671ba847" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.615 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:16 compute-0 nova_compute[243452]: 2026-02-28 10:25:16.622 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.021 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.108 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] resizing rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.211 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.212 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Ensure instance console log exists: /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.213 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.215 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.220 243456 WARNING nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.226 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.227 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.231 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.libvirt.host [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.232 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:17Z,direct_url=<?>,disk_format='qcow2',id=88971623-4808-4102-a4a7-34a287d8b7fe,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.233 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.233 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.234 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.235 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.236 243456 DEBUG nova.virt.hardware [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.236 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.257 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:25:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447145778' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.853 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3447145778' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.888 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:17 compute-0 nova_compute[243452]: 2026-02-28 10:25:17.894 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:18 compute-0 ovn_controller[146846]: 2026-02-28T10:25:18Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 10:25:18 compute-0 ovn_controller[146846]: 2026-02-28T10:25:18Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:ab:a9 10.100.0.5
Feb 28 10:25:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:25:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168694531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.491 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.497 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <uuid>0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</uuid>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <name>instance-00000074</name>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:name>tempest-ServerShowV254Test-server-434009731</nova:name>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:25:17</nova:creationTime>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:user uuid="df63289bf60946e2a983ee2fa57352b1">tempest-ServerShowV254Test-1359990056-project-member</nova:user>
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <nova:project uuid="ff0879b364b142e782530e413eb35f55">tempest-ServerShowV254Test-1359990056</nova:project>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="88971623-4808-4102-a4a7-34a287d8b7fe"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="serial">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="uuid">0f9fd694-ce0d-49b2-a027-0c24d6bbfa64</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk">
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config">
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:25:18 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/console.log" append="off"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:25:18 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:25:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:25:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:25:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:25:18 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:25:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 347 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 180 op/s
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.550 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.550 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.551 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Using config drive
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.579 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.606 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.772 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Creating config drive at /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.776 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqg_p77kq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3168694531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:18 compute-0 ceph-mon[76304]: pgmap v1802: 305 pgs: 305 active+clean; 347 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 180 op/s
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.929 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqg_p77kq" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.969 243456 DEBUG nova.storage.rbd_utils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] rbd image 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:18 compute-0 nova_compute[243452]: 2026-02-28 10:25:18.975 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.189 243456 DEBUG oslo_concurrency.processutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.190 243456 INFO nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting local config drive /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64/disk.config because it was imported into RBD.
Feb 28 10:25:19 compute-0 systemd-machined[209480]: New machine qemu-147-instance-00000074.
Feb 28 10:25:19 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.581 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.582 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274319.580393, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.582 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Resumed (Lifecycle Event)
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.588 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.589 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.595 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance spawned successfully.
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.596 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.606 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.609 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.634 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.635 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.635 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.636 243456 DEBUG nova.virt.libvirt.driver [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274319.5868073, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.640 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Started (Lifecycle Event)
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.685 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.689 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.719 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.733 243456 DEBUG nova.compute.manager [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.790 243456 DEBUG nova.objects.instance [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 28 10:25:19 compute-0 nova_compute[243452]: 2026-02-28 10:25:19.859 243456 DEBUG oslo_concurrency.lockutils [None req-fd1c9b7d-1141-4f61-b9ea-3fd4532ffb16 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.474 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.476 243456 INFO nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Terminating instance
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquired lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.477 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:25:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 350 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 297 op/s
Feb 28 10:25:20 compute-0 ceph-mon[76304]: pgmap v1803: 305 pgs: 305 active+clean; 350 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 297 op/s
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.664 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:25:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.684 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.685 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:25:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:20.685 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:20 compute-0 nova_compute[243452]: 2026-02-28 10:25:20.686 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.025 243456 DEBUG nova.network.neutron [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.040 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Releasing lock "refresh_cache-0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.041 243456 DEBUG nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:25:21 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 28 10:25:21 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 1.842s CPU time.
Feb 28 10:25:21 compute-0 systemd-machined[209480]: Machine qemu-147-instance-00000074 terminated.
Feb 28 10:25:21 compute-0 podman[342976]: 2026-02-28 10:25:21.135674253 +0000 UTC m=+0.062325780 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:25:21 compute-0 podman[342975]: 2026-02-28 10:25:21.171500087 +0000 UTC m=+0.101199902 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.261 243456 INFO nova.virt.libvirt.driver [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance destroyed successfully.
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.262 243456 DEBUG nova.objects.instance [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lazy-loading 'resources' on Instance uuid 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.633 243456 INFO nova.virt.libvirt.driver [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deleting instance files /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.634 243456 INFO nova.virt.libvirt.driver [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deletion of /var/lib/nova/instances/0f9fd694-ce0d-49b2-a027-0c24d6bbfa64_del complete
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.700 243456 INFO nova.compute.manager [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.701 243456 DEBUG oslo.service.loopingcall [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.702 243456 DEBUG nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.702 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.855 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.872 243456 DEBUG nova.network.neutron [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.894 243456 INFO nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Took 0.19 seconds to deallocate network for instance.
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:21 compute-0 nova_compute[243452]: 2026-02-28 10:25:21.978 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.088 243456 DEBUG oslo_concurrency.processutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 335 op/s
Feb 28 10:25:22 compute-0 ceph-mon[76304]: pgmap v1804: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 335 op/s
Feb 28 10:25:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713539591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.709 243456 DEBUG oslo_concurrency.processutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.717 243456 DEBUG nova.compute.provider_tree [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.741 243456 DEBUG nova.scheduler.client.report [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.774 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.827 243456 INFO nova.scheduler.client.report [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Deleted allocations for instance 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64
Feb 28 10:25:22 compute-0 nova_compute[243452]: 2026-02-28 10:25:22.926 243456 DEBUG oslo_concurrency.lockutils [None req-2a4c06cb-bf5a-4bea-a8a0-42bcb077c4a4 df63289bf60946e2a983ee2fa57352b1 ff0879b364b142e782530e413eb35f55 - - default default] Lock "0f9fd694-ce0d-49b2-a027-0c24d6bbfa64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/713539591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:23 compute-0 nova_compute[243452]: 2026-02-28 10:25:23.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:24 compute-0 nova_compute[243452]: 2026-02-28 10:25:24.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.1 MiB/s wr, 272 op/s
Feb 28 10:25:24 compute-0 ceph-mon[76304]: pgmap v1805: 305 pgs: 305 active+clean; 337 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.1 MiB/s wr, 272 op/s
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.917 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.918 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.919 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.919 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.920 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.922 243456 INFO nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Terminating instance
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.924 243456 DEBUG nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:25:25 compute-0 kernel: tap8b15ec49-ee (unregistering): left promiscuous mode
Feb 28 10:25:25 compute-0 NetworkManager[49805]: <info>  [1772274325.9806] device (tap8b15ec49-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:25:25 compute-0 ovn_controller[146846]: 2026-02-28T10:25:25Z|01167|binding|INFO|Releasing lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 from this chassis (sb_readonly=0)
Feb 28 10:25:25 compute-0 nova_compute[243452]: 2026-02-28 10:25:25.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:25 compute-0 ovn_controller[146846]: 2026-02-28T10:25:25Z|01168|binding|INFO|Setting lport 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 down in Southbound
Feb 28 10:25:25 compute-0 ovn_controller[146846]: 2026-02-28T10:25:25Z|01169|binding|INFO|Removing iface tap8b15ec49-ee ovn-installed in OVS
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:25.999 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:ab:a9 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9f873b2a-d04c-475f-941d-397e5a9bc81a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.001 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c unbound from our chassis
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.002 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ee12748-b368-477a-aacb-62375ce0b51c
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.006 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.021 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d77caf35-3945-4312-9379-4872990fdf5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Feb 28 10:25:26 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 11.901s CPU time.
Feb 28 10:25:26 compute-0 systemd-machined[209480]: Machine qemu-146-instance-00000073 terminated.
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.052 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fc1c45-e8ca-4959-8266-f94a3e8daf2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.055 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[276791a5-9afb-4917-abc7-b06d63edc331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.075 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86e88844-506a-4754-a901-d775ea8434ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.092 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0d55d9-9b20-4aec-95d3-852eeb51a55b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ee12748-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:c4:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 7, 'rx_bytes': 1412, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576814, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343075, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.107 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b2bf61-e8be-42b2-a943-fb4618c9dba7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576826, 'tstamp': 576826}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343076, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ee12748-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576829, 'tstamp': 576829}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343076, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.109 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ee12748-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.114 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ee12748-b0, col_values=(('external_ids', {'iface-id': '42d54653-4cb7-4be2-99ee-96f5681da7be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:26.115 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.163 243456 INFO nova.virt.libvirt.driver [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Instance destroyed successfully.
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.163 243456 DEBUG nova.objects.instance [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 9f873b2a-d04c-475f-941d-397e5a9bc81a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.193 243456 DEBUG nova.virt.libvirt.vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-1230754442',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=115,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:25:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-8fgbwnao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:25:06Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9f873b2a-d04c-475f-941d-397e5a9bc81a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.195 243456 DEBUG nova.network.os_vif_util [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "address": "fa:16:3e:51:ab:a9", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b15ec49-ee", "ovs_interfaceid": "8b15ec49-ee59-41b3-b0e8-b6779ab7bde7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.196 243456 DEBUG nova.network.os_vif_util [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.196 243456 DEBUG os_vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b15ec49-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.204 243456 INFO os_vif [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:ab:a9,bridge_name='br-int',has_traffic_filtering=True,id=8b15ec49-ee59-41b3-b0e8-b6779ab7bde7,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b15ec49-ee')
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.488 243456 INFO nova.virt.libvirt.driver [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deleting instance files /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a_del
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.489 243456 INFO nova.virt.libvirt.driver [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deletion of /var/lib/nova/instances/9f873b2a-d04c-475f-941d-397e5a9bc81a_del complete
Feb 28 10:25:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 312 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 302 op/s
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.554 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.555 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.556 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.556 243456 DEBUG oslo_concurrency.lockutils [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.557 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.557 243456 DEBUG nova.compute.manager [req-57fc531d-995e-4ae1-8b24-50b8b4cdd804 req-1a6f5a54-77e5-489a-8f44-8d24b2366402 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-unplugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.561 243456 INFO nova.compute.manager [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.562 243456 DEBUG oslo.service.loopingcall [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.563 243456 DEBUG nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:25:26 compute-0 nova_compute[243452]: 2026-02-28 10:25:26.563 243456 DEBUG nova.network.neutron [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:25:26 compute-0 ceph-mon[76304]: pgmap v1806: 305 pgs: 305 active+clean; 312 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 302 op/s
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.139 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.273 243456 DEBUG nova.network.neutron [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.300 243456 INFO nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Took 0.74 seconds to deallocate network for instance.
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.355 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.356 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.434 243456 DEBUG oslo_concurrency.processutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728565344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.985 243456 DEBUG oslo_concurrency.processutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:27 compute-0 nova_compute[243452]: 2026-02-28 10:25:27.992 243456 DEBUG nova.compute.provider_tree [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.011 243456 DEBUG nova.scheduler.client.report [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:25:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1728565344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.036 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.077 243456 INFO nova.scheduler.client.report [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 9f873b2a-d04c-475f-941d-397e5a9bc81a
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.163 243456 DEBUG oslo_concurrency.lockutils [None req-40ecbf21-e0a2-47f2-8727-cc1cc2b75805 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 279 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 281 op/s
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.716 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.717 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.718 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.718 243456 DEBUG oslo_concurrency.lockutils [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9f873b2a-d04c-475f-941d-397e5a9bc81a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.719 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] No waiting events found dispatching network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.719 243456 WARNING nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received unexpected event network-vif-plugged-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 for instance with vm_state deleted and task_state None.
Feb 28 10:25:28 compute-0 nova_compute[243452]: 2026-02-28 10:25:28.720 243456 DEBUG nova.compute.manager [req-ae89ffef-c2ee-4e3a-829f-4bc3ef1f9306 req-9ecb288d-f9b1-4425-abb6-80777d09b5cd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Received event network-vif-deleted-8b15ec49-ee59-41b3-b0e8-b6779ab7bde7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:29 compute-0 ceph-mon[76304]: pgmap v1807: 305 pgs: 305 active+clean; 279 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 281 op/s
Feb 28 10:25:29 compute-0 nova_compute[243452]: 2026-02-28 10:25:29.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:25:29
Feb 28 10:25:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:25:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:25:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Feb 28 10:25:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:25:29 compute-0 nova_compute[243452]: 2026-02-28 10:25:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:29 compute-0 nova_compute[243452]: 2026-02-28 10:25:29.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 233 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.568 243456 DEBUG nova.compute.manager [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.569 243456 DEBUG nova.compute.manager [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing instance network info cache due to event network-changed-cc564e14-816b-4b92-877a-db0b2ddd0285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:25:30 compute-0 ceph-mon[76304]: pgmap v1808: 305 pgs: 305 active+clean; 233 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.570 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.570 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.571 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Refreshing network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.623 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.624 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.624 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.625 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.625 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.627 243456 INFO nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Terminating instance
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.630 243456 DEBUG nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:25:30 compute-0 kernel: tapcc564e14-81 (unregistering): left promiscuous mode
Feb 28 10:25:30 compute-0 NetworkManager[49805]: <info>  [1772274330.6839] device (tapcc564e14-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.691 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 ovn_controller[146846]: 2026-02-28T10:25:30Z|01170|binding|INFO|Releasing lport cc564e14-816b-4b92-877a-db0b2ddd0285 from this chassis (sb_readonly=0)
Feb 28 10:25:30 compute-0 ovn_controller[146846]: 2026-02-28T10:25:30Z|01171|binding|INFO|Setting lport cc564e14-816b-4b92-877a-db0b2ddd0285 down in Southbound
Feb 28 10:25:30 compute-0 ovn_controller[146846]: 2026-02-28T10:25:30Z|01172|binding|INFO|Removing iface tapcc564e14-81 ovn-installed in OVS
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.703 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:45:f7 10.100.0.10'], port_security=['fa:16:3e:cc:45:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a1bf329d-ed65-4cbc-99cb-e49716d1b24d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ee12748-b368-477a-aacb-62375ce0b51c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b899b7a-4953-43a9-ae1c-b9897419d094 ec278221-1438-4a93-ade3-9d533e8b727e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2fa6d2-fb6a-49e0-af1a-0d44f4a09e25, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=cc564e14-816b-4b92-877a-db0b2ddd0285) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Port cc564e14-816b-4b92-877a-db0b2ddd0285 in datapath 9ee12748-b368-477a-aacb-62375ce0b51c unbound from our chassis
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.708 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ee12748-b368-477a-aacb-62375ce0b51c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf9af08-c615-4d17-bbd8-b26b58f75ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.711 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c namespace which is not needed anymore
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:25:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:25:30 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Deactivated successfully.
Feb 28 10:25:30 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000072.scope: Consumed 14.712s CPU time.
Feb 28 10:25:30 compute-0 systemd-machined[209480]: Machine qemu-144-instance-00000072 terminated.
Feb 28 10:25:30 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : haproxy version is 2.8.14-c23fe91
Feb 28 10:25:30 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [NOTICE]   (340965) : path to executable is /usr/sbin/haproxy
Feb 28 10:25:30 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [WARNING]  (340965) : Exiting Master process...
Feb 28 10:25:30 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [ALERT]    (340965) : Current worker (340967) exited with code 143 (Terminated)
Feb 28 10:25:30 compute-0 neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c[340952]: [WARNING]  (340965) : All workers exited. Exiting... (0)
Feb 28 10:25:30 compute-0 systemd[1]: libpod-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope: Deactivated successfully.
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.852 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 podman[343154]: 2026-02-28 10:25:30.855978146 +0000 UTC m=+0.057796029 container died a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.857 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.866 243456 INFO nova.virt.libvirt.driver [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Instance destroyed successfully.
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.867 243456 DEBUG nova.objects.instance [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid a1bf329d-ed65-4cbc-99cb-e49716d1b24d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.885 243456 DEBUG nova.virt.libvirt.vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:24:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1241744982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=114,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhOy8scu7pucn6hR6bdEJEq/jaPKx42EW3BCfNrjQsnOj6n32/A+FEMwyqz7ox0jzcrbxxA91mTDdT3vrbzI6b58jlzlwGg5fJ7HUGs8jPxEEShL7FM9h/slX6zH/FfpQ==',key_name='tempest-TestSecurityGroupsBasicOps-850120140',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:24:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-t2on9vk5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:24:22Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=a1bf329d-ed65-4cbc-99cb-e49716d1b24d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.886 243456 DEBUG nova.network.os_vif_util [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662-userdata-shm.mount: Deactivated successfully.
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.888 243456 DEBUG nova.network.os_vif_util [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.889 243456 DEBUG os_vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-efddfc5a8b916af41ee280cb5601c71a26e54fba8c82938781e38e62b83cab90-merged.mount: Deactivated successfully.
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.892 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc564e14-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.898 243456 INFO os_vif [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:45:f7,bridge_name='br-int',has_traffic_filtering=True,id=cc564e14-816b-4b92-877a-db0b2ddd0285,network=Network(9ee12748-b368-477a-aacb-62375ce0b51c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc564e14-81')
Feb 28 10:25:30 compute-0 podman[343154]: 2026-02-28 10:25:30.905442724 +0000 UTC m=+0.107260577 container cleanup a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:25:30 compute-0 systemd[1]: libpod-conmon-a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662.scope: Deactivated successfully.
Feb 28 10:25:30 compute-0 podman[343207]: 2026-02-28 10:25:30.96284267 +0000 UTC m=+0.039440949 container remove a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.967 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efaae4d9-51fa-446d-b8c3-2ed8dae3ab54]: (4, ('Sat Feb 28 10:25:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c (a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662)\na7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662\nSat Feb 28 10:25:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c (a7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662)\na7108b92bf425ffca8562826531bfa907c004597723e532d68f0760eb8e81662\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.968 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[446c562b-235c-4fd9-8131-e0e57f8d0c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.969 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ee12748-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 kernel: tap9ee12748-b0: left promiscuous mode
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.977 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[932aa416-5699-4395-97fb-87f5db6a8dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.996 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9708f518-b15f-4a90-89a0-bbce23bba79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:30.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0223e49-a3f2-4d95-b133-db5d1b76ab81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.997 243456 DEBUG oslo_concurrency.lockutils [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.998 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:25:30 compute-0 nova_compute[243452]: 2026-02-28 10:25:30.998 243456 DEBUG nova.compute.manager [req-837e3dfa-0e0b-44ed-a5c5-47a8652cb8bc req-e160142b-1263-49e2-9696-55201eb28b35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-unplugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:25:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4aec7bc6-28fd-4919-b768-3ee6845cebad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576807, 'reachable_time': 29858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343229, 'error': None, 'target': 'ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d9ee12748\x2db368\x2d477a\x2daacb\x2d62375ce0b51c.mount: Deactivated successfully.
Feb 28 10:25:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.012 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ee12748-b368-477a-aacb-62375ce0b51c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:25:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:31.013 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[990c6b30-b659-4cd0-bb24-29eccc376091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.151 243456 INFO nova.virt.libvirt.driver [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deleting instance files /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_del
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.151 243456 INFO nova.virt.libvirt.driver [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deletion of /var/lib/nova/instances/a1bf329d-ed65-4cbc-99cb-e49716d1b24d_del complete
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.202 243456 INFO nova.compute.manager [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 0.57 seconds to destroy the instance on the hypervisor.
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG oslo.service.loopingcall [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.203 243456 DEBUG nova.network.neutron [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:25:31 compute-0 nova_compute[243452]: 2026-02-28 10:25:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 131 op/s
Feb 28 10:25:32 compute-0 ceph-mon[76304]: pgmap v1809: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 131 op/s
Feb 28 10:25:32 compute-0 nova_compute[243452]: 2026-02-28 10:25:32.679 243456 DEBUG nova.network.neutron [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:32 compute-0 nova_compute[243452]: 2026-02-28 10:25:32.701 243456 INFO nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Took 1.50 seconds to deallocate network for instance.
Feb 28 10:25:32 compute-0 nova_compute[243452]: 2026-02-28 10:25:32.763 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:32 compute-0 nova_compute[243452]: 2026-02-28 10:25:32.763 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:32 compute-0 nova_compute[243452]: 2026-02-28 10:25:32.818 243456 DEBUG oslo_concurrency.processutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.206 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated VIF entry in instance network info cache for port cc564e14-816b-4b92-877a-db0b2ddd0285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.206 243456 DEBUG nova.network.neutron [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [{"id": "cc564e14-816b-4b92-877a-db0b2ddd0285", "address": "fa:16:3e:cc:45:f7", "network": {"id": "9ee12748-b368-477a-aacb-62375ce0b51c", "bridge": "br-int", "label": "tempest-network-smoke--1326325098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc564e14-81", "ovs_interfaceid": "cc564e14-816b-4b92-877a-db0b2ddd0285", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.213 243456 DEBUG oslo_concurrency.lockutils [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] No waiting events found dispatching network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 WARNING nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received unexpected event network-vif-plugged-cc564e14-816b-4b92-877a-db0b2ddd0285 for instance with vm_state deleted and task_state None.
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.214 243456 DEBUG nova.compute.manager [req-b57455ff-8070-4c6d-b50c-6fe252a53f13 req-ccb51e9e-b12f-46be-ac1c-a6ade6f80200 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Received event network-vif-deleted-cc564e14-816b-4b92-877a-db0b2ddd0285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.231 243456 DEBUG oslo_concurrency.lockutils [req-bdf21405-8d8e-4e1b-a58c-41a0e42e3244 req-a3826c24-a5a8-4997-bf12-5c9ea789fa24 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:25:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561652332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.368 243456 DEBUG oslo_concurrency.processutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.383 243456 DEBUG nova.compute.provider_tree [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.398 243456 DEBUG nova.scheduler.client.report [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.426 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.457 243456 INFO nova.scheduler.client.report [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance a1bf329d-ed65-4cbc-99cb-e49716d1b24d
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.530 243456 DEBUG oslo_concurrency.lockutils [None req-85bd24a8-4c83-4d1d-8837-e2e26caa3161 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "a1bf329d-ed65-4cbc-99cb-e49716d1b24d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/561652332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.667 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.668 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.668 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:25:33 compute-0 nova_compute[243452]: 2026-02-28 10:25:33.689 243456 DEBUG nova.compute.utils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Feb 28 10:25:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.329 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-a1bf329d-ed65-4cbc-99cb-e49716d1b24d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.387 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.388 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.388 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.389 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.389 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 16 KiB/s wr, 62 op/s
Feb 28 10:25:34 compute-0 ceph-mon[76304]: pgmap v1810: 305 pgs: 305 active+clean; 197 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 16 KiB/s wr, 62 op/s
Feb 28 10:25:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2085250137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:34 compute-0 nova_compute[243452]: 2026-02-28 10:25:34.957 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.130 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.133 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3667MB free_disk=59.96235081087798GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.134 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.134 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.201 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.219 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2085250137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660763675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.763 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.770 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.792 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.821 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.821 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:35 compute-0 nova_compute[243452]: 2026-02-28 10:25:35.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:36 compute-0 nova_compute[243452]: 2026-02-28 10:25:36.258 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274321.2568827, 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:36 compute-0 nova_compute[243452]: 2026-02-28 10:25:36.259 243456 INFO nova.compute.manager [-] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] VM Stopped (Lifecycle Event)
Feb 28 10:25:36 compute-0 nova_compute[243452]: 2026-02-28 10:25:36.298 243456 DEBUG nova.compute.manager [None req-f2bf3ae3-61a7-45d8-be85-658d149bb255 - - - - - -] [instance: 0f9fd694-ce0d-49b2-a027-0c24d6bbfa64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.430 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06dd8bdb-67e2-4a21-9f83-b5510704cb5b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e273f9f1-84ad-4a28-b582-99a6b6931ccf) old=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.432 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e273f9f1-84ad-4a28-b582-99a6b6931ccf in datapath f9247cb3-1f5f-4b25-8137-520bf2985945 updated
Feb 28 10:25:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.433 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9247cb3-1f5f-4b25-8137-520bf2985945, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:25:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:36.434 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[efa6a36c-4f17-4e74-b333-0782cbff95ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s rd, 17 KiB/s wr, 85 op/s
Feb 28 10:25:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3660763675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:36 compute-0 ceph-mon[76304]: pgmap v1811: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s rd, 17 KiB/s wr, 85 op/s
Feb 28 10:25:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Feb 28 10:25:38 compute-0 nova_compute[243452]: 2026-02-28 10:25:38.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:38 compute-0 nova_compute[243452]: 2026-02-28 10:25:38.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:38 compute-0 ceph-mon[76304]: pgmap v1812: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Feb 28 10:25:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:38 compute-0 nova_compute[243452]: 2026-02-28 10:25:38.817 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:25:39 compute-0 nova_compute[243452]: 2026-02-28 10:25:39.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Feb 28 10:25:40 compute-0 ceph-mon[76304]: pgmap v1813: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 50 op/s
Feb 28 10:25:40 compute-0 nova_compute[243452]: 2026-02-28 10:25:40.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.2958919954632163e-05 of space, bias 1.0, pg target 0.003887675986389649 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928262304062723 of space, bias 1.0, pg target 0.7478478691218817 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.352365090246661e-07 of space, bias 4.0, pg target 0.0008822838108295993 quantized to 16 (current 16)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:25:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:25:41 compute-0 nova_compute[243452]: 2026-02-28 10:25:41.160 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274326.1591444, 9f873b2a-d04c-475f-941d-397e5a9bc81a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:41 compute-0 nova_compute[243452]: 2026-02-28 10:25:41.161 243456 INFO nova.compute.manager [-] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] VM Stopped (Lifecycle Event)
Feb 28 10:25:41 compute-0 nova_compute[243452]: 2026-02-28 10:25:41.183 243456 DEBUG nova.compute.manager [None req-bdd83941-60e2-449f-84a4-8017efda6279 - - - - - -] [instance: 9f873b2a-d04c-475f-941d-397e5a9bc81a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:25:42 compute-0 ceph-mon[76304]: pgmap v1814: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:25:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:44 compute-0 nova_compute[243452]: 2026-02-28 10:25:44.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:44 compute-0 sudo[343299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:25:44 compute-0 sudo[343299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:44 compute-0 sudo[343299]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:44 compute-0 sudo[343324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:25:44 compute-0 sudo[343324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 10:25:44 compute-0 ceph-mon[76304]: pgmap v1815: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 10:25:44 compute-0 sudo[343324]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:25:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:25:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:25:44 compute-0 sudo[343380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:25:44 compute-0 sudo[343380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:44 compute-0 sudo[343380]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:44 compute-0 sudo[343405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:25:44 compute-0 sudo[343405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.184123505 +0000 UTC m=+0.044984790 container create f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:25:45 compute-0 systemd[1]: Started libpod-conmon-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope.
Feb 28 10:25:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.159301768 +0000 UTC m=+0.020163073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.277092978 +0000 UTC m=+0.137954283 container init f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.282748722 +0000 UTC m=+0.143609987 container start f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.286206731 +0000 UTC m=+0.147067996 container attach f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 28 10:25:45 compute-0 naughty_wright[343458]: 167 167
Feb 28 10:25:45 compute-0 systemd[1]: libpod-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope: Deactivated successfully.
Feb 28 10:25:45 compute-0 conmon[343458]: conmon f563734a623bdbc45951 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope/container/memory.events
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.290370452 +0000 UTC m=+0.151231717 container died f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0be7e6999c90064df6bc219978b544b8af3e739688a02d07420e68fec7fe1a9-merged.mount: Deactivated successfully.
Feb 28 10:25:45 compute-0 podman[343442]: 2026-02-28 10:25:45.326105933 +0000 UTC m=+0.186967228 container remove f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:25:45 compute-0 systemd[1]: libpod-conmon-f563734a623bdbc459511d3ad8e8fe273f8acbc62b47b05e1f3853ab32301a3c.scope: Deactivated successfully.
Feb 28 10:25:45 compute-0 podman[343481]: 2026-02-28 10:25:45.452685217 +0000 UTC m=+0.040914962 container create 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:25:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:25:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:25:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:25:45 compute-0 systemd[1]: Started libpod-conmon-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope.
Feb 28 10:25:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:45 compute-0 podman[343481]: 2026-02-28 10:25:45.432919266 +0000 UTC m=+0.021148991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:45 compute-0 podman[343481]: 2026-02-28 10:25:45.531551373 +0000 UTC m=+0.119781118 container init 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:25:45 compute-0 podman[343481]: 2026-02-28 10:25:45.540657906 +0000 UTC m=+0.128887661 container start 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:25:45 compute-0 podman[343481]: 2026-02-28 10:25:45.546119323 +0000 UTC m=+0.134349068 container attach 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:25:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2887033397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:25:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.828 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06dd8bdb-67e2-4a21-9f83-b5510704cb5b, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e273f9f1-84ad-4a28-b582-99a6b6931ccf) old=Port_Binding(mac=['fa:16:3e:b1:73:63 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9247cb3-1f5f-4b25-8137-520bf2985945', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a5c264946768448e9246a07a54d4b13c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:25:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.834 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e273f9f1-84ad-4a28-b582-99a6b6931ccf in datapath f9247cb3-1f5f-4b25-8137-520bf2985945 updated
Feb 28 10:25:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.836 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9247cb3-1f5f-4b25-8137-520bf2985945, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:25:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:45.837 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc9f65a-7b3b-4724-874f-a38324568284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:25:45 compute-0 nova_compute[243452]: 2026-02-28 10:25:45.863 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274330.8632147, a1bf329d-ed65-4cbc-99cb-e49716d1b24d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:25:45 compute-0 nova_compute[243452]: 2026-02-28 10:25:45.864 243456 INFO nova.compute.manager [-] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] VM Stopped (Lifecycle Event)
Feb 28 10:25:45 compute-0 nova_compute[243452]: 2026-02-28 10:25:45.889 243456 DEBUG nova.compute.manager [None req-aa84a9db-4c26-4221-afb4-5c3d47e9b84f - - - - - -] [instance: a1bf329d-ed65-4cbc-99cb-e49716d1b24d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:25:45 compute-0 nova_compute[243452]: 2026-02-28 10:25:45.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:46 compute-0 nostalgic_black[343498]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:25:46 compute-0 nostalgic_black[343498]: --> All data devices are unavailable
Feb 28 10:25:46 compute-0 systemd[1]: libpod-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope: Deactivated successfully.
Feb 28 10:25:46 compute-0 podman[343481]: 2026-02-28 10:25:46.073573737 +0000 UTC m=+0.661803472 container died 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7040cec0e5460779f18990ffb0242916fb86dff501dbb114a483c749a740b367-merged.mount: Deactivated successfully.
Feb 28 10:25:46 compute-0 podman[343481]: 2026-02-28 10:25:46.119458631 +0000 UTC m=+0.707688346 container remove 971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:25:46 compute-0 systemd[1]: libpod-conmon-971d8f5da6284f39123ba3381f4fc1548d2a523c2d519ffc85775a8829edcc75.scope: Deactivated successfully.
Feb 28 10:25:46 compute-0 sudo[343405]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:46 compute-0 sudo[343532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:25:46 compute-0 sudo[343532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:46 compute-0 sudo[343532]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:46 compute-0 sudo[343557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:25:46 compute-0 sudo[343557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 10:25:46 compute-0 ceph-mon[76304]: pgmap v1816: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.588715125 +0000 UTC m=+0.047526373 container create a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:25:46 compute-0 systemd[1]: Started libpod-conmon-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope.
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.562416926 +0000 UTC m=+0.021228233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.685226651 +0000 UTC m=+0.144037978 container init a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.690452382 +0000 UTC m=+0.149263639 container start a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:25:46 compute-0 festive_austin[343609]: 167 167
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.694250712 +0000 UTC m=+0.153061959 container attach a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:25:46 compute-0 systemd[1]: libpod-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope: Deactivated successfully.
Feb 28 10:25:46 compute-0 conmon[343609]: conmon a3912f6f39049e9ebb35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope/container/memory.events
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.697238068 +0000 UTC m=+0.156049315 container died a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ddf9c631c0441c4aaa47e6a3debce4d5bc21dc67fec32f02939099004e6cc39-merged.mount: Deactivated successfully.
Feb 28 10:25:46 compute-0 podman[343593]: 2026-02-28 10:25:46.749578029 +0000 UTC m=+0.208389276 container remove a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_austin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:25:46 compute-0 systemd[1]: libpod-conmon-a3912f6f39049e9ebb35c66cb187da81231cdbe3897468f7d1269920b8351141.scope: Deactivated successfully.
Feb 28 10:25:46 compute-0 podman[343633]: 2026-02-28 10:25:46.926878476 +0000 UTC m=+0.060291771 container create b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:25:46 compute-0 systemd[1]: Started libpod-conmon-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope.
Feb 28 10:25:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:46 compute-0 podman[343633]: 2026-02-28 10:25:46.900175835 +0000 UTC m=+0.033589150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:47 compute-0 podman[343633]: 2026-02-28 10:25:47.005404783 +0000 UTC m=+0.138818118 container init b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:25:47 compute-0 podman[343633]: 2026-02-28 10:25:47.014519926 +0000 UTC m=+0.147933171 container start b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:25:47 compute-0 podman[343633]: 2026-02-28 10:25:47.017811231 +0000 UTC m=+0.151224486 container attach b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]: {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     "0": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "devices": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "/dev/loop3"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             ],
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_name": "ceph_lv0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_size": "21470642176",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "name": "ceph_lv0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "tags": {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_name": "ceph",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.crush_device_class": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.encrypted": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.objectstore": "bluestore",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_id": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.vdo": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.with_tpm": "0"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             },
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "vg_name": "ceph_vg0"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         }
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     ],
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     "1": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "devices": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "/dev/loop4"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             ],
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_name": "ceph_lv1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_size": "21470642176",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "name": "ceph_lv1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "tags": {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_name": "ceph",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.crush_device_class": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.encrypted": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.objectstore": "bluestore",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_id": "1",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.vdo": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.with_tpm": "0"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             },
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "vg_name": "ceph_vg1"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         }
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     ],
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     "2": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "devices": [
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "/dev/loop5"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             ],
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_name": "ceph_lv2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_size": "21470642176",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "name": "ceph_lv2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "tags": {
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.cluster_name": "ceph",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.crush_device_class": "",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.encrypted": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.objectstore": "bluestore",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osd_id": "2",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.vdo": "0",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:                 "ceph.with_tpm": "0"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             },
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "type": "block",
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:             "vg_name": "ceph_vg2"
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:         }
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]:     ]
Feb 28 10:25:47 compute-0 intelligent_kowalevski[343649]: }
Feb 28 10:25:47 compute-0 systemd[1]: libpod-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope: Deactivated successfully.
Feb 28 10:25:47 compute-0 podman[343658]: 2026-02-28 10:25:47.387236724 +0000 UTC m=+0.040439698 container died b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-92991937c797c0188d1e7da69e9ffe5c461408e3f1685ea84c87674f1cf678f2-merged.mount: Deactivated successfully.
Feb 28 10:25:47 compute-0 podman[343658]: 2026-02-28 10:25:47.439649487 +0000 UTC m=+0.092852421 container remove b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:25:47 compute-0 systemd[1]: libpod-conmon-b4c418c4f92387fa6494f8b90fd8a7e8777d7810dcd1e1c0299b1d5eba5a2c40.scope: Deactivated successfully.
Feb 28 10:25:47 compute-0 sudo[343557]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:47 compute-0 sudo[343673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:25:47 compute-0 sudo[343673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:47 compute-0 sudo[343673]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:47 compute-0 sudo[343698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:25:47 compute-0 sudo[343698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:47 compute-0 podman[343735]: 2026-02-28 10:25:47.985938375 +0000 UTC m=+0.055916005 container create a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:25:48 compute-0 systemd[1]: Started libpod-conmon-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope.
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:47.960099299 +0000 UTC m=+0.030077019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:48.08382228 +0000 UTC m=+0.153799910 container init a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:48.092794499 +0000 UTC m=+0.162772129 container start a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:48.097754262 +0000 UTC m=+0.167731932 container attach a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:25:48 compute-0 quirky_cerf[343752]: 167 167
Feb 28 10:25:48 compute-0 systemd[1]: libpod-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope: Deactivated successfully.
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:48.102626933 +0000 UTC m=+0.172604533 container died a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f4d89979cb50873ca25cb8f8ac8d47a5233fb242109715046bcc8a1e8ecf70-merged.mount: Deactivated successfully.
Feb 28 10:25:48 compute-0 podman[343735]: 2026-02-28 10:25:48.152967386 +0000 UTC m=+0.222945016 container remove a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:25:48 compute-0 systemd[1]: libpod-conmon-a9df81eefafffd072c748a7b0c61a802c5d6909888a583097bc63dbf4def00da.scope: Deactivated successfully.
Feb 28 10:25:48 compute-0 podman[343775]: 2026-02-28 10:25:48.34469952 +0000 UTC m=+0.056460501 container create c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:25:48 compute-0 systemd[1]: Started libpod-conmon-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope.
Feb 28 10:25:48 compute-0 podman[343775]: 2026-02-28 10:25:48.320641486 +0000 UTC m=+0.032402467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:25:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:25:48 compute-0 podman[343775]: 2026-02-28 10:25:48.463369495 +0000 UTC m=+0.175130516 container init c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:25:48 compute-0 podman[343775]: 2026-02-28 10:25:48.473443986 +0000 UTC m=+0.185204957 container start c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:25:48 compute-0 podman[343775]: 2026-02-28 10:25:48.478624646 +0000 UTC m=+0.190385657 container attach c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:25:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:48 compute-0 ceph-mon[76304]: pgmap v1817: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:49 compute-0 nova_compute[243452]: 2026-02-28 10:25:49.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:49 compute-0 lvm[343871]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:25:49 compute-0 lvm[343871]: VG ceph_vg0 finished
Feb 28 10:25:49 compute-0 lvm[343872]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:25:49 compute-0 lvm[343872]: VG ceph_vg1 finished
Feb 28 10:25:49 compute-0 lvm[343874]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:25:49 compute-0 lvm[343874]: VG ceph_vg2 finished
Feb 28 10:25:49 compute-0 clever_goldberg[343792]: {}
Feb 28 10:25:49 compute-0 systemd[1]: libpod-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Deactivated successfully.
Feb 28 10:25:49 compute-0 systemd[1]: libpod-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Consumed 1.217s CPU time.
Feb 28 10:25:49 compute-0 podman[343775]: 2026-02-28 10:25:49.303021511 +0000 UTC m=+1.014782482 container died c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-978886479e0a44a594dcaacee5ba62da6da0f0fc4228482cea0de48780aa0489-merged.mount: Deactivated successfully.
Feb 28 10:25:49 compute-0 podman[343775]: 2026-02-28 10:25:49.348012059 +0000 UTC m=+1.059773040 container remove c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:25:49 compute-0 systemd[1]: libpod-conmon-c0e10ce97bbeda8090ca4c1841d9f68253eb4df8a963099dfe27e6e181297f02.scope: Deactivated successfully.
Feb 28 10:25:49 compute-0 sudo[343698]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:25:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:25:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:49 compute-0 sudo[343889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:25:49 compute-0 sudo[343889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:25:49 compute-0 sudo[343889]: pam_unix(sudo:session): session closed for user root
Feb 28 10:25:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:25:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:50 compute-0 nova_compute[243452]: 2026-02-28 10:25:50.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:51 compute-0 ceph-mon[76304]: pgmap v1818: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:52 compute-0 podman[343915]: 2026-02-28 10:25:52.156873843 +0000 UTC m=+0.086455516 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:25:52 compute-0 podman[343914]: 2026-02-28 10:25:52.220652364 +0000 UTC m=+0.151448942 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:25:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.547 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.548 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.572 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:25:52 compute-0 ceph-mon[76304]: pgmap v1819: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.660 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.661 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.672 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.673 243456 INFO nova.compute.claims [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:25:52 compute-0 nova_compute[243452]: 2026-02-28 10:25:52.770 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:25:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975519888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.323 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.331 243456 DEBUG nova.compute.provider_tree [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.355 243456 DEBUG nova.scheduler.client.report [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.386 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.387 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.440 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.440 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.458 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.478 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.580 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.581 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.582 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating image(s)
Feb 28 10:25:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1975519888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.606 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.636 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.665 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.670 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.749 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.751 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.751 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.752 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.776 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.781 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:53 compute-0 nova_compute[243452]: 2026-02-28 10:25:53.850 243456 DEBUG nova.policy [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:25:54 compute-0 sshd-session[343961]: Received disconnect from 103.67.78.132 port 46444:11: Bye Bye [preauth]
Feb 28 10:25:54 compute-0 sshd-session[343961]: Disconnected from authenticating user root 103.67.78.132 port 46444 [preauth]
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.053 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.142 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.237 243456 DEBUG nova.objects.instance [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.252 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.253 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Ensure instance console log exists: /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.253 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.254 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:54 compute-0 nova_compute[243452]: 2026-02-28 10:25:54.254 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:54 compute-0 ceph-mon[76304]: pgmap v1820: 305 pgs: 305 active+clean; 153 MiB data, 858 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:25:55 compute-0 nova_compute[243452]: 2026-02-28 10:25:55.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:56 compute-0 nova_compute[243452]: 2026-02-28 10:25:56.150 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Successfully created port: 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:25:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 182 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 906 KiB/s wr, 25 op/s
Feb 28 10:25:56 compute-0 ceph-mon[76304]: pgmap v1821: 305 pgs: 305 active+clean; 182 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 906 KiB/s wr, 25 op/s
Feb 28 10:25:56 compute-0 nova_compute[243452]: 2026-02-28 10:25:56.934 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Successfully updated port: 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:25:56 compute-0 nova_compute[243452]: 2026-02-28 10:25:56.952 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:56 compute-0 nova_compute[243452]: 2026-02-28 10:25:56.953 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:56 compute-0 nova_compute[243452]: 2026-02-28 10:25:56.953 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:25:57 compute-0 nova_compute[243452]: 2026-02-28 10:25:57.163 243456 DEBUG nova.compute.manager [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:25:57 compute-0 nova_compute[243452]: 2026-02-28 10:25:57.163 243456 DEBUG nova.compute.manager [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:25:57 compute-0 nova_compute[243452]: 2026-02-28 10:25:57.164 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:25:57 compute-0 nova_compute[243452]: 2026-02-28 10:25:57.191 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:25:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.865 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:25:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.221 243456 DEBUG nova.network.neutron [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.251 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.252 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance network_info: |[{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.253 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.253 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.256 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start _get_guest_xml network_info=[{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.262 243456 WARNING nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.267 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.268 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.272 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.273 243456 DEBUG nova.virt.libvirt.host [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.274 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.274 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.275 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.276 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.277 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.278 243456 DEBUG nova.virt.hardware [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.282 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:25:58 compute-0 ceph-mon[76304]: pgmap v1822: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:25:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:25:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:25:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021679401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.877 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.912 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:25:58 compute-0 nova_compute[243452]: 2026-02-28 10:25:58.919 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:25:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908879600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.507 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.509 243456 DEBUG nova.virt.libvirt.vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:25:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.510 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.511 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.513 243456 DEBUG nova.objects.instance [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.554 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <uuid>8f174807-b15f-4588-83e1-c6e2ef2c2b4a</uuid>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <name>instance-00000075</name>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176</nova:name>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:25:58</nova:creationTime>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <nova:port uuid="3e3d207f-3991-41f0-a55c-44cd0479e7f8">
Feb 28 10:25:59 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <system>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="serial">8f174807-b15f-4588-83e1-c6e2ef2c2b4a</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="uuid">8f174807-b15f-4588-83e1-c6e2ef2c2b4a</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </system>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <os>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </os>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <features>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </features>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk">
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config">
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </source>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:25:59 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:27:84:10"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <target dev="tap3e3d207f-39"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/console.log" append="off"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <video>
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </video>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:25:59 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:25:59 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:25:59 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:25:59 compute-0 nova_compute[243452]: </domain>
Feb 28 10:25:59 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.555 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Preparing to wait for external event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.556 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.556 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.557 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.558 243456 DEBUG nova.virt.libvirt.vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:25:53Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.559 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.560 243456 DEBUG nova.network.os_vif_util [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.561 243456 DEBUG os_vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.563 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.563 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.569 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e3d207f-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.570 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e3d207f-39, col_values=(('external_ids', {'iface-id': '3e3d207f-3991-41f0-a55c-44cd0479e7f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:84:10', 'vm-uuid': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:25:59 compute-0 NetworkManager[49805]: <info>  [1772274359.5732] manager: (tap3e3d207f-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.582 243456 INFO os_vif [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39')
Feb 28 10:25:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1021679401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1908879600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.652 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.652 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.653 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:27:84:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.654 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Using config drive
Feb 28 10:25:59 compute-0 nova_compute[243452]: 2026-02-28 10:25:59.687 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.351 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Creating config drive at /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.357 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvbh_8g7y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.491 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvbh_8g7y" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.523 243456 DEBUG nova.storage.rbd_utils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.527 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:00 compute-0 ceph-mon[76304]: pgmap v1823: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.709 243456 DEBUG oslo_concurrency.processutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config 8f174807-b15f-4588-83e1-c6e2ef2c2b4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.711 243456 INFO nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deleting local config drive /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a/disk.config because it was imported into RBD.
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.716 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.717 243456 DEBUG nova.network.neutron [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.737 243456 DEBUG oslo_concurrency.lockutils [req-32f09ad1-2c35-4aa2-adce-d90a3afeab52 req-f0051e1d-3c19-4e93-b478-5ee17dc31560 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:00 compute-0 kernel: tap3e3d207f-39: entered promiscuous mode
Feb 28 10:26:00 compute-0 NetworkManager[49805]: <info>  [1772274360.7720] manager: (tap3e3d207f-39): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Feb 28 10:26:00 compute-0 ovn_controller[146846]: 2026-02-28T10:26:00Z|01173|binding|INFO|Claiming lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 for this chassis.
Feb 28 10:26:00 compute-0 ovn_controller[146846]: 2026-02-28T10:26:00Z|01174|binding|INFO|3e3d207f-3991-41f0-a55c-44cd0479e7f8: Claiming fa:16:3e:27:84:10 10.100.0.3
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.784 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:84:10 10.100.0.3'], port_security=['fa:16:3e:27:84:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9 f3ded7bc-f509-4fba-90dd-a4f12df4a200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3e3d207f-3991-41f0-a55c-44cd0479e7f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.785 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 bound to our chassis
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.786 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.800 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[034e6def-9ef9-4ed8-acb3-040c1d5c7f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.801 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3c17451-f1 in ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.804 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3c17451-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[75fbca29-8a27-4210-9fec-5c85d08acda2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e18101-e93c-4d40-9170-319e3e3c4dcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 systemd-machined[209480]: New machine qemu-148-instance-00000075.
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.819 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cec89c59-5d9e-4e31-980c-fa0969951972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000075.
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:00 compute-0 ovn_controller[146846]: 2026-02-28T10:26:00Z|01175|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 ovn-installed in OVS
Feb 28 10:26:00 compute-0 ovn_controller[146846]: 2026-02-28T10:26:00Z|01176|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 up in Southbound
Feb 28 10:26:00 compute-0 nova_compute[243452]: 2026-02-28 10:26:00.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:00 compute-0 systemd-udevd[344289]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.833 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:00 compute-0 NetworkManager[49805]: <info>  [1772274360.8437] device (tap3e3d207f-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:26:00 compute-0 NetworkManager[49805]: <info>  [1772274360.8445] device (tap3e3d207f-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[49632bbe-a826-49e8-b37b-f5d4d0d2653d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.874 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d78e6a5a-2a4a-445c-ac4b-8265c47c229b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 NetworkManager[49805]: <info>  [1772274360.8809] manager: (tapa3c17451-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ed30f-0fb5-47fb-ba48-37d68bad7e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.911 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[467c8dd3-ad2d-4a64-83c7-1019c3efcc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.914 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[23fcf48a-d59e-408c-98a0-e3affa47486e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 NetworkManager[49805]: <info>  [1772274360.9365] device (tapa3c17451-f0): carrier: link connected
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.939 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a5137e7a-90eb-404c-a6a9-f9c543dca2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.954 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4f1f9d-e5de-457c-a1a7-5e47767d9036]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344319, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.968 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa58034-faeb-43e9-9fbd-c82a266fac07]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:c66e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586822, 'tstamp': 586822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344320, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:00.984 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00456d8f-ef9b-4954-8017-bc6d09fd7251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344321, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4aff1e-ffa4-44b0-a9e5-b646ed0bbab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.075 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[107b395c-3589-405d-a8c2-e8317ea9e9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.077 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.078 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.078 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:01 compute-0 kernel: tapa3c17451-f0: entered promiscuous mode
Feb 28 10:26:01 compute-0 NetworkManager[49805]: <info>  [1772274361.0820] manager: (tapa3c17451-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.083 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:01 compute-0 ovn_controller[146846]: 2026-02-28T10:26:01Z|01177|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.085 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[992ae3d8-dc70-463e-a9ab-5a4d58e5bd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.087 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/a3c17451-fecb-4c3a-bc65-efba96c6e655.pid.haproxy
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.088 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'env', 'PROCESS_TAG=haproxy-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3c17451-fecb-4c3a-bc65-efba96c6e655.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG nova.compute.manager [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.122 243456 DEBUG oslo_concurrency.lockutils [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.123 243456 DEBUG nova.compute.manager [req-fc2a14b2-f17f-49f2-bfd6-b3f6f727e0f7 req-8a09ec66-d8c0-49d0-92e0-43dfa8672886 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Processing event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.278 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.2782345, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.279 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Started (Lifecycle Event)
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.281 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.285 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.289 243456 INFO nova.virt.libvirt.driver [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance spawned successfully.
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.289 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.297 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.307 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.316 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.317 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.318 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.319 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.320 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.320 243456 DEBUG nova.virt.libvirt.driver [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.331 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.331 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.2784963, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.332 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Paused (Lifecycle Event)
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.352 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.358 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274361.284623, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Resumed (Lifecycle Event)
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.386 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.392 243456 INFO nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 7.81 seconds to spawn the instance on the hypervisor.
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.393 243456 DEBUG nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.405 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.452 243456 INFO nova.compute.manager [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 8.83 seconds to build instance.
Feb 28 10:26:01 compute-0 podman[344393]: 2026-02-28 10:26:01.469039116 +0000 UTC m=+0.051204539 container create d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:26:01 compute-0 nova_compute[243452]: 2026-02-28 10:26:01.471 243456 DEBUG oslo_concurrency.lockutils [None req-b9e11790-4875-4324-afc8-ec1b491d0a16 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:01 compute-0 systemd[1]: Started libpod-conmon-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope.
Feb 28 10:26:01 compute-0 podman[344393]: 2026-02-28 10:26:01.442985324 +0000 UTC m=+0.025150807 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:26:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc280004ec33d4980617412d8deb4460aaa18af7068156edd80551a2bd739484/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:01 compute-0 podman[344393]: 2026-02-28 10:26:01.557951752 +0000 UTC m=+0.140117205 container init d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 10:26:01 compute-0 podman[344393]: 2026-02-28 10:26:01.56273051 +0000 UTC m=+0.144895943 container start d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:26:01 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : New worker (344414) forked
Feb 28 10:26:01 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : Loading success.
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.625 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a unbound from our chassis
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.628 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:01.629 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72c0d16c-c2c5-4b4a-953a-39d6c53b02aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:26:02 compute-0 ceph-mon[76304]: pgmap v1824: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.520 243456 DEBUG nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.521 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.522 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.523 243456 DEBUG oslo_concurrency.lockutils [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.523 243456 DEBUG nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:26:03 compute-0 nova_compute[243452]: 2026-02-28 10:26:03.524 243456 WARNING nova.compute.manager [req-09e89743-15f4-4202-a6f9-d6e6e2650cce req-d04c8965-3967-47ac-9753-9d2803781bec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received unexpected event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with vm_state active and task_state None.
Feb 28 10:26:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:04 compute-0 nova_compute[243452]: 2026-02-28 10:26:04.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:26:04 compute-0 nova_compute[243452]: 2026-02-28 10:26:04.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:04 compute-0 ceph-mon[76304]: pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 881 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:26:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.399 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.401 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.402 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:05.403 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d98a89d1-7544-4890-9ea7-4c2cb2c844a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:05 compute-0 ovn_controller[146846]: 2026-02-28T10:26:05Z|01178|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 10:26:05 compute-0 nova_compute[243452]: 2026-02-28 10:26:05.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:05 compute-0 NetworkManager[49805]: <info>  [1772274365.4311] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Feb 28 10:26:05 compute-0 NetworkManager[49805]: <info>  [1772274365.4337] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Feb 28 10:26:05 compute-0 ovn_controller[146846]: 2026-02-28T10:26:05Z|01179|binding|INFO|Releasing lport 593df982-6822-42f7-9086-436943b49f8d from this chassis (sb_readonly=0)
Feb 28 10:26:05 compute-0 nova_compute[243452]: 2026-02-28 10:26:05.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:05 compute-0 nova_compute[243452]: 2026-02-28 10:26:05.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:06 compute-0 sshd-session[344424]: Invalid user solana from 45.148.10.240 port 36502
Feb 28 10:26:06 compute-0 sshd-session[344424]: Connection closed by invalid user solana 45.148.10.240 port 36502 [preauth]
Feb 28 10:26:06 compute-0 nova_compute[243452]: 2026-02-28 10:26:06.370 243456 DEBUG nova.compute.manager [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:06 compute-0 nova_compute[243452]: 2026-02-28 10:26:06.371 243456 DEBUG nova.compute.manager [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:26:06 compute-0 nova_compute[243452]: 2026-02-28 10:26:06.372 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:26:06 compute-0 nova_compute[243452]: 2026-02-28 10:26:06.372 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:26:06 compute-0 nova_compute[243452]: 2026-02-28 10:26:06.373 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:26:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:26:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.001 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.003 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.005 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:07.006 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e868867-dbca-4a2a-b5d8-f5687b325699]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:07 compute-0 ceph-mon[76304]: pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:26:08 compute-0 nova_compute[243452]: 2026-02-28 10:26:08.415 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:26:08 compute-0 nova_compute[243452]: 2026-02-28 10:26:08.416 243456 DEBUG nova.network.neutron [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:08 compute-0 nova_compute[243452]: 2026-02-28 10:26:08.445 243456 DEBUG oslo_concurrency.lockutils [req-cbabbbc5-506f-443b-a4d3-32403a62b0b9 req-3417a017-c43e-478c-8213-546b86595af0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 74 op/s
Feb 28 10:26:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:09 compute-0 nova_compute[243452]: 2026-02-28 10:26:09.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:09 compute-0 nova_compute[243452]: 2026-02-28 10:26:09.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:09 compute-0 ceph-mon[76304]: pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 74 op/s
Feb 28 10:26:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:26:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.771 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.772 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:10.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20748441-a48e-4e2d-9105-3667b060e160]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:11 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 10:26:11 compute-0 ceph-mon[76304]: pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:26:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.362 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.365 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.366 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:12 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:12.370 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[082de97c-4d9d-42bf-baa1-ddf1c8f557b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Feb 28 10:26:12 compute-0 ovn_controller[146846]: 2026-02-28T10:26:12Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:84:10 10.100.0.3
Feb 28 10:26:12 compute-0 ovn_controller[146846]: 2026-02-28T10:26:12Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:84:10 10.100.0.3
Feb 28 10:26:13 compute-0 ceph-mon[76304]: pgmap v1829: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Feb 28 10:26:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:14 compute-0 nova_compute[243452]: 2026-02-28 10:26:14.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 60 op/s
Feb 28 10:26:14 compute-0 nova_compute[243452]: 2026-02-28 10:26:14.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:15 compute-0 ceph-mon[76304]: pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 60 op/s
Feb 28 10:26:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 28 10:26:17 compute-0 ceph-mon[76304]: pgmap v1831: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 28 10:26:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.956 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.957 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.958 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:18.959 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8eecb95-6f7e-49f9-94ca-834137721e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:19 compute-0 nova_compute[243452]: 2026-02-28 10:26:19.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:19 compute-0 nova_compute[243452]: 2026-02-28 10:26:19.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:19 compute-0 ceph-mon[76304]: pgmap v1832: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.234 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.236 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.237 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.239 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf054547-9e5c-4972-b6d8-56f15585af5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.989 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:20 compute-0 nova_compute[243452]: 2026-02-28 10:26:20.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:20.990 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:26:21 compute-0 ceph-mon[76304]: pgmap v1833: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:23 compute-0 podman[344429]: 2026-02-28 10:26:23.100805312 +0000 UTC m=+0.044514536 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 28 10:26:23 compute-0 podman[344428]: 2026-02-28 10:26:23.12497222 +0000 UTC m=+0.069473167 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 28 10:26:23 compute-0 ceph-mon[76304]: pgmap v1834: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:26:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:23.992 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:24 compute-0 nova_compute[243452]: 2026-02-28 10:26:24.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 10:26:24 compute-0 nova_compute[243452]: 2026-02-28 10:26:24.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:25 compute-0 nova_compute[243452]: 2026-02-28 10:26:25.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:25 compute-0 ceph-mon[76304]: pgmap v1835: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 10:26:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 10:26:27 compute-0 ceph-mon[76304]: pgmap v1836: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Feb 28 10:26:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.436 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 10.100.0.2 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.439 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.441 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:28 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:28.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42790b6f-69cd-4b32-85a8-63bbec21a805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.452 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.453 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.469 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:26:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.719 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.720 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.731 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:26:28 compute-0 nova_compute[243452]: 2026-02-28 10:26:28.732 243456 INFO nova.compute.claims [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:26:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.073 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:26:29
Feb 28 10:26:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:26:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:26:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms']
Feb 28 10:26:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:26:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741980337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.589 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.596 243456 DEBUG nova.compute.provider_tree [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.620 243456 DEBUG nova.scheduler.client.report [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.646 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.647 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:26:29 compute-0 ceph-mon[76304]: pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Feb 28 10:26:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3741980337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.697 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.698 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.757 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.799 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.907 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.909 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.910 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating image(s)
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.944 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:29 compute-0 nova_compute[243452]: 2026-02-28 10:26:29.984 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.017 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.021 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.114 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.116 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.117 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.117 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.150 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.155 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9819900a-9819-4896-a490-54f445126d24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.214 243456 DEBUG nova.policy [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9046d3ef932481196b82d5e1fdd5de6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.366 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9819900a-9819-4896-a490-54f445126d24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.441 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] resizing rbd image 9819900a-9819-4896-a490-54f445126d24_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.529 243456 DEBUG nova.objects.instance [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'migration_context' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 240 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 448 KiB/s wr, 8 op/s
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.549 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.550 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Ensure instance console log exists: /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.550 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.551 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:30 compute-0 nova_compute[243452]: 2026-02-28 10:26:30.551 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:26:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:26:31 compute-0 nova_compute[243452]: 2026-02-28 10:26:31.179 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Successfully created port: f56f6906-42d3-443a-bf7f-e2375d5bf698 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:26:31 compute-0 nova_compute[243452]: 2026-02-28 10:26:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:31 compute-0 ceph-mon[76304]: pgmap v1838: 305 pgs: 305 active+clean; 240 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 448 KiB/s wr, 8 op/s
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.267 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Successfully updated port: f56f6906-42d3-443a-bf7f-e2375d5bf698 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.289 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.290 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.290 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.446 243456 DEBUG nova.compute.manager [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.446 243456 DEBUG nova.compute.manager [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing instance network info cache due to event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.447 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:26:32 compute-0 nova_compute[243452]: 2026-02-28 10:26:32.506 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:26:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 10:26:33 compute-0 nova_compute[243452]: 2026-02-28 10:26:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:33 compute-0 ceph-mon[76304]: pgmap v1839: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 10:26:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:26:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.573 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.574 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.957 243456 DEBUG nova.network.neutron [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.978 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.979 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance network_info: |[{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.980 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.980 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.985 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start _get_guest_xml network_info=[{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.992 243456 WARNING nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.997 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:26:34 compute-0 nova_compute[243452]: 2026-02-28 10:26:34.998 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.008 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.009 243456 DEBUG nova.virt.libvirt.host [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.010 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.010 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.011 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.012 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.013 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.013 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.014 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.014 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.015 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.015 243456 DEBUG nova.virt.hardware [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.020 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:26:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633375911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.682 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:35 compute-0 ceph-mon[76304]: pgmap v1840: 305 pgs: 305 active+clean; 268 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Feb 28 10:26:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2633375911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.706 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:35 compute-0 nova_compute[243452]: 2026-02-28 10:26:35.711 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:26:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851073826' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.261 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.264 243456 DEBUG nova.virt.libvirt.vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:26:29Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.265 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.267 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.270 243456 DEBUG nova.objects.instance [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.290 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <uuid>9819900a-9819-4896-a490-54f445126d24</uuid>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <name>instance-00000076</name>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830</nova:name>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:26:34</nova:creationTime>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:user uuid="f9046d3ef932481196b82d5e1fdd5de6">tempest-TestSecurityGroupsBasicOps-1303513239-project-member</nova:user>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:project uuid="1eb3aafa74d14544ba5cde61223f2e23">tempest-TestSecurityGroupsBasicOps-1303513239</nova:project>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <nova:port uuid="f56f6906-42d3-443a-bf7f-e2375d5bf698">
Feb 28 10:26:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="serial">9819900a-9819-4896-a490-54f445126d24</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="uuid">9819900a-9819-4896-a490-54f445126d24</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9819900a-9819-4896-a490-54f445126d24_disk">
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9819900a-9819-4896-a490-54f445126d24_disk.config">
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:26:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c7:f8:51"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <target dev="tapf56f6906-42"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/console.log" append="off"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:26:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:26:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:26:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:26:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:26:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.292 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Preparing to wait for external event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.293 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.293 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.294 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.294 243456 DEBUG nova.virt.libvirt.vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:26:29Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.295 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.295 243456 DEBUG nova.network.os_vif_util [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.296 243456 DEBUG os_vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.297 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.297 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.298 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.303 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf56f6906-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.304 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf56f6906-42, col_values=(('external_ids', {'iface-id': 'f56f6906-42d3-443a-bf7f-e2375d5bf698', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:f8:51', 'vm-uuid': '9819900a-9819-4896-a490-54f445126d24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:36 compute-0 NetworkManager[49805]: <info>  [1772274396.3076] manager: (tapf56f6906-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.316 243456 INFO os_vif [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42')
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.379 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] No VIF found with MAC fa:16:3e:c7:f8:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.380 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Using config drive
Feb 28 10:26:36 compute-0 nova_compute[243452]: 2026-02-28 10:26:36.405 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1851073826' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.210 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Creating config drive at /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.216 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmq0l6j7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.374 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxmq0l6j7" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.416 243456 DEBUG nova.storage.rbd_utils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] rbd image 9819900a-9819-4896-a490-54f445126d24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.422 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config 9819900a-9819-4896-a490-54f445126d24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.602 243456 DEBUG oslo_concurrency.processutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config 9819900a-9819-4896-a490-54f445126d24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.603 243456 INFO nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deleting local config drive /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24/disk.config because it was imported into RBD.
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.609 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updated VIF entry in instance network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.610 243456 DEBUG nova.network.neutron [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.616 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.637 243456 DEBUG oslo_concurrency.lockutils [req-2504a021-31a7-4719-bc38-a0adaac94bb6 req-37086b7c-e713-43d0-9579-adb3cc993504 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.639 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.639 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.640 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.641 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.641 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:37 compute-0 NetworkManager[49805]: <info>  [1772274397.6570] manager: (tapf56f6906-42): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Feb 28 10:26:37 compute-0 kernel: tapf56f6906-42: entered promiscuous mode
Feb 28 10:26:37 compute-0 ovn_controller[146846]: 2026-02-28T10:26:37Z|01180|binding|INFO|Claiming lport f56f6906-42d3-443a-bf7f-e2375d5bf698 for this chassis.
Feb 28 10:26:37 compute-0 ovn_controller[146846]: 2026-02-28T10:26:37Z|01181|binding|INFO|f56f6906-42d3-443a-bf7f-e2375d5bf698: Claiming fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:37 compute-0 ovn_controller[146846]: 2026-02-28T10:26:37Z|01182|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 ovn-installed in OVS
Feb 28 10:26:37 compute-0 ovn_controller[146846]: 2026-02-28T10:26:37Z|01183|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 up in Southbound
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.668 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:f8:51 10.100.0.10'], port_security=['fa:16:3e:c7:f8:51 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9819900a-9819-4896-a490-54f445126d24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f56f6906-42d3-443a-bf7f-e2375d5bf698) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.669 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f56f6906-42d3-443a-bf7f-e2375d5bf698 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 bound to our chassis
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.671 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.666 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.667 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.668 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.668 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.669 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:37 compute-0 systemd-machined[209480]: New machine qemu-149-instance-00000076.
Feb 28 10:26:37 compute-0 systemd-udevd[344799]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.689 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f424eab4-a0d1-4e98-a7a1-e777bf95adee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 NetworkManager[49805]: <info>  [1772274397.7034] device (tapf56f6906-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:26:37 compute-0 NetworkManager[49805]: <info>  [1772274397.7038] device (tapf56f6906-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:26:37 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:37 compute-0 ceph-mon[76304]: pgmap v1841: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.725 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0588ab52-f7c6-4313-94e8-0cdc8e303444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.730 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1eb44b-2606-40e8-90ac-0aea114987a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.757 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[184a25a6-ec4e-47ba-bea8-9141e5d083ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.774 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e15e01d1-a921-4a1d-bc7b-a547a3d0e90e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 22720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344812, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f44e0070-862b-4731-b7aa-326490295cda]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586832, 'tstamp': 586832}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344813, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586835, 'tstamp': 586835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344813, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.791 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.792 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:37.794 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.920 243456 DEBUG nova.compute.manager [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.926 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.927 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.927 243456 DEBUG oslo_concurrency.lockutils [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:37 compute-0 nova_compute[243452]: 2026-02-28 10:26:37.928 243456 DEBUG nova.compute.manager [req-f0fa360f-c826-440c-b831-9fd06a0fdc60 req-ebe8bfad-0b72-4f04-a76d-d9c12c16f69e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Processing event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:26:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:26:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706289120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.243 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.334 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.335 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.338 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.338 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.458 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4583406, 9819900a-9819-4896-a490-54f445126d24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.459 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Started (Lifecycle Event)
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.462 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.466 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.470 243456 INFO nova.virt.libvirt.driver [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance spawned successfully.
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.471 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.534 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.536 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.92119009792805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.536 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.537 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.546 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.550 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.551 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.551 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.552 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.553 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.553 243456 DEBUG nova.virt.libvirt.driver [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.589 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.590 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4584951, 9819900a-9819-4896-a490-54f445126d24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.590 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Paused (Lifecycle Event)
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.643 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274398.4645734, 9819900a-9819-4896-a490-54f445126d24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.647 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Resumed (Lifecycle Event)
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.666 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.676 243456 INFO nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 8.77 seconds to spawn the instance on the hypervisor.
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.677 243456 DEBUG nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.683 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.690 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9819900a-9819-4896-a490-54f445126d24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.691 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.718 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:26:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1706289120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.752 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.793 243456 INFO nova.compute.manager [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 10.27 seconds to build instance.
Feb 28 10:26:38 compute-0 nova_compute[243452]: 2026-02-28 10:26:38.816 243456 DEBUG oslo_concurrency.lockutils [None req-1fe658ab-9a66-4c63-9173-c1dad17e0586 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:26:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368355386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.308 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.316 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.443 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.444 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:39 compute-0 nova_compute[243452]: 2026-02-28 10:26:39.445 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:39 compute-0 ceph-mon[76304]: pgmap v1842: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:26:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2368355386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.077 243456 DEBUG nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.078 243456 DEBUG oslo_concurrency.lockutils [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.079 243456 DEBUG nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:26:40 compute-0 nova_compute[243452]: 2026-02-28 10:26:40.079 243456 WARNING nova.compute.manager [req-7a56199d-8c59-4ba5-910f-6c86838706d3 req-8df5057f-7d6c-4e9c-9be7-e2412bb7aa3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received unexpected event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with vm_state active and task_state None.
Feb 28 10:26:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 882 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011207571623962963 of space, bias 1.0, pg target 0.33622714871888887 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928662385249375 of space, bias 1.0, pg target 0.7478598715574812 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.357177863543625e-07 of space, bias 4.0, pg target 0.000882861343625235 quantized to 16 (current 16)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:26:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:26:41 compute-0 nova_compute[243452]: 2026-02-28 10:26:41.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:41 compute-0 ceph-mon[76304]: pgmap v1843: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 882 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:26:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Feb 28 10:26:42 compute-0 nova_compute[243452]: 2026-02-28 10:26:42.904 243456 DEBUG nova.compute.manager [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:42 compute-0 nova_compute[243452]: 2026-02-28 10:26:42.904 243456 DEBUG nova.compute.manager [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing instance network info cache due to event network-changed-f56f6906-42d3-443a-bf7f-e2375d5bf698. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:26:42 compute-0 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:26:42 compute-0 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:26:42 compute-0 nova_compute[243452]: 2026-02-28 10:26:42.905 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Refreshing network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:26:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:43 compute-0 ceph-mon[76304]: pgmap v1844: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Feb 28 10:26:44 compute-0 nova_compute[243452]: 2026-02-28 10:26:44.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 405 KiB/s wr, 80 op/s
Feb 28 10:26:44 compute-0 nova_compute[243452]: 2026-02-28 10:26:44.932 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updated VIF entry in instance network info cache for port f56f6906-42d3-443a-bf7f-e2375d5bf698. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:26:44 compute-0 nova_compute[243452]: 2026-02-28 10:26:44.933 243456 DEBUG nova.network.neutron [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [{"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:44 compute-0 nova_compute[243452]: 2026-02-28 10:26:44.955 243456 DEBUG oslo_concurrency.lockutils [req-93a7b3ed-5cbe-4bbb-bff3-c38462b9f0a8 req-6a9056ba-a010-427c-9aff-cfb0dc089181 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9819900a-9819-4896-a490-54f445126d24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:26:45 compute-0 nova_compute[243452]: 2026-02-28 10:26:45.327 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:45 compute-0 nova_compute[243452]: 2026-02-28 10:26:45.328 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:26:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.592 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '25', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '24', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.594 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.595 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8f8968b3-f386-4803-8793-bde890c8208a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:26:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:45.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca5a08b-b913-4de6-bb3c-22b4f4711d7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:26:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:26:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:26:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:26:45 compute-0 ceph-mon[76304]: pgmap v1845: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 405 KiB/s wr, 80 op/s
Feb 28 10:26:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:26:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1182219108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:26:46 compute-0 nova_compute[243452]: 2026-02-28 10:26:46.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:46 compute-0 nova_compute[243452]: 2026-02-28 10:26:46.506 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:26:46 compute-0 nova_compute[243452]: 2026-02-28 10:26:46.507 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:26:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 407 KiB/s wr, 81 op/s
Feb 28 10:26:46 compute-0 nova_compute[243452]: 2026-02-28 10:26:46.544 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:26:47 compute-0 ceph-mon[76304]: pgmap v1846: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 407 KiB/s wr, 81 op/s
Feb 28 10:26:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 10:26:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:49 compute-0 nova_compute[243452]: 2026-02-28 10:26:49.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:49 compute-0 sudo[344901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:26:49 compute-0 sudo[344901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:49 compute-0 sudo[344901]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:49 compute-0 sudo[344926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:26:49 compute-0 sudo[344926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:49 compute-0 ovn_controller[146846]: 2026-02-28T10:26:49Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 10:26:49 compute-0 ovn_controller[146846]: 2026-02-28T10:26:49Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:f8:51 10.100.0.10
Feb 28 10:26:49 compute-0 ceph-mon[76304]: pgmap v1847: 305 pgs: 305 active+clean; 279 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 10:26:50 compute-0 sudo[344926]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:26:50 compute-0 sudo[344982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:26:50 compute-0 sudo[344982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:50 compute-0 sudo[344982]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:50 compute-0 sudo[345007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:26:50 compute-0 sudo[345007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 300 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 98 op/s
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.691338997 +0000 UTC m=+0.048566712 container create 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:26:50 compute-0 systemd[1]: Started libpod-conmon-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope.
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.667606627 +0000 UTC m=+0.024834342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.801921051 +0000 UTC m=+0.159148746 container init 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.809577117 +0000 UTC m=+0.166804812 container start 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:26:50 compute-0 kind_franklin[345062]: 167 167
Feb 28 10:26:50 compute-0 systemd[1]: libpod-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope: Deactivated successfully.
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.826015141 +0000 UTC m=+0.183242926 container attach 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.826612458 +0000 UTC m=+0.183840193 container died 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:26:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5432456ae7524f33bc46741e8b5f0cde847553a89aab70fb2cefb98a22590adc-merged.mount: Deactivated successfully.
Feb 28 10:26:50 compute-0 podman[345045]: 2026-02-28 10:26:50.890152693 +0000 UTC m=+0.247380428 container remove 107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:26:50 compute-0 systemd[1]: libpod-conmon-107b9407af0f2c334193ed09527326cdf18bc1fb358a65873879df3a6dac185d.scope: Deactivated successfully.
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.07167766 +0000 UTC m=+0.065517191 container create 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:26:51 compute-0 systemd[1]: Started libpod-conmon-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope.
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.025683991 +0000 UTC m=+0.019523502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.195269241 +0000 UTC m=+0.189108742 container init 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.211303024 +0000 UTC m=+0.205142515 container start 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.221579424 +0000 UTC m=+0.215418915 container attach 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:26:51 compute-0 nova_compute[243452]: 2026-02-28 10:26:51.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:51 compute-0 boring_cartwright[345106]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:26:51 compute-0 boring_cartwright[345106]: --> All data devices are unavailable
Feb 28 10:26:51 compute-0 systemd[1]: libpod-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope: Deactivated successfully.
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.650053996 +0000 UTC m=+0.643893487 container died 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f911dd2e33781fe6f3eb3c999098c726b602eb00e9857f1b6efca13f59e6f7-merged.mount: Deactivated successfully.
Feb 28 10:26:51 compute-0 podman[345089]: 2026-02-28 10:26:51.706381247 +0000 UTC m=+0.700220738 container remove 1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:26:51 compute-0 systemd[1]: libpod-conmon-1a5973fb5a2103dacb9d43d3d66d30d05c0630491abf1fca62619b5901c18238.scope: Deactivated successfully.
Feb 28 10:26:51 compute-0 sudo[345007]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:51 compute-0 sudo[345140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:26:51 compute-0 sudo[345140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:51 compute-0 sudo[345140]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:51 compute-0 sudo[345165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:26:51 compute-0 sudo[345165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.206326198 +0000 UTC m=+0.059846831 container create b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 10:26:52 compute-0 systemd[1]: Started libpod-conmon-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope.
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.170831965 +0000 UTC m=+0.024352648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.292313577 +0000 UTC m=+0.145834210 container init b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.298326687 +0000 UTC m=+0.151847310 container start b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:26:52 compute-0 upbeat_mclaren[345218]: 167 167
Feb 28 10:26:52 compute-0 systemd[1]: libpod-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope: Deactivated successfully.
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.3041236 +0000 UTC m=+0.157644243 container attach b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.304912963 +0000 UTC m=+0.158433616 container died b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:26:52 compute-0 ceph-mon[76304]: pgmap v1848: 305 pgs: 305 active+clean; 300 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 98 op/s
Feb 28 10:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-de3a9eaa23e0bd70bbac25e1b8673ddcf966525ee8f3f330b12b3122ad477566-merged.mount: Deactivated successfully.
Feb 28 10:26:52 compute-0 podman[345201]: 2026-02-28 10:26:52.385341254 +0000 UTC m=+0.238861847 container remove b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:26:52 compute-0 systemd[1]: libpod-conmon-b36997fb92f617dfec10cec5932cea5db67ef6064866c2466f2c7cb3fd9ff164.scope: Deactivated successfully.
Feb 28 10:26:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.552338511 +0000 UTC m=+0.045903207 container create 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:26:52 compute-0 systemd[1]: Started libpod-conmon-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope.
Feb 28 10:26:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.52823639 +0000 UTC m=+0.021800846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.643030663 +0000 UTC m=+0.136595109 container init 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.651607635 +0000 UTC m=+0.145172081 container start 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.655479704 +0000 UTC m=+0.149044190 container attach 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]: {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     "0": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "devices": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "/dev/loop3"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             ],
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_name": "ceph_lv0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_size": "21470642176",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "name": "ceph_lv0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "tags": {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.crush_device_class": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.encrypted": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_id": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.vdo": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.with_tpm": "0"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             },
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "vg_name": "ceph_vg0"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         }
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     ],
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     "1": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "devices": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "/dev/loop4"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             ],
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_name": "ceph_lv1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_size": "21470642176",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "name": "ceph_lv1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "tags": {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.crush_device_class": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.encrypted": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_id": "1",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.vdo": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.with_tpm": "0"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             },
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "vg_name": "ceph_vg1"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         }
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     ],
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     "2": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "devices": [
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "/dev/loop5"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             ],
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_name": "ceph_lv2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_size": "21470642176",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "name": "ceph_lv2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "tags": {
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.cluster_name": "ceph",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.crush_device_class": "",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.encrypted": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.objectstore": "bluestore",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osd_id": "2",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.vdo": "0",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:                 "ceph.with_tpm": "0"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             },
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "type": "block",
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:             "vg_name": "ceph_vg2"
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:         }
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]:     ]
Feb 28 10:26:52 compute-0 hardcore_grothendieck[345259]: }
Feb 28 10:26:52 compute-0 systemd[1]: libpod-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope: Deactivated successfully.
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.943680794 +0000 UTC m=+0.437245220 container died 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4effd0e224792818a01cd62a683819d501594f99759291e1c6824275b5169281-merged.mount: Deactivated successfully.
Feb 28 10:26:52 compute-0 podman[345242]: 2026-02-28 10:26:52.983770897 +0000 UTC m=+0.477335323 container remove 0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_grothendieck, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:26:53 compute-0 systemd[1]: libpod-conmon-0933fd6528a14873efa549414c935064255eaaa1f76ef65f29634358204277e2.scope: Deactivated successfully.
Feb 28 10:26:53 compute-0 sudo[345165]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:53 compute-0 sudo[345279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:26:53 compute-0 sudo[345279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:53 compute-0 sudo[345279]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:53 compute-0 sudo[345312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:26:53 compute-0 sudo[345312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:53 compute-0 podman[345303]: 2026-02-28 10:26:53.222804158 +0000 UTC m=+0.068670360 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:26:53 compute-0 podman[345304]: 2026-02-28 10:26:53.262133229 +0000 UTC m=+0.098393200 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.476051831 +0000 UTC m=+0.043289894 container create 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:26:53 compute-0 systemd[1]: Started libpod-conmon-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope.
Feb 28 10:26:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.460044359 +0000 UTC m=+0.027282692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.564636253 +0000 UTC m=+0.131874396 container init 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.574337347 +0000 UTC m=+0.141575400 container start 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.577807225 +0000 UTC m=+0.145045278 container attach 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:26:53 compute-0 determined_bardeen[345402]: 167 167
Feb 28 10:26:53 compute-0 systemd[1]: libpod-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope: Deactivated successfully.
Feb 28 10:26:53 compute-0 conmon[345402]: conmon 14aac7d29c81a119e21f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope/container/memory.events
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.582951791 +0000 UTC m=+0.150189844 container died 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec0c17ce5c385e31c99c70ac82e0d664d4cac40eb68ba66df552b5680ed1d7d-merged.mount: Deactivated successfully.
Feb 28 10:26:53 compute-0 podman[345385]: 2026-02-28 10:26:53.616284072 +0000 UTC m=+0.183522125 container remove 14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:26:53 compute-0 systemd[1]: libpod-conmon-14aac7d29c81a119e21f64ed77e381a1c77284acfd28158e1d9efdfc93d884b6.scope: Deactivated successfully.
Feb 28 10:26:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:53 compute-0 podman[345426]: 2026-02-28 10:26:53.793008113 +0000 UTC m=+0.054135350 container create 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:26:53 compute-0 systemd[1]: Started libpod-conmon-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope.
Feb 28 10:26:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:26:53 compute-0 podman[345426]: 2026-02-28 10:26:53.774961354 +0000 UTC m=+0.036088511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:26:53 compute-0 podman[345426]: 2026-02-28 10:26:53.892190465 +0000 UTC m=+0.153317632 container init 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:26:53 compute-0 podman[345426]: 2026-02-28 10:26:53.904464772 +0000 UTC m=+0.165591939 container start 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:26:53 compute-0 podman[345426]: 2026-02-28 10:26:53.908358292 +0000 UTC m=+0.169485459 container attach 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:26:54 compute-0 nova_compute[243452]: 2026-02-28 10:26:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:54 compute-0 ceph-mon[76304]: pgmap v1849: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 10:26:54 compute-0 lvm[345520]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:26:54 compute-0 lvm[345521]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:26:54 compute-0 lvm[345521]: VG ceph_vg1 finished
Feb 28 10:26:54 compute-0 lvm[345520]: VG ceph_vg0 finished
Feb 28 10:26:54 compute-0 lvm[345523]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:26:54 compute-0 lvm[345523]: VG ceph_vg2 finished
Feb 28 10:26:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Feb 28 10:26:54 compute-0 sweet_joliot[345442]: {}
Feb 28 10:26:54 compute-0 systemd[1]: libpod-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Deactivated successfully.
Feb 28 10:26:54 compute-0 podman[345426]: 2026-02-28 10:26:54.655964487 +0000 UTC m=+0.917091634 container died 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:26:54 compute-0 systemd[1]: libpod-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Consumed 1.106s CPU time.
Feb 28 10:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-91a71f05f8e70d056f5a8d668c545f1a6bb2e364357b878f78cd7e46d57a2ec7-merged.mount: Deactivated successfully.
Feb 28 10:26:54 compute-0 podman[345426]: 2026-02-28 10:26:54.695914465 +0000 UTC m=+0.957041642 container remove 6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:26:54 compute-0 systemd[1]: libpod-conmon-6ec1d3f12f3e341123f1c093942473ae7d1f3e94a7f87d09b7053a68182f474f.scope: Deactivated successfully.
Feb 28 10:26:54 compute-0 sudo[345312]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:26:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:26:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:54 compute-0 sudo[345538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:26:54 compute-0 sudo[345538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:26:54 compute-0 sudo[345538]: pam_unix(sudo:session): session closed for user root
Feb 28 10:26:55 compute-0 ceph-mon[76304]: pgmap v1850: 305 pgs: 305 active+clean; 306 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Feb 28 10:26:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.905 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.906 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.907 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.908 243456 INFO nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Terminating instance
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.909 243456 DEBUG nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:26:55 compute-0 kernel: tapf56f6906-42 (unregistering): left promiscuous mode
Feb 28 10:26:55 compute-0 NetworkManager[49805]: <info>  [1772274415.9618] device (tapf56f6906-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:26:55 compute-0 ovn_controller[146846]: 2026-02-28T10:26:55Z|01184|binding|INFO|Releasing lport f56f6906-42d3-443a-bf7f-e2375d5bf698 from this chassis (sb_readonly=0)
Feb 28 10:26:55 compute-0 ovn_controller[146846]: 2026-02-28T10:26:55Z|01185|binding|INFO|Setting lport f56f6906-42d3-443a-bf7f-e2375d5bf698 down in Southbound
Feb 28 10:26:55 compute-0 ovn_controller[146846]: 2026-02-28T10:26:55Z|01186|binding|INFO|Removing iface tapf56f6906-42 ovn-installed in OVS
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.969 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.974 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:f8:51 10.100.0.10'], port_security=['fa:16:3e:c7:f8:51 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9819900a-9819-4896-a490-54f445126d24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e747ccdf-f0bc-44cb-9277-bfb9c420d082', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f56f6906-42d3-443a-bf7f-e2375d5bf698) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:55 compute-0 nova_compute[243452]: 2026-02-28 10:26:55.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.975 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f56f6906-42d3-443a-bf7f-e2375d5bf698 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 unbound from our chassis
Feb 28 10:26:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.976 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3c17451-fecb-4c3a-bc65-efba96c6e655
Feb 28 10:26:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:55.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[acc9a3af-1734-4466-aaad-260029dd879a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.020 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1bef5c-bf06-4a7b-8319-67edc1873caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Feb 28 10:26:56 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 12.537s CPU time.
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.023 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ceee53-c7ac-4466-aa0b-f5839bc84810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 systemd-machined[209480]: Machine qemu-149-instance-00000076 terminated.
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.049 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb62eac-9f10-409f-8bef-991b59faf639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.064 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4081c457-bf84-42d3-8cb9-8f0307db7ce2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3c17451-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:c6:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586822, 'reachable_time': 22720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345572, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.080 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2178e797-abf6-4ae3-b713-029016e0ad31]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586832, 'tstamp': 586832}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345573, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa3c17451-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586835, 'tstamp': 586835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345573, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.081 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c17451-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.087 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.088 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3c17451-f0, col_values=(('external_ids', {'iface-id': '593df982-6822-42f7-9086-436943b49f8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:56.088 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.155 243456 INFO nova.virt.libvirt.driver [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Instance destroyed successfully.
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.155 243456 DEBUG nova.objects.instance [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 9819900a-9819-4896-a490-54f445126d24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.174 243456 DEBUG nova.virt.libvirt.vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-gen-1-2119771830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ge',id=118,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:26:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-7zjy0n29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:26:38Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=9819900a-9819-4896-a490-54f445126d24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.174 243456 DEBUG nova.network.os_vif_util [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "address": "fa:16:3e:c7:f8:51", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf56f6906-42", "ovs_interfaceid": "f56f6906-42d3-443a-bf7f-e2375d5bf698", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.175 243456 DEBUG nova.network.os_vif_util [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.175 243456 DEBUG os_vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf56f6906-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.183 243456 INFO os_vif [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:f8:51,bridge_name='br-int',has_traffic_filtering=True,id=f56f6906-42d3-443a-bf7f-e2375d5bf698,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf56f6906-42')
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.307 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.307 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG oslo_concurrency.lockutils [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.308 243456 DEBUG nova.compute.manager [req-6a2da3b8-413c-447e-b2c8-e4f55d4a0559 req-962cdefc-99fb-49af-a6c3-2ea3696d5f35 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-unplugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.483 243456 INFO nova.virt.libvirt.driver [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deleting instance files /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24_del
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.484 243456 INFO nova.virt.libvirt.driver [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Deletion of /var/lib/nova/instances/9819900a-9819-4896-a490-54f445126d24_del complete
Feb 28 10:26:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 312 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.571 243456 INFO nova.compute.manager [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG oslo.service.loopingcall [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:26:56 compute-0 nova_compute[243452]: 2026-02-28 10:26:56.572 243456 DEBUG nova.network.neutron [-] [instance: 9819900a-9819-4896-a490-54f445126d24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.338 243456 DEBUG nova.network.neutron [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.355 243456 INFO nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] Took 0.78 seconds to deallocate network for instance.
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.396 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.396 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.469 243456 DEBUG oslo_concurrency.processutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:26:57 compute-0 nova_compute[243452]: 2026-02-28 10:26:57.514 243456 DEBUG nova.compute.manager [req-c3b1765b-05a4-476e-af74-1c5c3a7c9456 req-f1390692-a1da-44f6-afed-e778b025f625 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-deleted-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:57 compute-0 ceph-mon[76304]: pgmap v1851: 305 pgs: 305 active+clean; 312 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:26:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.865 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:57.867 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:26:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423769872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.048 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8:0:1:f816:3eff:feb9:6cdb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9a075f4-1100-445c-9c83-55409e5c7a09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d61006dd-6d54-48e6-89d8-98f649887b90) old=Port_Binding(mac=['fa:16:3e:b9:6c:db 2001:db8::f816:3eff:feb9:6cdb'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:6cdb/64', 'neutron:device_id': 'ovnmeta-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f8968b3-f386-4803-8793-bde890c8208a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '32a39e8cde2f4524a7dd15e916525a83', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:26:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.050 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d61006dd-6d54-48e6-89d8-98f649887b90 in datapath 8f8968b3-f386-4803-8793-bde890c8208a updated
Feb 28 10:26:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.051 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f8968b3-f386-4803-8793-bde890c8208a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:26:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:26:58.052 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb84407a-a47e-4bca-b2f2-2ea52d816fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.063 243456 DEBUG oslo_concurrency.processutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.069 243456 DEBUG nova.compute.provider_tree [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.087 243456 DEBUG nova.scheduler.client.report [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.169 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.201 243456 INFO nova.scheduler.client.report [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 9819900a-9819-4896-a490-54f445126d24
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.273 243456 DEBUG oslo_concurrency.lockutils [None req-8ce0266d-e1cc-4477-84f6-9beb578b4c5b f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "9819900a-9819-4896-a490-54f445126d24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.427 243456 DEBUG nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.427 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9819900a-9819-4896-a490-54f445126d24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.429 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.430 243456 DEBUG oslo_concurrency.lockutils [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9819900a-9819-4896-a490-54f445126d24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.431 243456 DEBUG nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] No waiting events found dispatching network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:26:58 compute-0 nova_compute[243452]: 2026-02-28 10:26:58.432 243456 WARNING nova.compute.manager [req-5fd2eae0-e6d1-4c39-a42b-803db149c469 req-3691ccc3-8618-4d45-9f07-5cf851798c95 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9819900a-9819-4896-a490-54f445126d24] Received unexpected event network-vif-plugged-f56f6906-42d3-443a-bf7f-e2375d5bf698 for instance with vm_state deleted and task_state None.
Feb 28 10:26:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 278 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:26:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:26:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2423769872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:26:59 compute-0 nova_compute[243452]: 2026-02-28 10:26:59.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:26:59 compute-0 ceph-mon[76304]: pgmap v1852: 305 pgs: 305 active+clean; 278 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:01 compute-0 ceph-mon[76304]: pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.874 243456 DEBUG nova.compute.manager [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.874 243456 DEBUG nova.compute.manager [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing instance network info cache due to event network-changed-3e3d207f-3991-41f0-a55c-44cd0479e7f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.875 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Refreshing network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.925 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.926 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.927 243456 INFO nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Terminating instance
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.928 243456 DEBUG nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:27:01 compute-0 kernel: tap3e3d207f-39 (unregistering): left promiscuous mode
Feb 28 10:27:01 compute-0 NetworkManager[49805]: <info>  [1772274421.9718] device (tap3e3d207f-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:01 compute-0 ovn_controller[146846]: 2026-02-28T10:27:01Z|01187|binding|INFO|Releasing lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 from this chassis (sb_readonly=0)
Feb 28 10:27:01 compute-0 ovn_controller[146846]: 2026-02-28T10:27:01Z|01188|binding|INFO|Setting lport 3e3d207f-3991-41f0-a55c-44cd0479e7f8 down in Southbound
Feb 28 10:27:01 compute-0 ovn_controller[146846]: 2026-02-28T10:27:01Z|01189|binding|INFO|Removing iface tap3e3d207f-39 ovn-installed in OVS
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.984 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.987 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:84:10 10.100.0.3'], port_security=['fa:16:3e:27:84:10 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8f174807-b15f-4588-83e1-c6e2ef2c2b4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1eb3aafa74d14544ba5cde61223f2e23', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1940e3c-fc4a-432c-9cc8-be1893711fb9 f3ded7bc-f509-4fba-90dd-a4f12df4a200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d36dc637-8959-4ab7-8bcf-cb8fb989bc7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=3e3d207f-3991-41f0-a55c-44cd0479e7f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.988 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 3e3d207f-3991-41f0-a55c-44cd0479e7f8 in datapath a3c17451-fecb-4c3a-bc65-efba96c6e655 unbound from our chassis
Feb 28 10:27:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.989 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3c17451-fecb-4c3a-bc65-efba96c6e655, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:27:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.991 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edd93e0b-2991-45dc-80ad-8bd8b83e9c8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:01.992 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 namespace which is not needed anymore
Feb 28 10:27:01 compute-0 nova_compute[243452]: 2026-02-28 10:27:01.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:02 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Deactivated successfully.
Feb 28 10:27:02 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000075.scope: Consumed 13.359s CPU time.
Feb 28 10:27:02 compute-0 systemd-machined[209480]: Machine qemu-148-instance-00000075 terminated.
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : haproxy version is 2.8.14-c23fe91
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [NOTICE]   (344412) : path to executable is /usr/sbin/haproxy
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : Exiting Master process...
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : Exiting Master process...
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [ALERT]    (344412) : Current worker (344414) exited with code 143 (Terminated)
Feb 28 10:27:02 compute-0 neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655[344408]: [WARNING]  (344412) : All workers exited. Exiting... (0)
Feb 28 10:27:02 compute-0 systemd[1]: libpod-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope: Deactivated successfully.
Feb 28 10:27:02 compute-0 podman[345650]: 2026-02-28 10:27:02.112513673 +0000 UTC m=+0.044489767 container died d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d-userdata-shm.mount: Deactivated successfully.
Feb 28 10:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc280004ec33d4980617412d8deb4460aaa18af7068156edd80551a2bd739484-merged.mount: Deactivated successfully.
Feb 28 10:27:02 compute-0 podman[345650]: 2026-02-28 10:27:02.157574156 +0000 UTC m=+0.089550230 container cleanup d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.165 243456 INFO nova.virt.libvirt.driver [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance destroyed successfully.
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.165 243456 DEBUG nova.objects.instance [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lazy-loading 'resources' on Instance uuid 8f174807-b15f-4588-83e1-c6e2ef2c2b4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:02 compute-0 systemd[1]: libpod-conmon-d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d.scope: Deactivated successfully.
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.179 243456 DEBUG nova.virt.libvirt.vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:25:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1303513239-access_point-1565847176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1303513239-ac',id=117,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJI9lOGpAHM54rXvRMjq/x0g38COJ3hV6RmI46r9AJNCEFVJesQ2691y4U6QBJ7NeNJa4TpkakoTRePIkSoO6HE1KNS3jJXu2ARIkPnrG7XuVDaOVfaQKY/gyfQrOLusGQ==',key_name='tempest-TestSecurityGroupsBasicOps-1768974086',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:26:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1eb3aafa74d14544ba5cde61223f2e23',ramdisk_id='',reservation_id='r-899856xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1303513239',owner_user_name='tempest-TestSecurityGroupsBasicOps-1303513239-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:26:01Z,user_data=None,user_id='f9046d3ef932481196b82d5e1fdd5de6',uuid=8f174807-b15f-4588-83e1-c6e2ef2c2b4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.180 243456 DEBUG nova.network.os_vif_util [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converting VIF {"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.181 243456 DEBUG nova.network.os_vif_util [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.181 243456 DEBUG os_vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.183 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e3d207f-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.189 243456 INFO os_vif [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:84:10,bridge_name='br-int',has_traffic_filtering=True,id=3e3d207f-3991-41f0-a55c-44cd0479e7f8,network=Network(a3c17451-fecb-4c3a-bc65-efba96c6e655),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e3d207f-39')
Feb 28 10:27:02 compute-0 podman[345689]: 2026-02-28 10:27:02.239452429 +0000 UTC m=+0.058956557 container remove d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.245 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[df378473-6479-43de-8f2f-b4f52abf32f1]: (4, ('Sat Feb 28 10:27:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 (d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d)\nd3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d\nSat Feb 28 10:27:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 (d3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d)\nd3045499d5f99b6b460bb797482c46ad8d8502cc7f3882ff4de474e7b0d7552d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.247 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eabc7944-d6f9-4db3-bb94-db5a7973fd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.248 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c17451-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:02 compute-0 kernel: tapa3c17451-f0: left promiscuous mode
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62fbfc00-e1fc-4da1-8621-ef0018538fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfa83e7-5db2-4aa1-b1df-3b73850f17a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.280 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[32876ac4-a185-4a87-960c-abdd3cb376c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e478ef70-d3eb-4f48-9680-45b1f6f494b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586815, 'reachable_time': 29546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345722, 'error': None, 'target': 'ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.299 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3c17451-fecb-4c3a-bc65-efba96c6e655 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:27:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:02.299 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e9952a61-5054-464c-a1ff-a1b2efb819dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:02 compute-0 systemd[1]: run-netns-ovnmeta\x2da3c17451\x2dfecb\x2d4c3a\x2dbc65\x2defba96c6e655.mount: Deactivated successfully.
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.463 243456 INFO nova.virt.libvirt.driver [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deleting instance files /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_del
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.464 243456 INFO nova.virt.libvirt.driver [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deletion of /var/lib/nova/instances/8f174807-b15f-4588-83e1-c6e2ef2c2b4a_del complete
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.510 243456 INFO nova.compute.manager [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 0.58 seconds to destroy the instance on the hypervisor.
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.510 243456 DEBUG oslo.service.loopingcall [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.511 243456 DEBUG nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:27:02 compute-0 nova_compute[243452]: 2026-02-28 10:27:02.511 243456 DEBUG nova.network.neutron [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:27:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 771 KiB/s wr, 71 op/s
Feb 28 10:27:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:03 compute-0 ceph-mon[76304]: pgmap v1854: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 771 KiB/s wr, 71 op/s
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.995 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-unplugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.996 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG oslo_concurrency.lockutils [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 DEBUG nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] No waiting events found dispatching network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:03 compute-0 nova_compute[243452]: 2026-02-28 10:27:03.997 243456 WARNING nova.compute.manager [req-9811ce1f-f679-4c87-aaa4-4be8d9f67684 req-b6fadc07-278b-444a-aff7-dab72b534a58 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received unexpected event network-vif-plugged-3e3d207f-3991-41f0-a55c-44cd0479e7f8 for instance with vm_state active and task_state deleting.
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.018 243456 DEBUG nova.network.neutron [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.045 243456 INFO nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Took 1.53 seconds to deallocate network for instance.
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.136 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.137 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.208 243456 DEBUG oslo_concurrency.processutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.297 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updated VIF entry in instance network info cache for port 3e3d207f-3991-41f0-a55c-44cd0479e7f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.298 243456 DEBUG nova.network.neutron [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Updating instance_info_cache with network_info: [{"id": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "address": "fa:16:3e:27:84:10", "network": {"id": "a3c17451-fecb-4c3a-bc65-efba96c6e655", "bridge": "br-int", "label": "tempest-network-smoke--194999028", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1eb3aafa74d14544ba5cde61223f2e23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e3d207f-39", "ovs_interfaceid": "3e3d207f-3991-41f0-a55c-44cd0479e7f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.325 243456 DEBUG oslo_concurrency.lockutils [req-26c3671b-1f99-486e-96eb-7e54cc0549c0 req-4e548568-d68b-4117-a050-9cb398d076b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8f174807-b15f-4588-83e1-c6e2ef2c2b4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 35 KiB/s wr, 40 op/s
Feb 28 10:27:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24670061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.750 243456 DEBUG oslo_concurrency.processutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.755 243456 DEBUG nova.compute.provider_tree [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.780 243456 DEBUG nova.scheduler.client.report [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.810 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/24670061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.843 243456 INFO nova.scheduler.client.report [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Deleted allocations for instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a
Feb 28 10:27:04 compute-0 nova_compute[243452]: 2026-02-28 10:27:04.954 243456 DEBUG oslo_concurrency.lockutils [None req-7f672caa-feb8-4467-95a5-704a8869b347 f9046d3ef932481196b82d5e1fdd5de6 1eb3aafa74d14544ba5cde61223f2e23 - - default default] Lock "8f174807-b15f-4588-83e1-c6e2ef2c2b4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:05 compute-0 ceph-mon[76304]: pgmap v1855: 305 pgs: 305 active+clean; 201 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 35 KiB/s wr, 40 op/s
Feb 28 10:27:06 compute-0 nova_compute[243452]: 2026-02-28 10:27:06.135 243456 DEBUG nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Received event network-vif-deleted-3e3d207f-3991-41f0-a55c-44cd0479e7f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:06 compute-0 nova_compute[243452]: 2026-02-28 10:27:06.136 243456 INFO nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Neutron deleted interface 3e3d207f-3991-41f0-a55c-44cd0479e7f8; detaching it from the instance and deleting it from the info cache
Feb 28 10:27:06 compute-0 nova_compute[243452]: 2026-02-28 10:27:06.136 243456 DEBUG nova.network.neutron [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:27:06 compute-0 nova_compute[243452]: 2026-02-28 10:27:06.138 243456 DEBUG nova.compute.manager [req-2e0e10e2-f91d-4b37-8fd4-6a98a945fc28 req-a639fb9f-90d4-4724-947c-d2758e8eb971 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Detach interface failed, port_id=3e3d207f-3991-41f0-a55c-44cd0479e7f8, reason: Instance 8f174807-b15f-4588-83e1-c6e2ef2c2b4a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:27:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 153 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 36 KiB/s wr, 65 op/s
Feb 28 10:27:07 compute-0 nova_compute[243452]: 2026-02-28 10:27:07.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:07 compute-0 nova_compute[243452]: 2026-02-28 10:27:07.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:07 compute-0 ceph-mon[76304]: pgmap v1856: 305 pgs: 305 active+clean; 153 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 36 KiB/s wr, 65 op/s
Feb 28 10:27:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 28 10:27:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:09 compute-0 nova_compute[243452]: 2026-02-28 10:27:09.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:09 compute-0 ceph-mon[76304]: pgmap v1857: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 28 10:27:10 compute-0 nova_compute[243452]: 2026-02-28 10:27:10.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:10 compute-0 nova_compute[243452]: 2026-02-28 10:27:10.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Feb 28 10:27:11 compute-0 nova_compute[243452]: 2026-02-28 10:27:11.155 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274416.1532054, 9819900a-9819-4896-a490-54f445126d24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:11 compute-0 nova_compute[243452]: 2026-02-28 10:27:11.155 243456 INFO nova.compute.manager [-] [instance: 9819900a-9819-4896-a490-54f445126d24] VM Stopped (Lifecycle Event)
Feb 28 10:27:11 compute-0 nova_compute[243452]: 2026-02-28 10:27:11.176 243456 DEBUG nova.compute.manager [None req-b2e8f44d-8fd7-4dec-9d40-1b8aecee82fe - - - - - -] [instance: 9819900a-9819-4896-a490-54f445126d24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:11 compute-0 ceph-mon[76304]: pgmap v1858: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Feb 28 10:27:12 compute-0 nova_compute[243452]: 2026-02-28 10:27:12.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:27:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:13 compute-0 ceph-mon[76304]: pgmap v1859: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:27:14 compute-0 nova_compute[243452]: 2026-02-28 10:27:14.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 10:27:15 compute-0 ceph-mon[76304]: pgmap v1860: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 10:27:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 10:27:17 compute-0 nova_compute[243452]: 2026-02-28 10:27:17.163 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274422.1619728, 8f174807-b15f-4588-83e1-c6e2ef2c2b4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:17 compute-0 nova_compute[243452]: 2026-02-28 10:27:17.164 243456 INFO nova.compute.manager [-] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] VM Stopped (Lifecycle Event)
Feb 28 10:27:17 compute-0 nova_compute[243452]: 2026-02-28 10:27:17.187 243456 DEBUG nova.compute.manager [None req-d63962e3-d820-420f-b247-5fa2615b1b7e - - - - - -] [instance: 8f174807-b15f-4588-83e1-c6e2ef2c2b4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:17 compute-0 nova_compute[243452]: 2026-02-28 10:27:17.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:17 compute-0 ceph-mon[76304]: pgmap v1861: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 28 10:27:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:27:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:19 compute-0 nova_compute[243452]: 2026-02-28 10:27:19.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb 28 10:27:19 compute-0 ceph-mon[76304]: pgmap v1862: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:27:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb 28 10:27:19 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb 28 10:27:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 921 B/s wr, 10 op/s
Feb 28 10:27:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb 28 10:27:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb 28 10:27:20 compute-0 ceph-mon[76304]: osdmap e258: 3 total, 3 up, 3 in
Feb 28 10:27:20 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb 28 10:27:21 compute-0 nova_compute[243452]: 2026-02-28 10:27:21.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:21.155 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:21.155 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:27:21 compute-0 ceph-mon[76304]: pgmap v1864: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 921 B/s wr, 10 op/s
Feb 28 10:27:21 compute-0 ceph-mon[76304]: osdmap e259: 3 total, 3 up, 3 in
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.925870) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441925938, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1949, "num_deletes": 251, "total_data_size": 3061579, "memory_usage": 3115664, "flush_reason": "Manual Compaction"}
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441956260, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3007149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38133, "largest_seqno": 40081, "table_properties": {"data_size": 2998414, "index_size": 5357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18283, "raw_average_key_size": 20, "raw_value_size": 2980808, "raw_average_value_size": 3286, "num_data_blocks": 238, "num_entries": 907, "num_filter_entries": 907, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274244, "oldest_key_time": 1772274244, "file_creation_time": 1772274441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 30421 microseconds, and 5761 cpu microseconds.
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.956304) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3007149 bytes OK
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.956323) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957707) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957723) EVENT_LOG_v1 {"time_micros": 1772274441957718, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.957740) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3053349, prev total WAL file size 3053349, number of live WAL files 2.
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.958328) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2936KB)], [86(7824KB)]
Feb 28 10:27:21 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274441958353, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11019642, "oldest_snapshot_seqno": -1}
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6506 keys, 9336304 bytes, temperature: kUnknown
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442010911, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9336304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9292780, "index_size": 26134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165205, "raw_average_key_size": 25, "raw_value_size": 9176483, "raw_average_value_size": 1410, "num_data_blocks": 1048, "num_entries": 6506, "num_filter_entries": 6506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.011609) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9336304 bytes
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.012954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.7 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7024, records dropped: 518 output_compression: NoCompression
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.012987) EVENT_LOG_v1 {"time_micros": 1772274442012971, "job": 50, "event": "compaction_finished", "compaction_time_micros": 53053, "compaction_time_cpu_micros": 15457, "output_level": 6, "num_output_files": 1, "total_output_size": 9336304, "num_input_records": 7024, "num_output_records": 6506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442014379, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274442016010, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:21.958236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:27:22.016208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:27:22 compute-0 nova_compute[243452]: 2026-02-28 10:27:22.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Feb 28 10:27:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:23 compute-0 ceph-mon[76304]: pgmap v1866: 305 pgs: 305 active+clean; 153 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Feb 28 10:27:24 compute-0 nova_compute[243452]: 2026-02-28 10:27:24.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:24 compute-0 podman[345749]: 2026-02-28 10:27:24.152500812 +0000 UTC m=+0.082050019 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:27:24 compute-0 podman[345748]: 2026-02-28 10:27:24.17793207 +0000 UTC m=+0.107447136 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 28 10:27:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 KiB/s wr, 27 op/s
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.353 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.668 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.669 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.696 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.777 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.778 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.781 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.782 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.792 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.793 243456 INFO nova.compute.claims [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.798 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.900 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb 28 10:27:25 compute-0 nova_compute[243452]: 2026-02-28 10:27:25.953 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:25 compute-0 ceph-mon[76304]: pgmap v1867: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 KiB/s wr, 27 op/s
Feb 28 10:27:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb 28 10:27:25 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb 28 10:27:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:26.157 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050811230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.558 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 4.7 KiB/s wr, 60 op/s
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.566 243456 DEBUG nova.compute.provider_tree [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.590 243456 DEBUG nova.scheduler.client.report [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.626 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.627 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.636 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.645 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.646 243456 INFO nova.compute.claims [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.702 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.703 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.721 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.737 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.783 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.820 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.822 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.822 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating image(s)
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.844 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.868 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.892 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.896 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.958 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.959 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.960 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:26 compute-0 nova_compute[243452]: 2026-02-28 10:27:26.960 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb 28 10:27:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb 28 10:27:26 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb 28 10:27:26 compute-0 ceph-mon[76304]: osdmap e260: 3 total, 3 up, 3 in
Feb 28 10:27:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2050811230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.013 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.017 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 45cac133-9af0-462b-928c-05216ae1a68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.179 243456 DEBUG nova.policy [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ed826a3011e43d68aac3f001281440a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '859784d5f59f4db99fb375f781853be3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497929346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.329 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 45cac133-9af0-462b-928c-05216ae1a68e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.377 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.417 243456 DEBUG nova.compute.provider_tree [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.425 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] resizing rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.466 243456 DEBUG nova.scheduler.client.report [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.493 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.494 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.567 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.568 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.576 243456 DEBUG nova.objects.instance [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'migration_context' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.595 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.599 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Ensure instance console log exists: /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.601 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.619 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.719 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.721 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.721 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating image(s)
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.744 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.770 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.794 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.798 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.863 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.863 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.864 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.864 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.885 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:27 compute-0 nova_compute[243452]: 2026-02-28 10:27:27.889 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 502b3848-9702-4288-860e-d9b13ab3b047_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb 28 10:27:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb 28 10:27:27 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb 28 10:27:27 compute-0 ceph-mon[76304]: pgmap v1869: 305 pgs: 305 active+clean; 153 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 4.7 KiB/s wr, 60 op/s
Feb 28 10:27:27 compute-0 ceph-mon[76304]: osdmap e261: 3 total, 3 up, 3 in
Feb 28 10:27:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3497929346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.234 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 502b3848-9702-4288-860e-d9b13ab3b047_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.314 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] resizing rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.376 243456 DEBUG nova.policy [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f9dd03f07d754030bedc45ef75a2ceb8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b907eb5634054c23999a514f3cbfbc23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.421 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Successfully created port: 63cc9218-a429-4d50-9dad-e3849863cae1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.432 243456 DEBUG nova.objects.instance [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'migration_context' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.448 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.448 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Ensure instance console log exists: /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:28 compute-0 nova_compute[243452]: 2026-02-28 10:27:28.449 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 171 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Feb 28 10:27:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb 28 10:27:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb 28 10:27:28 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb 28 10:27:29 compute-0 ceph-mon[76304]: osdmap e262: 3 total, 3 up, 3 in
Feb 28 10:27:29 compute-0 ceph-mon[76304]: osdmap e263: 3 total, 3 up, 3 in
Feb 28 10:27:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:27:29
Feb 28 10:27:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:27:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:27:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'default.rgw.log']
Feb 28 10:27:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.167 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Successfully updated port: 63cc9218-a429-4d50-9dad-e3849863cae1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.189 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.190 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.190 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG nova.compute.manager [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-changed-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG nova.compute.manager [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Refreshing instance network info cache due to event network-changed-63cc9218-a429-4d50-9dad-e3849863cae1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.294 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.344 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:27:29 compute-0 nova_compute[243452]: 2026-02-28 10:27:29.524 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Successfully created port: a4f4f33b-d010-42c3-9963-b0602fd11558 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:27:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb 28 10:27:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb 28 10:27:30 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb 28 10:27:30 compute-0 ceph-mon[76304]: pgmap v1872: 305 pgs: 305 active+clean; 171 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 114 op/s
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 223 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 7.3 MiB/s wr, 310 op/s
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.659 243456 DEBUG nova.network.neutron [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.684 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.685 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance network_info: |[{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.685 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.686 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Refreshing network info cache for port 63cc9218-a429-4d50-9dad-e3849863cae1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.690 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start _get_guest_xml network_info=[{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.698 243456 WARNING nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.711 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.712 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.717 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.718 243456 DEBUG nova.virt.libvirt.host [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.719 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.719 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.720 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.720 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.721 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.722 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.723 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.723 243456 DEBUG nova.virt.hardware [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:27:30 compute-0 nova_compute[243452]: 2026-02-28 10:27:30.728 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:27:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:27:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb 28 10:27:31 compute-0 ceph-mon[76304]: osdmap e264: 3 total, 3 up, 3 in
Feb 28 10:27:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb 28 10:27:31 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.203 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Successfully updated port: a4f4f33b-d010-42c3-9963-b0602fd11558 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.229 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.229 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.230 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:27:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279637329' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.350 243456 DEBUG nova.compute.manager [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.351 243456 DEBUG nova.compute.manager [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing instance network info cache due to event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.351 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.361 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.395 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.401 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.460 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:27:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:27:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882330843' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.948 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.950 243456 DEBUG nova.virt.libvirt.vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453
076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:26Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.950 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.951 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.953 243456 DEBUG nova.objects.instance [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.970 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <uuid>45cac133-9af0-462b-928c-05216ae1a68e</uuid>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <name>instance-00000077</name>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:name>tempest-TestServerAdvancedOps-server-22377150</nova:name>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:27:30</nova:creationTime>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:user uuid="3ed826a3011e43d68aac3f001281440a">tempest-TestServerAdvancedOps-244453076-project-member</nova:user>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:project uuid="859784d5f59f4db99fb375f781853be3">tempest-TestServerAdvancedOps-244453076</nova:project>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <nova:port uuid="63cc9218-a429-4d50-9dad-e3849863cae1">
Feb 28 10:27:31 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <system>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="serial">45cac133-9af0-462b-928c-05216ae1a68e</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="uuid">45cac133-9af0-462b-928c-05216ae1a68e</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </system>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <os>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </os>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <features>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </features>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/45cac133-9af0-462b-928c-05216ae1a68e_disk">
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/45cac133-9af0-462b-928c-05216ae1a68e_disk.config">
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:27:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:26:2a:f0"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <target dev="tap63cc9218-a4"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/console.log" append="off"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <video>
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </video>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:27:31 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:27:31 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:27:31 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:27:31 compute-0 nova_compute[243452]: </domain>
Feb 28 10:27:31 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.971 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Preparing to wait for external event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.972 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.973 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.973 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.974 243456 DEBUG nova.virt.libvirt.vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:26Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.974 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.975 243456 DEBUG nova.network.os_vif_util [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.976 243456 DEBUG os_vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.978 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.984 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.985 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:31 compute-0 NetworkManager[49805]: <info>  [1772274451.9882] manager: (tap63cc9218-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:31 compute-0 nova_compute[243452]: 2026-02-28 10:27:31.994 243456 INFO os_vif [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')
Feb 28 10:27:32 compute-0 ceph-mon[76304]: pgmap v1875: 305 pgs: 305 active+clean; 223 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 7.3 MiB/s wr, 310 op/s
Feb 28 10:27:32 compute-0 ceph-mon[76304]: osdmap e265: 3 total, 3 up, 3 in
Feb 28 10:27:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4279637329' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2882330843' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.051 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] No VIF found with MAC fa:16:3e:26:2a:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.053 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Using config drive
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.079 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 6.8 MiB/s wr, 216 op/s
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.699 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Creating config drive at /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.703 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8kzndjyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.837 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8kzndjyj" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.861 243456 DEBUG nova.storage.rbd_utils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] rbd image 45cac133-9af0-462b-928c-05216ae1a68e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:32 compute-0 nova_compute[243452]: 2026-02-28 10:27:32.865 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config 45cac133-9af0-462b-928c-05216ae1a68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.011 243456 DEBUG oslo_concurrency.processutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config 45cac133-9af0-462b-928c-05216ae1a68e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.012 243456 INFO nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deleting local config drive /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e/disk.config because it was imported into RBD.
Feb 28 10:27:33 compute-0 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 10:27:33 compute-0 NetworkManager[49805]: <info>  [1772274453.0709] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Feb 28 10:27:33 compute-0 ovn_controller[146846]: 2026-02-28T10:27:33Z|01190|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 10:27:33 compute-0 ovn_controller[146846]: 2026-02-28T10:27:33Z|01191|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.081 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.082 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis
Feb 28 10:27:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.083 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:33.084 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[525f4e94-6b68-4c6f-8af6-e8d1170a8e3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:33 compute-0 systemd-udevd[346306]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:27:33 compute-0 NetworkManager[49805]: <info>  [1772274453.1086] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:27:33 compute-0 NetworkManager[49805]: <info>  [1772274453.1092] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:27:33 compute-0 ovn_controller[146846]: 2026-02-28T10:27:33Z|01192|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 10:27:33 compute-0 ovn_controller[146846]: 2026-02-28T10:27:33Z|01193|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:33 compute-0 systemd-machined[209480]: New machine qemu-150-instance-00000077.
Feb 28 10:27:33 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.163 243456 DEBUG nova.network.neutron [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance network_info: |[{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.183 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.186 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start _get_guest_xml network_info=[{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.192 243456 WARNING nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.196 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.197 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.248 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.libvirt.host [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.249 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.250 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.251 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.252 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.252 243456 DEBUG nova.virt.hardware [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.255 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.389 243456 DEBUG nova.compute.manager [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.390 243456 DEBUG oslo_concurrency.lockutils [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.391 243456 DEBUG nova.compute.manager [req-0505ce41-3ddf-4b07-848a-128e9c9c59c6 req-d846d547-9d73-4675-972b-15bfb5b20e17 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Processing event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.486 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updated VIF entry in instance network info cache for port 63cc9218-a429-4d50-9dad-e3849863cae1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.488 243456 DEBUG nova.network.neutron [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.508 243456 DEBUG oslo_concurrency.lockutils [req-984f20e5-a404-407d-a71e-5ac40d132cd5 req-b16f010a-d4fc-4bb4-944b-2af60b18033f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.753 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.756 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7523522, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.756 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)
Feb 28 10:27:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.763 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.770 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance spawned successfully.
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.771 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.781 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.786 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.802 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.803 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.804 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.805 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.806 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.807 243456 DEBUG nova.virt.libvirt.driver [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.814 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.815 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7526946, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:33 compute-0 nova_compute[243452]: 2026-02-28 10:27:33.816 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)
Feb 28 10:27:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:27:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144905592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb 28 10:27:34 compute-0 ceph-mon[76304]: pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 140 KiB/s rd, 6.8 MiB/s wr, 216 op/s
Feb 28 10:27:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4144905592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.063 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:34 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.067 243456 INFO nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 7.25 seconds to spawn the instance on the hypervisor.
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.068 243456 DEBUG nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.069 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.103 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.107 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.147 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274453.7628763, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.147 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.187 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.190 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.245 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.270 243456 INFO nova.compute.manager [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 8.52 seconds to build instance.
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.292 243456 DEBUG oslo_concurrency.lockutils [None req-a10cf8b3-5df0-4f0e-ace5-54508d5cca57 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 823 KiB/s rd, 5.3 MiB/s wr, 222 op/s
Feb 28 10:27:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:27:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48174800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.683 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.685 243456 DEBUG nova.virt.libvirt.vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.686 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.687 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.689 243456 DEBUG nova.objects.instance [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.712 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <uuid>502b3848-9702-4288-860e-d9b13ab3b047</uuid>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <name>instance-00000078</name>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:name>tempest-TestServerBasicOps-server-952479761</nova:name>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:27:33</nova:creationTime>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:user uuid="f9dd03f07d754030bedc45ef75a2ceb8">tempest-TestServerBasicOps-267277269-project-member</nova:user>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:project uuid="b907eb5634054c23999a514f3cbfbc23">tempest-TestServerBasicOps-267277269</nova:project>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <nova:port uuid="a4f4f33b-d010-42c3-9963-b0602fd11558">
Feb 28 10:27:34 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <system>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="serial">502b3848-9702-4288-860e-d9b13ab3b047</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="uuid">502b3848-9702-4288-860e-d9b13ab3b047</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </system>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <os>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </os>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <features>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </features>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/502b3848-9702-4288-860e-d9b13ab3b047_disk">
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </source>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/502b3848-9702-4288-860e-d9b13ab3b047_disk.config">
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </source>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:27:34 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:be:1f:18"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <target dev="tapa4f4f33b-d0"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/console.log" append="off"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <video>
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </video>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:27:34 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:27:34 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:27:34 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:27:34 compute-0 nova_compute[243452]: </domain>
Feb 28 10:27:34 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.713 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Preparing to wait for external event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.714 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.714 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.715 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.716 243456 DEBUG nova.virt.libvirt.vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:27:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.716 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.718 243456 DEBUG nova.network.os_vif_util [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.718 243456 DEBUG os_vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.720 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.720 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.725 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.726 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4f4f33b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.727 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4f4f33b-d0, col_values=(('external_ids', {'iface-id': 'a4f4f33b-d010-42c3-9963-b0602fd11558', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:1f:18', 'vm-uuid': '502b3848-9702-4288-860e-d9b13ab3b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.729 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:34 compute-0 NetworkManager[49805]: <info>  [1772274454.7304] manager: (tapa4f4f33b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.734 243456 INFO os_vif [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0')
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.791 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.792 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.792 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] No VIF found with MAC fa:16:3e:be:1f:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.793 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Using config drive
Feb 28 10:27:34 compute-0 nova_compute[243452]: 2026-02-28 10:27:34.822 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb 28 10:27:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb 28 10:27:35 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb 28 10:27:35 compute-0 ceph-mon[76304]: osdmap e266: 3 total, 3 up, 3 in
Feb 28 10:27:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/48174800' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.414 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updated VIF entry in instance network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.415 243456 DEBUG nova.network.neutron [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.444 243456 DEBUG oslo_concurrency.lockutils [req-7d674231-13f4-42ea-b5c2-6e1cc465faac req-85bfa5f1-8ef9-4219-be7a-64c5f0db5fe9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.525 243456 DEBUG nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.526 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.527 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.527 243456 DEBUG oslo_concurrency.lockutils [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.528 243456 DEBUG nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.528 243456 WARNING nova.compute.manager [req-5795ff3f-0e56-4b47-9185-a43645d0b336 req-35d91a37-91c4-4e60-b707-040521e4703a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state None.
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.560 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.561 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.561 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.562 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.582 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Creating config drive at /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.591 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmo063itf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.736 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmo063itf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.772 243456 DEBUG nova.storage.rbd_utils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] rbd image 502b3848-9702-4288-860e-d9b13ab3b047_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.776 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config 502b3848-9702-4288-860e-d9b13ab3b047_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.920 243456 DEBUG oslo_concurrency.processutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config 502b3848-9702-4288-860e-d9b13ab3b047_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.921 243456 INFO nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deleting local config drive /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047/disk.config because it was imported into RBD.
Feb 28 10:27:35 compute-0 kernel: tapa4f4f33b-d0: entered promiscuous mode
Feb 28 10:27:35 compute-0 NetworkManager[49805]: <info>  [1772274455.9722] manager: (tapa4f4f33b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.974 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:35 compute-0 ovn_controller[146846]: 2026-02-28T10:27:35Z|01194|binding|INFO|Claiming lport a4f4f33b-d010-42c3-9963-b0602fd11558 for this chassis.
Feb 28 10:27:35 compute-0 ovn_controller[146846]: 2026-02-28T10:27:35Z|01195|binding|INFO|a4f4f33b-d010-42c3-9963-b0602fd11558: Claiming fa:16:3e:be:1f:18 10.100.0.3
Feb 28 10:27:35 compute-0 nova_compute[243452]: 2026-02-28 10:27:35.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:35 compute-0 NetworkManager[49805]: <info>  [1772274455.9912] device (tapa4f4f33b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:27:35 compute-0 NetworkManager[49805]: <info>  [1772274455.9944] device (tapa4f4f33b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:27:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.992 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:1f:18 10.100.0.3'], port_security=['fa:16:3e:be:1f:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '502b3848-9702-4288-860e-d9b13ab3b047', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d755deef-15d2-410a-9b1a-81df70c45c93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b907eb5634054c23999a514f3cbfbc23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fc58a0a-8070-4472-9d4a-0833b80c1776 d0518d2f-a440-4fc3-9c12-d503c74451c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f692542-8d39-4073-b698-c331b927e5a0, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4f4f33b-d010-42c3-9963-b0602fd11558) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.995 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4f4f33b-d010-42c3-9963-b0602fd11558 in datapath d755deef-15d2-410a-9b1a-81df70c45c93 bound to our chassis
Feb 28 10:27:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:35.997 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b950c6be-3992-4652-bfa3-f212e4feceeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.010 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd755deef-11 in ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:27:36 compute-0 systemd-machined[209480]: New machine qemu-151-instance-00000078.
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.021 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd755deef-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.021 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90100bb7-5fc6-4480-a92c-e42271b88542]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9be2c0-60cc-4a35-bfc8-db3a30f850d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:36 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Feb 28 10:27:36 compute-0 ovn_controller[146846]: 2026-02-28T10:27:36Z|01196|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 ovn-installed in OVS
Feb 28 10:27:36 compute-0 ovn_controller[146846]: 2026-02-28T10:27:36Z|01197|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 up in Southbound
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.040 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a53d241b-7223-4f2d-951f-f7310ab66b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.064 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9328ec45-c6cd-4e33-91c2-c8df99ac62c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb 28 10:27:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb 28 10:27:36 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb 28 10:27:36 compute-0 ceph-mon[76304]: pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 823 KiB/s rd, 5.3 MiB/s wr, 222 op/s
Feb 28 10:27:36 compute-0 ceph-mon[76304]: osdmap e267: 3 total, 3 up, 3 in
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.107 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a43da3ca-bce8-4939-8c41-1fa39ffc5294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.114 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15aeeadc-8d5c-460d-85ac-db39245ba014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 NetworkManager[49805]: <info>  [1772274456.1165] manager: (tapd755deef-10): new Veth device (/org/freedesktop/NetworkManager/Devices/494)
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.139 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9fdcfb-1125-4112-8e2c-13f7c0d0c9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.144 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd3253a-5f0a-412c-88f3-7911565910b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 systemd-udevd[346508]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:27:36 compute-0 NetworkManager[49805]: <info>  [1772274456.1788] device (tapd755deef-10): carrier: link connected
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.185 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[29439129-8c2e-4976-9b52-0814154d9755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.205 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5bb0d1-6538-4360-85d3-f03706d07fe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd755deef-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:97:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596346, 'reachable_time': 21250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346527, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e694fdcf-e45b-4ca1-9438-8579acc9680d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:97f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596346, 'tstamp': 596346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346543, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8eab2040-eec1-4593-bd9d-c753f01d9393]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd755deef-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:97:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596346, 'reachable_time': 21250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346554, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.293 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1161635a-33c0-4475-8f89-1fbeb697a375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.354 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.353527, 502b3848-9702-4288-860e-d9b13ab3b047 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.354 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Started (Lifecycle Event)
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62614d6e-8820-4a4a-b049-d3ac9323d2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd755deef-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.376 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd755deef-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:36 compute-0 kernel: tapd755deef-10: entered promiscuous mode
Feb 28 10:27:36 compute-0 NetworkManager[49805]: <info>  [1772274456.3790] manager: (tapd755deef-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.381 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd755deef-10, col_values=(('external_ids', {'iface-id': 'cbbfe533-6ee1-4103-ad47-26b6e9271250'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:36 compute-0 ovn_controller[146846]: 2026-02-28T10:27:36Z|01198|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.384 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.385 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[33e43b02-4384-4094-a5c7-4aeb333d4c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.386 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/d755deef-15d2-410a-9b1a-81df70c45c93.pid.haproxy
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:27:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:36.387 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'env', 'PROCESS_TAG=haproxy-d755deef-15d2-410a-9b1a-81df70c45c93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d755deef-15d2-410a-9b1a-81df70c45c93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.396 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.400 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.3537502, 502b3848-9702-4288-860e-d9b13ab3b047 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.401 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Paused (Lifecycle Event)
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.440 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.444 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.475 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:27:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 215 op/s
Feb 28 10:27:36 compute-0 podman[346603]: 2026-02-28 10:27:36.76159766 +0000 UTC m=+0.055546669 container create e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.776 243456 DEBUG nova.compute.manager [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.777 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.778 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.778 243456 DEBUG oslo_concurrency.lockutils [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.779 243456 DEBUG nova.compute.manager [req-d20066ea-efb5-4bfe-9022-26c7c046a2ea req-f9ce853b-64e3-4854-8d17-ce5f00b5ff72 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Processing event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.780 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.785 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274456.7845898, 502b3848-9702-4288-860e-d9b13ab3b047 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Resumed (Lifecycle Event)
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.787 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.794 243456 INFO nova.virt.libvirt.driver [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance spawned successfully.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.794 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:27:36 compute-0 systemd[1]: Started libpod-conmon-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope.
Feb 28 10:27:36 compute-0 podman[346603]: 2026-02-28 10:27:36.731351126 +0000 UTC m=+0.025300155 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:27:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7740129404eead1c6a7f09e8f10ed3317b876a311baeb37869bca63dec79ce5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.841 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.842 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.843 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.843 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.844 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.844 243456 DEBUG nova.virt.libvirt.driver [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.850 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:36 compute-0 podman[346603]: 2026-02-28 10:27:36.85186196 +0000 UTC m=+0.145810999 container init e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:27:36 compute-0 podman[346603]: 2026-02-28 10:27:36.856219543 +0000 UTC m=+0.150168552 container start e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:27:36 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : New worker (346624) forked
Feb 28 10:27:36 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : Loading success.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.881 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.908 243456 INFO nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 9.19 seconds to spawn the instance on the hypervisor.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.909 243456 DEBUG nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.972 243456 INFO nova.compute.manager [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 11.10 seconds to build instance.
Feb 28 10:27:36 compute-0 nova_compute[243452]: 2026-02-28 10:27:36.988 243456 DEBUG oslo_concurrency.lockutils [None req-8994a765-674d-4277-a0f8-07ee373d321e f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb 28 10:27:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb 28 10:27:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb 28 10:27:37 compute-0 ceph-mon[76304]: osdmap e268: 3 total, 3 up, 3 in
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.858 243456 DEBUG nova.objects.instance [None req-f6cb8fbb-c3f7-4b00-9dc8-471ed941566b 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.884 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274457.8840063, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.884 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.913 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.926 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:37 compute-0 nova_compute[243452]: 2026-02-28 10:27:37.958 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.111 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.136 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.137 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.138 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.139 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.161 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.162 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.163 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb 28 10:27:38 compute-0 ceph-mon[76304]: pgmap v1882: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 215 op/s
Feb 28 10:27:38 compute-0 ceph-mon[76304]: osdmap e269: 3 total, 3 up, 3 in
Feb 28 10:27:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb 28 10:27:38 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb 28 10:27:38 compute-0 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 10:27:38 compute-0 NetworkManager[49805]: <info>  [1772274458.3544] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:38 compute-0 ovn_controller[146846]: 2026-02-28T10:27:38Z|01199|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 10:27:38 compute-0 ovn_controller[146846]: 2026-02-28T10:27:38Z|01200|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 10:27:38 compute-0 ovn_controller[146846]: 2026-02-28T10:27:38Z|01201|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 10:27:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.364 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.368 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis
Feb 28 10:27:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.370 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:38.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08055500-cb8b-495a-a7ca-aca50d09eb22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:38 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 10:27:38 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 4.803s CPU time.
Feb 28 10:27:38 compute-0 systemd-machined[209480]: Machine qemu-150-instance-00000077 terminated.
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.534 243456 DEBUG nova.compute.manager [None req-f6cb8fbb-c3f7-4b00-9dc8-471ed941566b 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 90 KiB/s wr, 383 op/s
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.615 243456 DEBUG nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.618 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.619 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.619 243456 DEBUG oslo_concurrency.lockutils [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.620 243456 DEBUG nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.620 243456 WARNING nova.compute.manager [req-47a355e6-6671-40d0-a506-ecf465ce1207 req-94013442-1284-4f5b-bc2a-871e41ddd848 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state suspending.
Feb 28 10:27:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845595973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.791 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.886 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.889 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.896 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.896 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.902 243456 DEBUG nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.902 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.903 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.903 243456 DEBUG oslo_concurrency.lockutils [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.904 243456 DEBUG nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:38 compute-0 nova_compute[243452]: 2026-02-28 10:27:38.904 243456 WARNING nova.compute.manager [req-8a8542ed-1d44-4b6b-a464-c53db9d6ab68 req-6f051084-0873-4146-b245-ac38839275b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received unexpected event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with vm_state active and task_state None.
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.074 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3479MB free_disk=59.94584787450731GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.076 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.146 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 45cac133-9af0-462b-928c-05216ae1a68e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.147 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 502b3848-9702-4288-860e-d9b13ab3b047 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.148 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.148 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.152 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.200 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb 28 10:27:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb 28 10:27:39 compute-0 ceph-mon[76304]: osdmap e270: 3 total, 3 up, 3 in
Feb 28 10:27:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1845595973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:39 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180287' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.761 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.768 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.790 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.825 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:27:39 compute-0 nova_compute[243452]: 2026-02-28 10:27:39.826 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:40 compute-0 ceph-mon[76304]: pgmap v1885: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 90 KiB/s wr, 383 op/s
Feb 28 10:27:40 compute-0 ceph-mon[76304]: osdmap e271: 3 total, 3 up, 3 in
Feb 28 10:27:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3180287' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.562 243456 INFO nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Resuming
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.564 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'flavor' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 52 KiB/s wr, 398 op/s
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.609 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.609 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.610 243456 DEBUG nova.network.neutron [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:27:40 compute-0 NetworkManager[49805]: <info>  [1772274460.6156] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:40 compute-0 NetworkManager[49805]: <info>  [1772274460.6170] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:40 compute-0 ovn_controller[146846]: 2026-02-28T10:27:40Z|01202|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.746 243456 DEBUG nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.747 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.748 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.748 243456 DEBUG oslo_concurrency.lockutils [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.749 243456 DEBUG nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.749 243456 WARNING nova.compute.manager [req-35109c67-de74-4b9e-94e8-54676c8c3031 req-255655a8-c4dc-44d3-92c7-c7d821124b8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.821 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.974 243456 DEBUG nova.compute.manager [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.976 243456 DEBUG nova.compute.manager [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing instance network info cache due to event network-changed-a4f4f33b-d010-42c3-9963-b0602fd11558. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.976 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.977 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:40 compute-0 nova_compute[243452]: 2026-02-28 10:27:40.978 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Refreshing network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007102768922788274 of space, bias 1.0, pg target 0.2130830676836482 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493632680435017 of space, bias 1.0, pg target 0.7480898041305051 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36261163984665e-07 of space, bias 4.0, pg target 0.000883513396781598 quantized to 16 (current 16)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:27:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:27:42 compute-0 sshd-session[346697]: Received disconnect from 103.67.78.202 port 52166:11: Bye Bye [preauth]
Feb 28 10:27:42 compute-0 sshd-session[346697]: Disconnected from authenticating user root 103.67.78.202 port 52166 [preauth]
Feb 28 10:27:42 compute-0 ceph-mon[76304]: pgmap v1887: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 52 KiB/s wr, 398 op/s
Feb 28 10:27:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 41 KiB/s wr, 397 op/s
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.671 243456 DEBUG nova.network.neutron [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.706 243456 DEBUG oslo_concurrency.lockutils [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.713 243456 DEBUG nova.virt.libvirt.vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:38Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.714 243456 DEBUG nova.network.os_vif_util [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.715 243456 DEBUG nova.network.os_vif_util [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.716 243456 DEBUG os_vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.718 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.719 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.723 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.724 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.725 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.726 243456 INFO os_vif [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.744 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.748 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updated VIF entry in instance network info cache for port a4f4f33b-d010-42c3-9963-b0602fd11558. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.749 243456 DEBUG nova.network.neutron [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [{"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.773 243456 DEBUG oslo_concurrency.lockutils [req-aaf889e3-6da9-4685-a743-3065058fd400 req-f24bc271-4921-4057-a3b3-4258b1136fc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-502b3848-9702-4288-860e-d9b13ab3b047" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:42 compute-0 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 10:27:42 compute-0 NetworkManager[49805]: <info>  [1772274462.8281] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/498)
Feb 28 10:27:42 compute-0 ovn_controller[146846]: 2026-02-28T10:27:42Z|01203|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 10:27:42 compute-0 ovn_controller[146846]: 2026-02-28T10:27:42Z|01204|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.847 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.850 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis
Feb 28 10:27:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.851 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:42.853 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9540f8a3-1b44-4b29-9ac7-9c62292baff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:42 compute-0 systemd-udevd[346711]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:27:42 compute-0 ovn_controller[146846]: 2026-02-28T10:27:42Z|01205|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 10:27:42 compute-0 ovn_controller[146846]: 2026-02-28T10:27:42Z|01206|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:42 compute-0 nova_compute[243452]: 2026-02-28 10:27:42.867 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:42 compute-0 NetworkManager[49805]: <info>  [1772274462.8809] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:27:42 compute-0 systemd-machined[209480]: New machine qemu-152-instance-00000077.
Feb 28 10:27:42 compute-0 NetworkManager[49805]: <info>  [1772274462.8846] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:27:42 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000077.
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.470 243456 DEBUG nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.471 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.472 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.472 243456 DEBUG oslo_concurrency.lockutils [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.473 243456 DEBUG nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.474 243456 WARNING nova.compute.manager [req-88d8e949-2c04-44f5-98ca-c4bb13e81082 req-59a34910-6592-4317-aa03-c8815106f143 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.618 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 45cac133-9af0-462b-928c-05216ae1a68e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.619 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274463.6178133, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.619 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.634 243456 DEBUG nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.636 243456 DEBUG nova.objects.instance [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.656 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.664 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.678 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance running successfully.
Feb 28 10:27:43 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.683 243456 DEBUG nova.virt.libvirt.guest [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.684 243456 DEBUG nova.compute.manager [None req-e001eaff-4bba-455a-bba3-51986c841415 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.690 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.691 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274463.6228464, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.691 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.719 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:43 compute-0 nova_compute[243452]: 2026-02-28 10:27:43.724 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb 28 10:27:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb 28 10:27:43 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb 28 10:27:44 compute-0 nova_compute[243452]: 2026-02-28 10:27:44.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:44 compute-0 ceph-mon[76304]: pgmap v1888: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 41 KiB/s wr, 397 op/s
Feb 28 10:27:44 compute-0 ceph-mon[76304]: osdmap e272: 3 total, 3 up, 3 in
Feb 28 10:27:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.5 KiB/s wr, 258 op/s
Feb 28 10:27:44 compute-0 nova_compute[243452]: 2026-02-28 10:27:44.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.288 243456 DEBUG nova.objects.instance [None req-1b68bd6c-cd04-4c4b-88db-efb32514dc91 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.321 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274465.321013, 45cac133-9af0-462b-928c-05216ae1a68e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.322 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Paused (Lifecycle Event)
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.342 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.346 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.363 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 28 10:27:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:27:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:27:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:27:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:27:45 compute-0 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 10:27:45 compute-0 NetworkManager[49805]: <info>  [1772274465.6772] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.682 243456 DEBUG nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG oslo_concurrency.lockutils [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.683 243456 DEBUG nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.684 243456 WARNING nova.compute.manager [req-67fa20db-856f-4f3c-b030-25b7388c9d62 req-4fbb077b-e9f3-40ce-bce6-f4954a83c3ad 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state suspending.
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.690 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:45 compute-0 ovn_controller[146846]: 2026-02-28T10:27:45Z|01207|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 10:27:45 compute-0 ovn_controller[146846]: 2026-02-28T10:27:45Z|01208|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 10:27:45 compute-0 ovn_controller[146846]: 2026-02-28T10:27:45Z|01209|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.699 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.700 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.703 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis
Feb 28 10:27:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.705 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:45.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d94f5cdd-87bd-4ddb-940e-25bce2ef7555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:45 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 10:27:45 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000077.scope: Consumed 2.424s CPU time.
Feb 28 10:27:45 compute-0 systemd-machined[209480]: Machine qemu-152-instance-00000077 terminated.
Feb 28 10:27:45 compute-0 nova_compute[243452]: 2026-02-28 10:27:45.858 243456 DEBUG nova.compute.manager [None req-1b68bd6c-cd04-4c4b-88db-efb32514dc91 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:46 compute-0 ceph-mon[76304]: pgmap v1890: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 7.5 KiB/s wr, 258 op/s
Feb 28 10:27:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:27:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/591691403' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:27:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.9 KiB/s wr, 202 op/s
Feb 28 10:27:47 compute-0 sshd-session[346768]: Received disconnect from 103.67.78.202 port 45402:11: Bye Bye [preauth]
Feb 28 10:27:47 compute-0 sshd-session[346768]: Disconnected from authenticating user root 103.67.78.202 port 45402 [preauth]
Feb 28 10:27:47 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.794 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.794 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.795 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.795 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.796 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.796 243456 WARNING nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state None.
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.797 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.797 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.798 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.798 243456 DEBUG oslo_concurrency.lockutils [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.799 243456 DEBUG nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:47 compute-0 nova_compute[243452]: 2026-02-28 10:27:47.799 243456 WARNING nova.compute.manager [req-2b38a217-b55c-45df-956e-7b5676b6b73d req-cb5859df-18f4-42df-ba9e-b040189aff33 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state None.
Feb 28 10:27:48 compute-0 nova_compute[243452]: 2026-02-28 10:27:48.273 243456 INFO nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Resuming
Feb 28 10:27:48 compute-0 nova_compute[243452]: 2026-02-28 10:27:48.275 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'flavor' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:48 compute-0 nova_compute[243452]: 2026-02-28 10:27:48.325 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:27:48 compute-0 nova_compute[243452]: 2026-02-28 10:27:48.326 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquired lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:27:48 compute-0 nova_compute[243452]: 2026-02-28 10:27:48.326 243456 DEBUG nova.network.neutron [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:27:48 compute-0 ceph-mon[76304]: pgmap v1891: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.9 KiB/s wr, 202 op/s
Feb 28 10:27:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 KiB/s wr, 182 op/s
Feb 28 10:27:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb 28 10:27:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb 28 10:27:48 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb 28 10:27:48 compute-0 ovn_controller[146846]: 2026-02-28T10:27:48Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:1f:18 10.100.0.3
Feb 28 10:27:48 compute-0 ovn_controller[146846]: 2026-02-28T10:27:48Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:1f:18 10.100.0.3
Feb 28 10:27:49 compute-0 nova_compute[243452]: 2026-02-28 10:27:49.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:49 compute-0 nova_compute[243452]: 2026-02-28 10:27:49.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:49 compute-0 ceph-mon[76304]: pgmap v1892: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 KiB/s wr, 182 op/s
Feb 28 10:27:49 compute-0 ceph-mon[76304]: osdmap e273: 3 total, 3 up, 3 in
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.153 243456 DEBUG nova.network.neutron [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [{"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.178 243456 DEBUG oslo_concurrency.lockutils [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Releasing lock "refresh_cache-45cac133-9af0-462b-928c-05216ae1a68e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.187 243456 DEBUG nova.virt.libvirt.vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:45Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.188 243456 DEBUG nova.network.os_vif_util [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.189 243456 DEBUG nova.network.os_vif_util [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.190 243456 DEBUG os_vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.191 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.192 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63cc9218-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63cc9218-a4, col_values=(('external_ids', {'iface-id': '63cc9218-a429-4d50-9dad-e3849863cae1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:2a:f0', 'vm-uuid': '45cac133-9af0-462b-928c-05216ae1a68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.197 243456 INFO os_vif [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.213 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:50 compute-0 kernel: tap63cc9218-a4: entered promiscuous mode
Feb 28 10:27:50 compute-0 NetworkManager[49805]: <info>  [1772274470.2793] manager: (tap63cc9218-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Feb 28 10:27:50 compute-0 ovn_controller[146846]: 2026-02-28T10:27:50Z|01210|binding|INFO|Claiming lport 63cc9218-a429-4d50-9dad-e3849863cae1 for this chassis.
Feb 28 10:27:50 compute-0 ovn_controller[146846]: 2026-02-28T10:27:50Z|01211|binding|INFO|63cc9218-a429-4d50-9dad-e3849863cae1: Claiming fa:16:3e:26:2a:f0 10.100.0.11
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.281 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.291 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '7', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:50 compute-0 ovn_controller[146846]: 2026-02-28T10:27:50Z|01212|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 ovn-installed in OVS
Feb 28 10:27:50 compute-0 ovn_controller[146846]: 2026-02-28T10:27:50Z|01213|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 up in Southbound
Feb 28 10:27:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.294 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 bound to our chassis
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.295 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.295 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:50.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6f7726-ac66-449d-8fa0-4e91a135c50f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:50 compute-0 systemd-udevd[346806]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:27:50 compute-0 systemd-machined[209480]: New machine qemu-153-instance-00000077.
Feb 28 10:27:50 compute-0 NetworkManager[49805]: <info>  [1772274470.3261] device (tap63cc9218-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:27:50 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000077.
Feb 28 10:27:50 compute-0 NetworkManager[49805]: <info>  [1772274470.3292] device (tap63cc9218-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:27:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 267 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.0 MiB/s wr, 67 op/s
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.623 243456 DEBUG nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.625 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.626 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.627 243456 DEBUG oslo_concurrency.lockutils [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.628 243456 DEBUG nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.628 243456 WARNING nova.compute.manager [req-fdef7463-0344-4527-bd04-9517444d0923 req-a145479c-3ae0-41c5-ae0b-b13f2512c8c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state suspended and task_state resuming.
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.767 243456 DEBUG nova.virt.libvirt.host [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Removed pending event for 45cac133-9af0-462b-928c-05216ae1a68e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.768 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274470.7671397, 45cac133-9af0-462b-928c-05216ae1a68e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.768 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Started (Lifecycle Event)
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.780 243456 DEBUG nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.780 243456 DEBUG nova.objects.instance [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.792 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.798 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance running successfully.
Feb 28 10:27:50 compute-0 virtqemud[242837]: argument unsupported: QEMU guest agent is not configured
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.801 243456 DEBUG nova.virt.libvirt.guest [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.802 243456 DEBUG nova.compute.manager [None req-a828d3b7-644a-4d67-b44c-a24ec8200cef 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.812 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] During sync_power_state the instance has a pending task (resuming). Skip.
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.812 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274470.7731998, 45cac133-9af0-462b-928c-05216ae1a68e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.813 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Resumed (Lifecycle Event)
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.836 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:27:50 compute-0 nova_compute[243452]: 2026-02-28 10:27:50.841 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:27:51 compute-0 ceph-mon[76304]: pgmap v1894: 305 pgs: 305 active+clean; 267 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.0 MiB/s wr, 67 op/s
Feb 28 10:27:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 279 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 2.9 MiB/s wr, 94 op/s
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.762 243456 DEBUG nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.763 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG oslo_concurrency.lockutils [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.764 243456 DEBUG nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.765 243456 WARNING nova.compute.manager [req-08f09ffb-f10a-4a85-8a77-7f89bb0bc68a req-75cb070c-f530-4849-a8f0-fad2dc835e32 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state active and task_state deleting.
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.780 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.781 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.783 243456 INFO nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Terminating instance
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.784 243456 DEBUG nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:27:52 compute-0 kernel: tap63cc9218-a4 (unregistering): left promiscuous mode
Feb 28 10:27:52 compute-0 NetworkManager[49805]: <info>  [1772274472.8239] device (tap63cc9218-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:27:52 compute-0 ovn_controller[146846]: 2026-02-28T10:27:52Z|01214|binding|INFO|Releasing lport 63cc9218-a429-4d50-9dad-e3849863cae1 from this chassis (sb_readonly=0)
Feb 28 10:27:52 compute-0 ovn_controller[146846]: 2026-02-28T10:27:52Z|01215|binding|INFO|Setting lport 63cc9218-a429-4d50-9dad-e3849863cae1 down in Southbound
Feb 28 10:27:52 compute-0 ovn_controller[146846]: 2026-02-28T10:27:52Z|01216|binding|INFO|Removing iface tap63cc9218-a4 ovn-installed in OVS
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:52 compute-0 nova_compute[243452]: 2026-02-28 10:27:52.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.839 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:2a:f0 10.100.0.11'], port_security=['fa:16:3e:26:2a:f0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '45cac133-9af0-462b-928c-05216ae1a68e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d695f1-2ebc-41de-afde-dc0fb11aa027', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '859784d5f59f4db99fb375f781853be3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3574eda9-0858-4fa6-a1da-2908ff989c86', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ecc9bc11-7523-42ec-afec-b5bdb65bb267, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=63cc9218-a429-4d50-9dad-e3849863cae1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:27:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.841 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 63cc9218-a429-4d50-9dad-e3849863cae1 in datapath c9d695f1-2ebc-41de-afde-dc0fb11aa027 unbound from our chassis
Feb 28 10:27:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.841 156681 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d695f1-2ebc-41de-afde-dc0fb11aa027 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 28 10:27:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:52.842 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00473275-d88c-45fe-b2be-987127b2a9c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:27:52 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 28 10:27:52 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000077.scope: Consumed 2.426s CPU time.
Feb 28 10:27:52 compute-0 systemd-machined[209480]: Machine qemu-153-instance-00000077 terminated.
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.020 243456 INFO nova.virt.libvirt.driver [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Instance destroyed successfully.
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.021 243456 DEBUG nova.objects.instance [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lazy-loading 'resources' on Instance uuid 45cac133-9af0-462b-928c-05216ae1a68e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.047 243456 DEBUG nova.virt.libvirt.vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-22377150',display_name='tempest-TestServerAdvancedOps-server-22377150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-22377150',id=119,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='859784d5f59f4db99fb375f781853be3',ramdisk_id='',reservation_id='r-h74lcn4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-244453076',owner_user_name='tempest-TestServerAdvancedOps-244453076-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:50Z,user_data=None,user_id='3ed826a3011e43d68aac3f001281440a',uuid=45cac133-9af0-462b-928c-05216ae1a68e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.047 243456 DEBUG nova.network.os_vif_util [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converting VIF {"id": "63cc9218-a429-4d50-9dad-e3849863cae1", "address": "fa:16:3e:26:2a:f0", "network": {"id": "c9d695f1-2ebc-41de-afde-dc0fb11aa027", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-589059021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "859784d5f59f4db99fb375f781853be3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63cc9218-a4", "ovs_interfaceid": "63cc9218-a429-4d50-9dad-e3849863cae1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.048 243456 DEBUG nova.network.os_vif_util [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.049 243456 DEBUG os_vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.052 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63cc9218-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.188 243456 INFO os_vif [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:2a:f0,bridge_name='br-int',has_traffic_filtering=True,id=63cc9218-a429-4d50-9dad-e3849863cae1,network=Network(c9d695f1-2ebc-41de-afde-dc0fb11aa027),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63cc9218-a4')
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.493 243456 INFO nova.virt.libvirt.driver [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deleting instance files /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e_del
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.494 243456 INFO nova.virt.libvirt.driver [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deletion of /var/lib/nova/instances/45cac133-9af0-462b-928c-05216ae1a68e_del complete
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.569 243456 INFO nova.compute.manager [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.570 243456 DEBUG oslo.service.loopingcall [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.570 243456 DEBUG nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:27:53 compute-0 nova_compute[243452]: 2026-02-28 10:27:53.571 243456 DEBUG nova.network.neutron [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:27:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:53 compute-0 ceph-mon[76304]: pgmap v1895: 305 pgs: 305 active+clean; 279 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 2.9 MiB/s wr, 94 op/s
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 261 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.607 243456 DEBUG nova.network.neutron [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.629 243456 INFO nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Took 1.06 seconds to deallocate network for instance.
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.685 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.686 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.734 243456 DEBUG nova.compute.manager [req-bfac848b-220a-47ef-b2fe-91f5c4bcd5ae req-33e6f533-9aed-4919-82ad-536989a57462 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-deleted-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.775 243456 DEBUG oslo_concurrency.processutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.842 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.843 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 WARNING nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-unplugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state deleted and task_state None.
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.844 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "45cac133-9af0-462b-928c-05216ae1a68e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG oslo_concurrency.lockutils [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 DEBUG nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] No waiting events found dispatching network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:27:54 compute-0 nova_compute[243452]: 2026-02-28 10:27:54.845 243456 WARNING nova.compute.manager [req-cce11852-ae14-43ab-8e07-111c02772335 req-7baf745c-69c5-46e5-b514-075f2aa48cab 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Received unexpected event network-vif-plugged-63cc9218-a429-4d50-9dad-e3849863cae1 for instance with vm_state deleted and task_state None.
Feb 28 10:27:54 compute-0 sudo[346892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:27:54 compute-0 sudo[346892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:54 compute-0 sudo[346892]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:54 compute-0 sudo[346940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 10:27:54 compute-0 sudo[346940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:55 compute-0 podman[346917]: 2026-02-28 10:27:55.009518243 +0000 UTC m=+0.077255223 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 28 10:27:55 compute-0 podman[346916]: 2026-02-28 10:27:55.04198004 +0000 UTC m=+0.109920786 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:27:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:27:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41544991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:55 compute-0 sudo[346940]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:27:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.357 243456 DEBUG oslo_concurrency.processutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:27:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.362 243456 DEBUG nova.compute.provider_tree [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.386 243456 DEBUG nova.scheduler.client.report [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:27:55 compute-0 sudo[347025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:27:55 compute-0 sudo[347025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:55 compute-0 sudo[347025]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.419 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.451 243456 INFO nova.scheduler.client.report [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Deleted allocations for instance 45cac133-9af0-462b-928c-05216ae1a68e
Feb 28 10:27:55 compute-0 sudo[347050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:27:55 compute-0 sudo[347050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:55 compute-0 nova_compute[243452]: 2026-02-28 10:27:55.540 243456 DEBUG oslo_concurrency.lockutils [None req-7cfb91a6-fc91-48e6-9ce3-379e08366e2a 3ed826a3011e43d68aac3f001281440a 859784d5f59f4db99fb375f781853be3 - - default default] Lock "45cac133-9af0-462b-928c-05216ae1a68e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:55 compute-0 ceph-mon[76304]: pgmap v1896: 305 pgs: 305 active+clean; 261 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 10:27:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/41544991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:27:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:56 compute-0 sudo[347050]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:27:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:27:56 compute-0 sudo[347106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:27:56 compute-0 sudo[347106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:56 compute-0 sudo[347106]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:56 compute-0 sudo[347131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:27:56 compute-0 sudo[347131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.565624145 +0000 UTC m=+0.047973246 container create 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:27:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 251 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Feb 28 10:27:56 compute-0 systemd[1]: Started libpod-conmon-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope.
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.543438919 +0000 UTC m=+0.025788020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.666827874 +0000 UTC m=+0.149176985 container init 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.675864669 +0000 UTC m=+0.158213720 container start 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.679736078 +0000 UTC m=+0.162085179 container attach 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:27:56 compute-0 sad_gagarin[347184]: 167 167
Feb 28 10:27:56 compute-0 systemd[1]: libpod-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope: Deactivated successfully.
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.683998249 +0000 UTC m=+0.166347340 container died 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-180daf0a882041d23204523cb7e287395505e9aa79016ac5bfdf269882a57f17-merged.mount: Deactivated successfully.
Feb 28 10:27:56 compute-0 podman[347168]: 2026-02-28 10:27:56.735627207 +0000 UTC m=+0.217976308 container remove 35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:27:56 compute-0 systemd[1]: libpod-conmon-35ee8c34f71076950f49f3215f6210c08e0c92edede1057566ac581f292aa0fe.scope: Deactivated successfully.
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:27:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:27:56 compute-0 podman[347207]: 2026-02-28 10:27:56.942771038 +0000 UTC m=+0.051366072 container create f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:27:56 compute-0 systemd[1]: Started libpod-conmon-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope.
Feb 28 10:27:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:56.916966669 +0000 UTC m=+0.025561493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:57.030424343 +0000 UTC m=+0.139019177 container init f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:57.038590594 +0000 UTC m=+0.147185358 container start f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:57.042518865 +0000 UTC m=+0.151113659 container attach f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:27:57 compute-0 ovn_controller[146846]: 2026-02-28T10:27:57Z|01217|binding|INFO|Releasing lport cbbfe533-6ee1-4103-ad47-26b6e9271250 from this chassis (sb_readonly=0)
Feb 28 10:27:57 compute-0 nova_compute[243452]: 2026-02-28 10:27:57.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:57 compute-0 magical_mcnulty[347223]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:27:57 compute-0 magical_mcnulty[347223]: --> All data devices are unavailable
Feb 28 10:27:57 compute-0 systemd[1]: libpod-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope: Deactivated successfully.
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:57.515183045 +0000 UTC m=+0.623777859 container died f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a8ecdfebac881430dd3bd0f9d4206a3c0707accfc8921156a2dad7f507ef7e-merged.mount: Deactivated successfully.
Feb 28 10:27:57 compute-0 podman[347207]: 2026-02-28 10:27:57.554910458 +0000 UTC m=+0.663505212 container remove f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mcnulty, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:27:57 compute-0 systemd[1]: libpod-conmon-f2ed47a636b6017af23a79fbcb7a0b6bf9fd6b0350a1f68ea800104eedc12b9b.scope: Deactivated successfully.
Feb 28 10:27:57 compute-0 sudo[347131]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:57 compute-0 sudo[347255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:27:57 compute-0 sudo[347255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:57 compute-0 sudo[347255]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:57 compute-0 sudo[347280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:27:57 compute-0 sudo[347280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.866 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:27:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.868 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:27:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:58.054 157134 DEBUG eventlet.wsgi.server [-] (157134) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:58.056 157134 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: Accept: */*
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: Connection: close
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: Content-Type: text/plain
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: Host: 169.254.169.254
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: User-Agent: curl/7.84.0
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: X-Forwarded-For: 10.100.0.3
Feb 28 10:27:58 compute-0 ovn_metadata_agent[156634]: X-Ovn-Network-Id: d755deef-15d2-410a-9b1a-81df70c45c93 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.093353686 +0000 UTC m=+0.041475573 container create 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:27:58 compute-0 ceph-mon[76304]: pgmap v1897: 305 pgs: 305 active+clean; 251 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Feb 28 10:27:58 compute-0 systemd[1]: Started libpod-conmon-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope.
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.071315563 +0000 UTC m=+0.019437470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:58 compute-0 nova_compute[243452]: 2026-02-28 10:27:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.187559557 +0000 UTC m=+0.135681534 container init 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.19404499 +0000 UTC m=+0.142166907 container start 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.197577639 +0000 UTC m=+0.145699566 container attach 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:27:58 compute-0 reverent_boyd[347334]: 167 167
Feb 28 10:27:58 compute-0 systemd[1]: libpod-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope: Deactivated successfully.
Feb 28 10:27:58 compute-0 conmon[347334]: conmon 85ed8edb92b886a29f63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope/container/memory.events
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.20077744 +0000 UTC m=+0.148899367 container died 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-75b8eec30c7838712ca6e8ed1df0573ae3f1766fd39f0fd78594b6ed2741b453-merged.mount: Deactivated successfully.
Feb 28 10:27:58 compute-0 podman[347318]: 2026-02-28 10:27:58.242996482 +0000 UTC m=+0.191118399 container remove 85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:27:58 compute-0 systemd[1]: libpod-conmon-85ed8edb92b886a29f633a7236c9b8db5d6954366711e75fcd7b1fadbb172e29.scope: Deactivated successfully.
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.402102576 +0000 UTC m=+0.046299218 container create b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:27:58 compute-0 systemd[1]: Started libpod-conmon-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope.
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.380618279 +0000 UTC m=+0.024814971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.508997744 +0000 UTC m=+0.153194386 container init b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.51909498 +0000 UTC m=+0.163291622 container start b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.522764803 +0000 UTC m=+0.166961485 container attach b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:27:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 10:27:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]: {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     "0": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "devices": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "/dev/loop3"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             ],
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_name": "ceph_lv0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_size": "21470642176",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "name": "ceph_lv0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "tags": {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_name": "ceph",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.crush_device_class": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.encrypted": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.objectstore": "bluestore",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_id": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.vdo": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.with_tpm": "0"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             },
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "vg_name": "ceph_vg0"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         }
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     ],
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     "1": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "devices": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "/dev/loop4"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             ],
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_name": "ceph_lv1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_size": "21470642176",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "name": "ceph_lv1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "tags": {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_name": "ceph",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.crush_device_class": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.encrypted": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.objectstore": "bluestore",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_id": "1",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.vdo": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.with_tpm": "0"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             },
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "vg_name": "ceph_vg1"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         }
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     ],
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     "2": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "devices": [
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "/dev/loop5"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             ],
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_name": "ceph_lv2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_size": "21470642176",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "name": "ceph_lv2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "tags": {
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.cluster_name": "ceph",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.crush_device_class": "",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.encrypted": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.objectstore": "bluestore",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osd_id": "2",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.vdo": "0",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:                 "ceph.with_tpm": "0"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             },
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "type": "block",
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:             "vg_name": "ceph_vg2"
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:         }
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]:     ]
Feb 28 10:27:58 compute-0 gifted_sinoussi[347374]: }
Feb 28 10:27:58 compute-0 systemd[1]: libpod-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope: Deactivated successfully.
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.88269895 +0000 UTC m=+0.526895602 container died b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e27656fbe118dde0787d08dac61a5fa4e44287c4bc598a31c22a2bc669768b4-merged.mount: Deactivated successfully.
Feb 28 10:27:58 compute-0 podman[347358]: 2026-02-28 10:27:58.920575949 +0000 UTC m=+0.564772581 container remove b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_sinoussi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:27:58 compute-0 systemd[1]: libpod-conmon-b23db2b16195fef2a8fbe43d25a8ad5e5a16a4acdcbda79beda050e4fd6cd720.scope: Deactivated successfully.
Feb 28 10:27:58 compute-0 sudo[347280]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:59 compute-0 sudo[347395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:27:59 compute-0 sudo[347395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:59 compute-0 sudo[347395]: pam_unix(sudo:session): session closed for user root
Feb 28 10:27:59 compute-0 sudo[347420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:27:59 compute-0 sudo[347420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:27:59 compute-0 nova_compute[243452]: 2026-02-28 10:27:59.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.360799843 +0000 UTC m=+0.036312196 container create 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:27:59 compute-0 systemd[1]: Started libpod-conmon-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope.
Feb 28 10:27:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.421459687 +0000 UTC m=+0.096972070 container init 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.426762366 +0000 UTC m=+0.102274719 container start 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.429490324 +0000 UTC m=+0.105002707 container attach 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:27:59 compute-0 gallant_almeida[347474]: 167 167
Feb 28 10:27:59 compute-0 systemd[1]: libpod-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope: Deactivated successfully.
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.431779068 +0000 UTC m=+0.107291421 container died 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.346652964 +0000 UTC m=+0.022165327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0449ea6c78b4c2b5682346f18d804c7686521facf7059a4254a658c8ea4ca67c-merged.mount: Deactivated successfully.
Feb 28 10:27:59 compute-0 podman[347457]: 2026-02-28 10:27:59.474901036 +0000 UTC m=+0.150413389 container remove 46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:27:59 compute-0 systemd[1]: libpod-conmon-46648d26ba9738af9b5506dab2d01f9317795b7be77c3bdc342d485528f37128.scope: Deactivated successfully.
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.509 157134 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.510 157134 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4544373
Feb 28 10:27:59 compute-0 haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93[346624]: 10.100.0.3:56324 [28/Feb/2026:10:27:58.052] listener listener/metadata 0/0/0/1458/1458 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Feb 28 10:27:59 compute-0 podman[347498]: 2026-02-28 10:27:59.627860656 +0000 UTC m=+0.047454931 container create 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.635 157134 DEBUG eventlet.wsgi.server [-] (157134) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.636 157134 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: Accept: */*
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: Connection: close
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: Content-Length: 100
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: Content-Type: application/x-www-form-urlencoded
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: Host: 169.254.169.254
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: User-Agent: curl/7.84.0
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: X-Forwarded-For: 10.100.0.3
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: X-Ovn-Network-Id: d755deef-15d2-410a-9b1a-81df70c45c93
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Feb 28 10:27:59 compute-0 systemd[1]: Started libpod-conmon-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope.
Feb 28 10:27:59 compute-0 podman[347498]: 2026-02-28 10:27:59.60886585 +0000 UTC m=+0.028460155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:27:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:27:59 compute-0 podman[347498]: 2026-02-28 10:27:59.731339889 +0000 UTC m=+0.150934174 container init 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:27:59 compute-0 podman[347498]: 2026-02-28 10:27:59.73987564 +0000 UTC m=+0.159469905 container start 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:27:59 compute-0 podman[347498]: 2026-02-28 10:27:59.743988626 +0000 UTC m=+0.163582951 container attach 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.898 157134 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Feb 28 10:27:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:27:59.898 157134 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2619910
Feb 28 10:27:59 compute-0 haproxy-metadata-proxy-d755deef-15d2-410a-9b1a-81df70c45c93[346624]: 10.100.0.3:56328 [28/Feb/2026:10:27:59.634] listener listener/metadata 0/0/0/264/264 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Feb 28 10:28:00 compute-0 ceph-mon[76304]: pgmap v1898: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:00 compute-0 lvm[347592]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:28:00 compute-0 lvm[347592]: VG ceph_vg1 finished
Feb 28 10:28:00 compute-0 lvm[347593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:28:00 compute-0 lvm[347593]: VG ceph_vg0 finished
Feb 28 10:28:00 compute-0 lvm[347595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:28:00 compute-0 lvm[347595]: VG ceph_vg2 finished
Feb 28 10:28:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 28 10:28:00 compute-0 relaxed_lederberg[347514]: {}
Feb 28 10:28:00 compute-0 systemd[1]: libpod-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Deactivated successfully.
Feb 28 10:28:00 compute-0 systemd[1]: libpod-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Consumed 1.301s CPU time.
Feb 28 10:28:00 compute-0 podman[347498]: 2026-02-28 10:28:00.636494615 +0000 UTC m=+1.056088880 container died 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d9b2a0c3220da1068fa56c98fca11bf1544c12490c43afa3716b3ed15040ff-merged.mount: Deactivated successfully.
Feb 28 10:28:00 compute-0 podman[347498]: 2026-02-28 10:28:00.67739245 +0000 UTC m=+1.096986745 container remove 1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:28:00 compute-0 systemd[1]: libpod-conmon-1688e24ed9c0ef84a41d7da6321c0a521acc07d175e5a517d862156ae4fd0b9b.scope: Deactivated successfully.
Feb 28 10:28:00 compute-0 sudo[347420]: pam_unix(sudo:session): session closed for user root
Feb 28 10:28:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:28:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:28:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:28:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:28:00 compute-0 sudo[347611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:28:00 compute-0 sudo[347611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:28:00 compute-0 sudo[347611]: pam_unix(sudo:session): session closed for user root
Feb 28 10:28:01 compute-0 ceph-mon[76304]: pgmap v1899: 305 pgs: 305 active+clean; 233 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 28 10:28:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:28:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.922 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.923 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.924 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.925 243456 INFO nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Terminating instance
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.927 243456 DEBUG nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:28:01 compute-0 kernel: tapa4f4f33b-d0 (unregistering): left promiscuous mode
Feb 28 10:28:01 compute-0 NetworkManager[49805]: <info>  [1772274481.9847] device (tapa4f4f33b-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:01 compute-0 ovn_controller[146846]: 2026-02-28T10:28:01Z|01218|binding|INFO|Releasing lport a4f4f33b-d010-42c3-9963-b0602fd11558 from this chassis (sb_readonly=0)
Feb 28 10:28:01 compute-0 ovn_controller[146846]: 2026-02-28T10:28:01Z|01219|binding|INFO|Setting lport a4f4f33b-d010-42c3-9963-b0602fd11558 down in Southbound
Feb 28 10:28:01 compute-0 ovn_controller[146846]: 2026-02-28T10:28:01Z|01220|binding|INFO|Removing iface tapa4f4f33b-d0 ovn-installed in OVS
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:01 compute-0 nova_compute[243452]: 2026-02-28 10:28:01.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.003 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.004 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:1f:18 10.100.0.3'], port_security=['fa:16:3e:be:1f:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '502b3848-9702-4288-860e-d9b13ab3b047', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d755deef-15d2-410a-9b1a-81df70c45c93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b907eb5634054c23999a514f3cbfbc23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fc58a0a-8070-4472-9d4a-0833b80c1776 d0518d2f-a440-4fc3-9c12-d503c74451c2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f692542-8d39-4073-b698-c331b927e5a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a4f4f33b-d010-42c3-9963-b0602fd11558) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.006 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a4f4f33b-d010-42c3-9963-b0602fd11558 in datapath d755deef-15d2-410a-9b1a-81df70c45c93 unbound from our chassis
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.008 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d755deef-15d2-410a-9b1a-81df70c45c93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.010 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c690f62-edd7-4e64-8569-f6a4d19c9582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.010 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 namespace which is not needed anymore
Feb 28 10:28:02 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Feb 28 10:28:02 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 13.794s CPU time.
Feb 28 10:28:02 compute-0 systemd-machined[209480]: Machine qemu-151-instance-00000078 terminated.
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : haproxy version is 2.8.14-c23fe91
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [NOTICE]   (346622) : path to executable is /usr/sbin/haproxy
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : Exiting Master process...
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : Exiting Master process...
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [ALERT]    (346622) : Current worker (346624) exited with code 143 (Terminated)
Feb 28 10:28:02 compute-0 neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93[346618]: [WARNING]  (346622) : All workers exited. Exiting... (0)
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 systemd[1]: libpod-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope: Deactivated successfully.
Feb 28 10:28:02 compute-0 conmon[346618]: conmon e91af430cd6112e840b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope/container/memory.events
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 podman[347657]: 2026-02-28 10:28:02.166028986 +0000 UTC m=+0.044994732 container died e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.178 243456 INFO nova.virt.libvirt.driver [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Instance destroyed successfully.
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.179 243456 DEBUG nova.objects.instance [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lazy-loading 'resources' on Instance uuid 502b3848-9702-4288-860e-d9b13ab3b047 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2-userdata-shm.mount: Deactivated successfully.
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.200 243456 DEBUG nova.virt.libvirt.vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-952479761',display_name='tempest-TestServerBasicOps-server-952479761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-952479761',id=120,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxvwaoaZS6lxY+qm3SVwVdpmr4odFer5lT4S2h//UYF7wFrY/sNYSd6hzwRrmh2VB5KT4fELzlhq046tWMx92gixHEtTbsSYNmG1SF9Z5rksMEf2+FpLLjssyHNY9JdaA==',key_name='tempest-TestServerBasicOps-1169969291',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:27:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b907eb5634054c23999a514f3cbfbc23',ramdisk_id='',reservation_id='r-1m2lpc4b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-267277269',owner_user_name='tempest-TestServerBasicOps-267277269-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:27:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f9dd03f07d754030bedc45ef75a2ceb8',uuid=502b3848-9702-4288-860e-d9b13ab3b047,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": 
"fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7740129404eead1c6a7f09e8f10ed3317b876a311baeb37869bca63dec79ce5-merged.mount: Deactivated successfully.
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.201 243456 DEBUG nova.network.os_vif_util [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converting VIF {"id": "a4f4f33b-d010-42c3-9963-b0602fd11558", "address": "fa:16:3e:be:1f:18", "network": {"id": "d755deef-15d2-410a-9b1a-81df70c45c93", "bridge": "br-int", "label": "tempest-TestServerBasicOps-594899057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b907eb5634054c23999a514f3cbfbc23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4f4f33b-d0", "ovs_interfaceid": "a4f4f33b-d010-42c3-9963-b0602fd11558", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.202 243456 DEBUG nova.network.os_vif_util [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.202 243456 DEBUG os_vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.205 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4f4f33b-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:28:02 compute-0 podman[347657]: 2026-02-28 10:28:02.211620693 +0000 UTC m=+0.090586439 container cleanup e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.212 243456 INFO os_vif [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:1f:18,bridge_name='br-int',has_traffic_filtering=True,id=a4f4f33b-d010-42c3-9963-b0602fd11558,network=Network(d755deef-15d2-410a-9b1a-81df70c45c93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4f4f33b-d0')
Feb 28 10:28:02 compute-0 systemd[1]: libpod-conmon-e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2.scope: Deactivated successfully.
Feb 28 10:28:02 compute-0 podman[347704]: 2026-02-28 10:28:02.283184255 +0000 UTC m=+0.050273651 container remove e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.287 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65e99148-edc0-4212-958e-38cad2659787]: (4, ('Sat Feb 28 10:28:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 (e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2)\ne91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2\nSat Feb 28 10:28:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 (e91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2)\ne91af430cd6112e840b4abf38935905ffc9a9eb8454d98d522a5cefaf3b40cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9e22af-288c-44a8-9d6c-cbf9cdd111d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.291 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd755deef-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 kernel: tapd755deef-10: left promiscuous mode
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.301 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6f9fad-20f1-48cb-b713-198c3c830df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.315 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40558cc2-2a64-49b8-820b-42a26da3839d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.317 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caa8b186-f7a7-4b4f-8b99-0e3ea0c7117b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd23dd1-46f5-49bd-b2d6-74c467969d17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596338, 'reachable_time': 21578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347730, 'error': None, 'target': 'ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.331 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d755deef-15d2-410a-9b1a-81df70c45c93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:28:02 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:02.331 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1c15d443-f9f9-4160-96ca-5c2f44274d64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:02 compute-0 systemd[1]: run-netns-ovnmeta\x2dd755deef\x2d15d2\x2d410a\x2d9b1a\x2d81df70c45c93.mount: Deactivated successfully.
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.349 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.349 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG oslo_concurrency.lockutils [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.350 243456 DEBUG nova.compute.manager [req-31452c57-8083-4586-a6a2-05a65c8bc8ee req-3bae4e69-cea9-4383-82b7-718834bdbee3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-unplugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.510 243456 INFO nova.virt.libvirt.driver [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deleting instance files /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047_del
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.511 243456 INFO nova.virt.libvirt.driver [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deletion of /var/lib/nova/instances/502b3848-9702-4288-860e-d9b13ab3b047_del complete
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.577 243456 INFO nova.compute.manager [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG oslo.service.loopingcall [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:28:02 compute-0 nova_compute[243452]: 2026-02-28 10:28:02.578 243456 DEBUG nova.network.neutron [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:28:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 198 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 833 KiB/s wr, 66 op/s
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.493 243456 DEBUG nova.network.neutron [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.516 243456 INFO nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Took 0.94 seconds to deallocate network for instance.
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.614 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.614 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.653 243456 DEBUG nova.compute.manager [req-cf723f97-3336-46c6-97bd-921d1e001c2b req-77c5f323-6f96-4fab-b385-66f1e264f26c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-deleted-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:03 compute-0 nova_compute[243452]: 2026-02-28 10:28:03.691 243456 DEBUG oslo_concurrency.processutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:03 compute-0 ceph-mon[76304]: pgmap v1900: 305 pgs: 305 active+clean; 198 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 833 KiB/s wr, 66 op/s
Feb 28 10:28:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:28:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335962218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.227 243456 DEBUG oslo_concurrency.processutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.233 243456 DEBUG nova.compute.provider_tree [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.251 243456 DEBUG nova.scheduler.client.report [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.270 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.311 243456 INFO nova.scheduler.client.report [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Deleted allocations for instance 502b3848-9702-4288-860e-d9b13ab3b047
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.365 243456 DEBUG oslo_concurrency.lockutils [None req-075ba498-acb1-4a80-bfe3-0c9d3651e0d0 f9dd03f07d754030bedc45ef75a2ceb8 b907eb5634054c23999a514f3cbfbc23 - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.465 243456 DEBUG nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.465 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "502b3848-9702-4288-860e-d9b13ab3b047-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG oslo_concurrency.lockutils [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "502b3848-9702-4288-860e-d9b13ab3b047-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 DEBUG nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] No waiting events found dispatching network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:28:04 compute-0 nova_compute[243452]: 2026-02-28 10:28:04.467 243456 WARNING nova.compute.manager [req-316e7004-0927-4660-9868-da8f3123b5a0 req-c88ff67b-ca34-4621-9e95-d7fc40c25fce 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Received unexpected event network-vif-plugged-a4f4f33b-d010-42c3-9963-b0602fd11558 for instance with vm_state deleted and task_state None.
Feb 28 10:28:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 168 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 50 op/s
Feb 28 10:28:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1335962218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:05 compute-0 ceph-mon[76304]: pgmap v1901: 305 pgs: 305 active+clean; 168 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 50 op/s
Feb 28 10:28:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 153 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 47 op/s
Feb 28 10:28:07 compute-0 nova_compute[243452]: 2026-02-28 10:28:07.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:07 compute-0 ceph-mon[76304]: pgmap v1902: 305 pgs: 305 active+clean; 153 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 47 op/s
Feb 28 10:28:08 compute-0 nova_compute[243452]: 2026-02-28 10:28:08.018 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274473.0173407, 45cac133-9af0-462b-928c-05216ae1a68e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:28:08 compute-0 nova_compute[243452]: 2026-02-28 10:28:08.019 243456 INFO nova.compute.manager [-] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] VM Stopped (Lifecycle Event)
Feb 28 10:28:08 compute-0 nova_compute[243452]: 2026-02-28 10:28:08.041 243456 DEBUG nova.compute.manager [None req-db1ccbd7-b97a-4f48-9245-b42e19fc006d - - - - - -] [instance: 45cac133-9af0-462b-928c-05216ae1a68e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 45 op/s
Feb 28 10:28:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:09 compute-0 nova_compute[243452]: 2026-02-28 10:28:09.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:09 compute-0 ceph-mon[76304]: pgmap v1903: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 45 op/s
Feb 28 10:28:09 compute-0 nova_compute[243452]: 2026-02-28 10:28:09.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:09 compute-0 nova_compute[243452]: 2026-02-28 10:28:09.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Feb 28 10:28:11 compute-0 ceph-mon[76304]: pgmap v1904: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Feb 28 10:28:12 compute-0 nova_compute[243452]: 2026-02-28 10:28:12.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Feb 28 10:28:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:13 compute-0 ceph-mon[76304]: pgmap v1905: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Feb 28 10:28:14 compute-0 nova_compute[243452]: 2026-02-28 10:28:14.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 10:28:15 compute-0 ceph-mon[76304]: pgmap v1906: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Feb 28 10:28:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 8 op/s
Feb 28 10:28:17 compute-0 nova_compute[243452]: 2026-02-28 10:28:17.176 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274482.1736732, 502b3848-9702-4288-860e-d9b13ab3b047 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:28:17 compute-0 nova_compute[243452]: 2026-02-28 10:28:17.176 243456 INFO nova.compute.manager [-] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] VM Stopped (Lifecycle Event)
Feb 28 10:28:17 compute-0 nova_compute[243452]: 2026-02-28 10:28:17.202 243456 DEBUG nova.compute.manager [None req-43650dd4-500f-465b-a1f4-e72f7cd21541 - - - - - -] [instance: 502b3848-9702-4288-860e-d9b13ab3b047] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:17 compute-0 nova_compute[243452]: 2026-02-28 10:28:17.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:17 compute-0 ceph-mon[76304]: pgmap v1907: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 8 op/s
Feb 28 10:28:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:19 compute-0 nova_compute[243452]: 2026-02-28 10:28:19.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:19 compute-0 ceph-mon[76304]: pgmap v1908: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:21.237 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:28:21 compute-0 nova_compute[243452]: 2026-02-28 10:28:21.237 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:21.239 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:28:21 compute-0 ceph-mon[76304]: pgmap v1909: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:22 compute-0 nova_compute[243452]: 2026-02-28 10:28:22.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:23.241 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:23 compute-0 ceph-mon[76304]: pgmap v1910: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:24 compute-0 nova_compute[243452]: 2026-02-28 10:28:24.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:25 compute-0 podman[347756]: 2026-02-28 10:28:25.124054165 +0000 UTC m=+0.061693924 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:28:25 compute-0 podman[347757]: 2026-02-28 10:28:25.14372168 +0000 UTC m=+0.078947941 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:28:25 compute-0 ceph-mon[76304]: pgmap v1911: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:26 compute-0 nova_compute[243452]: 2026-02-28 10:28:26.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:27 compute-0 nova_compute[243452]: 2026-02-28 10:28:27.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:27 compute-0 ceph-mon[76304]: pgmap v1912: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:28:29
Feb 28 10:28:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:28:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:28:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta']
Feb 28 10:28:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:28:29 compute-0 nova_compute[243452]: 2026-02-28 10:28:29.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:29 compute-0 ceph-mon[76304]: pgmap v1913: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:28:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:28:31 compute-0 nova_compute[243452]: 2026-02-28 10:28:31.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:31 compute-0 nova_compute[243452]: 2026-02-28 10:28:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:31 compute-0 nova_compute[243452]: 2026-02-28 10:28:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:31 compute-0 nova_compute[243452]: 2026-02-28 10:28:31.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:31 compute-0 ceph-mon[76304]: pgmap v1914: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:32 compute-0 nova_compute[243452]: 2026-02-28 10:28:32.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:33 compute-0 ceph-mon[76304]: pgmap v1915: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:34 compute-0 nova_compute[243452]: 2026-02-28 10:28:34.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:35 compute-0 nova_compute[243452]: 2026-02-28 10:28:35.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:35 compute-0 ceph-mon[76304]: pgmap v1916: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:36 compute-0 nova_compute[243452]: 2026-02-28 10:28:36.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:36 compute-0 nova_compute[243452]: 2026-02-28 10:28:36.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:28:36 compute-0 nova_compute[243452]: 2026-02-28 10:28:36.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:28:36 compute-0 nova_compute[243452]: 2026-02-28 10:28:36.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:28:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:37 compute-0 nova_compute[243452]: 2026-02-28 10:28:37.225 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:37 compute-0 nova_compute[243452]: 2026-02-28 10:28:37.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:37 compute-0 nova_compute[243452]: 2026-02-28 10:28:37.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:28:37 compute-0 ceph-mon[76304]: pgmap v1917: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:28:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509776450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:38 compute-0 nova_compute[243452]: 2026-02-28 10:28:38.945 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:38 compute-0 sshd-session[347801]: Received disconnect from 103.217.144.161 port 44930:11: Bye Bye [preauth]
Feb 28 10:28:38 compute-0 sshd-session[347801]: Disconnected from authenticating user root 103.217.144.161 port 44930 [preauth]
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.142 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3703MB free_disk=59.9874847875908GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.220 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.243 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:39 compute-0 sshd-session[347803]: Received disconnect from 103.67.78.132 port 54526:11: Bye Bye [preauth]
Feb 28 10:28:39 compute-0 sshd-session[347803]: Disconnected from authenticating user root 103.67.78.132 port 54526 [preauth]
Feb 28 10:28:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:28:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1359593659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.830 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.839 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.861 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.890 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:28:39 compute-0 nova_compute[243452]: 2026-02-28 10:28:39.891 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:39 compute-0 ceph-mon[76304]: pgmap v1918: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/509776450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1359593659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3276966644185762e-05 of space, bias 1.0, pg target 0.003983089993255729 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493568655025092 of space, bias 1.0, pg target 0.7480705965075276 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.368976920658764e-07 of space, bias 4.0, pg target 0.0008842772304790517 quantized to 16 (current 16)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:28:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.840 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.861 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:28:41 compute-0 ceph-mon[76304]: pgmap v1919: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.938 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.939 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.949 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:28:41 compute-0 nova_compute[243452]: 2026-02-28 10:28:41.950 243456 INFO nova.compute.claims [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.056 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:28:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108711752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.703 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.710 243456 DEBUG nova.compute.provider_tree [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.734 243456 DEBUG nova.scheduler.client.report [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.773 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.774 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.861 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.862 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.883 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:28:42 compute-0 nova_compute[243452]: 2026-02-28 10:28:42.903 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:28:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1108711752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.007 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.009 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.009 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating image(s)
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.046 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.085 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.120 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.126 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.214 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.216 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.243 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.248 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 360effe7-8380-410d-a5b8-59c28fa4a75a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.513 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 360effe7-8380-410d-a5b8-59c28fa4a75a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.603 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] resizing rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.654 243456 DEBUG nova.policy [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f54beab12fce4ee8adf80742bf33b916', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '344dd946e14146ab93c01183964c71b3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.704 243456 DEBUG nova.objects.instance [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'migration_context' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.730 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.731 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Ensure instance console log exists: /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.732 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.733 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:43 compute-0 nova_compute[243452]: 2026-02-28 10:28:43.733 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:43 compute-0 ceph-mon[76304]: pgmap v1920: 305 pgs: 305 active+clean; 153 MiB data, 914 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:28:44 compute-0 nova_compute[243452]: 2026-02-28 10:28:44.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 165 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 396 KiB/s wr, 23 op/s
Feb 28 10:28:45 compute-0 nova_compute[243452]: 2026-02-28 10:28:45.345 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Successfully created port: 89352e9c-3fec-48bc-a264-6de98ec910c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:28:45 compute-0 sshd-session[348038]: Invalid user sol from 45.148.10.240 port 36326
Feb 28 10:28:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:28:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:28:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:28:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:28:45 compute-0 sshd-session[348038]: Connection closed by invalid user sol 45.148.10.240 port 36326 [preauth]
Feb 28 10:28:45 compute-0 ceph-mon[76304]: pgmap v1921: 305 pgs: 305 active+clean; 165 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 396 KiB/s wr, 23 op/s
Feb 28 10:28:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:28:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1102776912' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.470 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Successfully updated port: 89352e9c-3fec-48bc-a264-6de98ec910c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.486 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.486 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.487 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.567 243456 DEBUG nova.compute.manager [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.568 243456 DEBUG nova.compute.manager [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.568 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:28:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 179 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 629 KiB/s wr, 25 op/s
Feb 28 10:28:46 compute-0 nova_compute[243452]: 2026-02-28 10:28:46.668 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.873 243456 DEBUG nova.network.neutron [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.895 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance network_info: |[{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.896 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.898 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start _get_guest_xml network_info=[{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.902 243456 WARNING nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.906 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.907 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.host [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.910 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.911 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.912 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.913 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.913 243456 DEBUG nova.virt.hardware [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:28:47 compute-0 nova_compute[243452]: 2026-02-28 10:28:47.915 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:47 compute-0 ceph-mon[76304]: pgmap v1922: 305 pgs: 305 active+clean; 179 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 629 KiB/s wr, 25 op/s
Feb 28 10:28:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:28:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233154156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:28:48 compute-0 nova_compute[243452]: 2026-02-28 10:28:48.538 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:48 compute-0 nova_compute[243452]: 2026-02-28 10:28:48.563 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:48 compute-0 nova_compute[243452]: 2026-02-28 10:28:48.568 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:28:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2233154156' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:28:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:28:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26869461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.152 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.154 243456 DEBUG nova.virt.libvirt.vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:28:42Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.154 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.155 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.156 243456 DEBUG nova.objects.instance [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.172 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <uuid>360effe7-8380-410d-a5b8-59c28fa4a75a</uuid>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <name>instance-00000079</name>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:name>tempest-TestSnapshotPattern-server-1297881492</nova:name>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:28:47</nova:creationTime>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:user uuid="f54beab12fce4ee8adf80742bf33b916">tempest-TestSnapshotPattern-2121060882-project-member</nova:user>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:project uuid="344dd946e14146ab93c01183964c71b3">tempest-TestSnapshotPattern-2121060882</nova:project>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <nova:port uuid="89352e9c-3fec-48bc-a264-6de98ec910c3">
Feb 28 10:28:49 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="serial">360effe7-8380-410d-a5b8-59c28fa4a75a</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="uuid">360effe7-8380-410d-a5b8-59c28fa4a75a</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk">
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config">
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:28:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:ae:b8:e7"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <target dev="tap89352e9c-3f"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/console.log" append="off"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:28:49 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:28:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:28:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:28:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:28:49 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.173 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Preparing to wait for external event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.173 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.174 243456 DEBUG nova.virt.libvirt.vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:28:42Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG nova.network.os_vif_util [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.175 243456 DEBUG os_vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.176 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.177 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.180 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89352e9c-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.181 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89352e9c-3f, col_values=(('external_ids', {'iface-id': '89352e9c-3fec-48bc-a264-6de98ec910c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:b8:e7', 'vm-uuid': '360effe7-8380-410d-a5b8-59c28fa4a75a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.181 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:49 compute-0 NetworkManager[49805]: <info>  [1772274529.1837] manager: (tap89352e9c-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.190 243456 INFO os_vif [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f')
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.232 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No VIF found with MAC fa:16:3e:ae:b8:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.233 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Using config drive
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.258 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.265 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.265 243456 DEBUG nova.network.neutron [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.288 243456 DEBUG oslo_concurrency.lockutils [req-264e4bb1-bc8a-46db-8ae1-14e946626087 req-92567b36-95af-4609-8dce-f5ff992fb36d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.768 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Creating config drive at /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.774 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpejgn27iq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.919 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpejgn27iq" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.961 243456 DEBUG nova.storage.rbd_utils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:28:49 compute-0 nova_compute[243452]: 2026-02-28 10:28:49.967 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:28:49 compute-0 ceph-mon[76304]: pgmap v1923: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:28:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/26869461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.148 243456 DEBUG oslo_concurrency.processutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config 360effe7-8380-410d-a5b8-59c28fa4a75a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.149 243456 INFO nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deleting local config drive /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a/disk.config because it was imported into RBD.
Feb 28 10:28:50 compute-0 kernel: tap89352e9c-3f: entered promiscuous mode
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.2122] manager: (tap89352e9c-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Feb 28 10:28:50 compute-0 ovn_controller[146846]: 2026-02-28T10:28:50Z|01221|binding|INFO|Claiming lport 89352e9c-3fec-48bc-a264-6de98ec910c3 for this chassis.
Feb 28 10:28:50 compute-0 ovn_controller[146846]: 2026-02-28T10:28:50Z|01222|binding|INFO|89352e9c-3fec-48bc-a264-6de98ec910c3: Claiming fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.234 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b8:e7 10.100.0.7'], port_security=['fa:16:3e:ae:b8:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '360effe7-8380-410d-a5b8-59c28fa4a75a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=89352e9c-3fec-48bc-a264-6de98ec910c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.236 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 89352e9c-3fec-48bc-a264-6de98ec910c3 in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 bound to our chassis
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.237 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 10:28:50 compute-0 systemd-udevd[348175]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:28:50 compute-0 systemd-machined[209480]: New machine qemu-154-instance-00000079.
Feb 28 10:28:50 compute-0 ovn_controller[146846]: 2026-02-28T10:28:50Z|01223|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 ovn-installed in OVS
Feb 28 10:28:50 compute-0 ovn_controller[146846]: 2026-02-28T10:28:50Z|01224|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 up in Southbound
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.2561] device (tap89352e9c-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.2568] device (tap89352e9c-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.253 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31790f84-21c6-4065-861e-d09048d3d4c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.255 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap143add3f-f1 in ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:28:50 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-00000079.
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.258 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap143add3f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eea448e1-a407-4662-a8e8-6cf1063ca348]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.259 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db518d-1f64-4520-ae58-5d373186bf83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.271 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a441bb66-64bc-4cb0-9859-70e7bbcc583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[142291d0-35d5-4176-a913-4012696aefa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.327 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[942904e9-419f-4b6f-8c9a-320213f9b9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.333 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e281e7ff-3c2c-48d2-ab6e-406aaa471106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.3352] manager: (tap143add3f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.366 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a1195cd9-95b6-4325-8942-ce710607433e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.371 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c71c8efe-e601-41d5-b9c8-e25b7648692a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.3985] device (tap143add3f-f0): carrier: link connected
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.403 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[72c6cdfb-13f7-40ed-9e0f-072f0e11c6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.419 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2d50d662-5cf8-4f09-92b1-b2ed0094d245]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348208, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.435 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cbb174-06fc-4beb-bfaf-ccf8c56d2885]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:227'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603768, 'tstamp': 603768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348209, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.452 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7684ea9-cf81-474e-ae5e-4cd38c467e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348210, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.496 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9d282b-f7d3-4572-be70-6a9f7450696f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.563 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[251d4645-9429-4b50-86db-bce0c418d74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.565 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:50 compute-0 NetworkManager[49805]: <info>  [1772274530.5687] manager: (tap143add3f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Feb 28 10:28:50 compute-0 kernel: tap143add3f-f0: entered promiscuous mode
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.571 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:28:50 compute-0 ovn_controller[146846]: 2026-02-28T10:28:50Z|01225|binding|INFO|Releasing lport 57c5c2bd-8863-4c9b-bde1-322e4dae0a4d from this chassis (sb_readonly=0)
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.574 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.575 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[344ee94c-d8c3-4e4e-b96f-9880a95b7c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.577 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/143add3f-ffeb-40fb-88e5-0af28b700615.pid.haproxy
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:28:50 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:50.577 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'env', 'PROCESS_TAG=haproxy-143add3f-ffeb-40fb-88e5-0af28b700615', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/143add3f-ffeb-40fb-88e5-0af28b700615.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.629 243456 DEBUG nova.compute.manager [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.630 243456 DEBUG oslo_concurrency.lockutils [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:50 compute-0 nova_compute[243452]: 2026-02-28 10:28:50.631 243456 DEBUG nova.compute.manager [req-98029af8-ac9f-4373-812b-f234088b47f3 req-acaecab0-46ed-4da9-886f-7344109b0dfb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Processing event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:28:50 compute-0 podman[348242]: 2026-02-28 10:28:50.913479057 +0000 UTC m=+0.054948303 container create 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:28:50 compute-0 systemd[1]: Started libpod-conmon-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope.
Feb 28 10:28:50 compute-0 podman[348242]: 2026-02-28 10:28:50.880949039 +0000 UTC m=+0.022418325 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:28:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939fb4b0dac1cbd7c3f3f65d2ef77a6441361f4baacd11851b6ce732b973b040/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:28:51 compute-0 podman[348242]: 2026-02-28 10:28:51.013951785 +0000 UTC m=+0.155421031 container init 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:28:51 compute-0 podman[348242]: 2026-02-28 10:28:51.021355154 +0000 UTC m=+0.162824400 container start 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:28:51 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : New worker (348263) forked
Feb 28 10:28:51 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : Loading success.
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.862 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8621616, 360effe7-8380-410d-a5b8-59c28fa4a75a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.863 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Started (Lifecycle Event)
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.866 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.872 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.876 243456 INFO nova.virt.libvirt.driver [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance spawned successfully.
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.877 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.890 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.914 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.915 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8624976, 360effe7-8380-410d-a5b8-59c28fa4a75a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.916 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Paused (Lifecycle Event)
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.921 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.922 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.922 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.923 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.924 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.925 243456 DEBUG nova.virt.libvirt.driver [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.937 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274531.8708794, 360effe7-8380-410d-a5b8-59c28fa4a75a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Resumed (Lifecycle Event)
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:51 compute-0 ceph-mon[76304]: pgmap v1924: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.984 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.993 243456 INFO nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 8.99 seconds to spawn the instance on the hypervisor.
Feb 28 10:28:51 compute-0 nova_compute[243452]: 2026-02-28 10:28:51.994 243456 DEBUG nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.007 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.196 243456 INFO nova.compute.manager [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 10.28 seconds to build instance.
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.215 243456 DEBUG oslo_concurrency.lockutils [None req-ac5c067a-4bef-42e6-b6ee-cce8d8624713 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.755 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 DEBUG oslo_concurrency.lockutils [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 DEBUG nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] No waiting events found dispatching network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:28:52 compute-0 nova_compute[243452]: 2026-02-28 10:28:52.756 243456 WARNING nova.compute.manager [req-3c278018-8d36-45f6-95f5-80845d526e8d req-ca6455e1-9347-4941-bd39-8d0d8e999ed8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received unexpected event network-vif-plugged-89352e9c-3fec-48bc-a264-6de98ec910c3 for instance with vm_state active and task_state None.
Feb 28 10:28:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:53 compute-0 ceph-mon[76304]: pgmap v1925: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Feb 28 10:28:54 compute-0 nova_compute[243452]: 2026-02-28 10:28:54.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 28 10:28:56 compute-0 ceph-mon[76304]: pgmap v1926: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 28 10:28:56 compute-0 podman[348314]: 2026-02-28 10:28:56.159971192 +0000 UTC m=+0.088705457 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:28:56 compute-0 podman[348315]: 2026-02-28 10:28:56.17547726 +0000 UTC m=+0.096613040 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:28:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 10:28:56 compute-0 nova_compute[243452]: 2026-02-28 10:28:56.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:56 compute-0 NetworkManager[49805]: <info>  [1772274536.7066] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Feb 28 10:28:56 compute-0 NetworkManager[49805]: <info>  [1772274536.7081] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Feb 28 10:28:56 compute-0 ovn_controller[146846]: 2026-02-28T10:28:56Z|01226|binding|INFO|Releasing lport 57c5c2bd-8863-4c9b-bde1-322e4dae0a4d from this chassis (sb_readonly=0)
Feb 28 10:28:56 compute-0 nova_compute[243452]: 2026-02-28 10:28:56.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:56 compute-0 nova_compute[243452]: 2026-02-28 10:28:56.765 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:28:57 compute-0 nova_compute[243452]: 2026-02-28 10:28:57.518 243456 DEBUG nova.compute.manager [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:28:57 compute-0 nova_compute[243452]: 2026-02-28 10:28:57.518 243456 DEBUG nova.compute.manager [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:28:57 compute-0 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:28:57 compute-0 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:28:57 compute-0 nova_compute[243452]: 2026-02-28 10:28:57.519 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:28:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.868 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:28:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:28:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:28:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:28:58 compute-0 ceph-mon[76304]: pgmap v1927: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Feb 28 10:28:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Feb 28 10:28:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:28:59 compute-0 nova_compute[243452]: 2026-02-28 10:28:59.186 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:00 compute-0 ceph-mon[76304]: pgmap v1928: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Feb 28 10:29:00 compute-0 nova_compute[243452]: 2026-02-28 10:29:00.200 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:29:00 compute-0 nova_compute[243452]: 2026-02-28 10:29:00.201 243456 DEBUG nova.network.neutron [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:29:00 compute-0 nova_compute[243452]: 2026-02-28 10:29:00.236 243456 DEBUG oslo_concurrency.lockutils [req-16f06e19-4ddd-4990-9cf8-8ba4b04d600b req-09f8ac65-50a7-44b1-a1ec-f9ce7d52d9e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:29:00 compute-0 sudo[348360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:29:00 compute-0 sudo[348360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:00 compute-0 sudo[348360]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:00 compute-0 sudo[348385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:29:00 compute-0 sudo[348385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:01 compute-0 sudo[348385]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:29:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:29:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:29:01 compute-0 sudo[348441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:29:01 compute-0 sudo[348441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:01 compute-0 sudo[348441]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:01 compute-0 sudo[348466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:29:01 compute-0 sudo[348466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:01 compute-0 podman[348503]: 2026-02-28 10:29:01.93233191 +0000 UTC m=+0.066712155 container create 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:29:01 compute-0 systemd[1]: Started libpod-conmon-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope.
Feb 28 10:29:01 compute-0 podman[348503]: 2026-02-28 10:29:01.892136425 +0000 UTC m=+0.026516730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:02 compute-0 ceph-mon[76304]: pgmap v1929: 305 pgs: 305 active+clean; 200 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:29:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:29:02 compute-0 podman[348503]: 2026-02-28 10:29:02.035964657 +0000 UTC m=+0.170344922 container init 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:29:02 compute-0 podman[348503]: 2026-02-28 10:29:02.047284427 +0000 UTC m=+0.181664672 container start 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:29:02 compute-0 podman[348503]: 2026-02-28 10:29:02.052683269 +0000 UTC m=+0.187063524 container attach 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Feb 28 10:29:02 compute-0 stupefied_grothendieck[348520]: 167 167
Feb 28 10:29:02 compute-0 systemd[1]: libpod-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope: Deactivated successfully.
Feb 28 10:29:02 compute-0 podman[348503]: 2026-02-28 10:29:02.070810351 +0000 UTC m=+0.205190606 container died 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8b232f25abe1920265b1188db3c529006e5db8f3a0363cbed0d273a20ed7ac-merged.mount: Deactivated successfully.
Feb 28 10:29:02 compute-0 podman[348503]: 2026-02-28 10:29:02.122280585 +0000 UTC m=+0.256660810 container remove 4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:29:02 compute-0 systemd[1]: libpod-conmon-4985bf843ea8f3dc1eb548b220dc9c71255172e6f949aee01f069bec2c0e9ef6.scope: Deactivated successfully.
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.312300212 +0000 UTC m=+0.056007383 container create 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 28 10:29:02 compute-0 systemd[1]: Started libpod-conmon-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope.
Feb 28 10:29:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.287188243 +0000 UTC m=+0.030895484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.402954063 +0000 UTC m=+0.146661234 container init 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.409388084 +0000 UTC m=+0.153095275 container start 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.414829968 +0000 UTC m=+0.158537119 container attach 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:29:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 212 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 893 KiB/s wr, 87 op/s
Feb 28 10:29:02 compute-0 determined_cori[348559]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:29:02 compute-0 determined_cori[348559]: --> All data devices are unavailable
Feb 28 10:29:02 compute-0 systemd[1]: libpod-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope: Deactivated successfully.
Feb 28 10:29:02 compute-0 podman[348542]: 2026-02-28 10:29:02.955597881 +0000 UTC m=+0.699305052 container died 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3519148e2ed27371d0d4ca3ea4626100b96ebd170b9b79e5e5a1375ff27b69e-merged.mount: Deactivated successfully.
Feb 28 10:29:03 compute-0 podman[348542]: 2026-02-28 10:29:03.012672583 +0000 UTC m=+0.756379764 container remove 0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:29:03 compute-0 systemd[1]: libpod-conmon-0989d8b8f987720911c96c328e47436f87d47d745638167737577187e7762545.scope: Deactivated successfully.
Feb 28 10:29:03 compute-0 sudo[348466]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:03 compute-0 sudo[348591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:29:03 compute-0 sudo[348591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:03 compute-0 sudo[348591]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:03 compute-0 sudo[348616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:29:03 compute-0 sudo[348616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.427400527 +0000 UTC m=+0.032462988 container create 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:29:03 compute-0 systemd[1]: Started libpod-conmon-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope.
Feb 28 10:29:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.502884609 +0000 UTC m=+0.107947110 container init 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.412133346 +0000 UTC m=+0.017195827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.511234525 +0000 UTC m=+0.116296986 container start 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:29:03 compute-0 adoring_black[348670]: 167 167
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.515177716 +0000 UTC m=+0.120240217 container attach 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:29:03 compute-0 systemd[1]: libpod-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope: Deactivated successfully.
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.516204095 +0000 UTC m=+0.121266556 container died 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-46f632a377a61946bf067db8effed746ad51a63442cbb63af10445237b9c2815-merged.mount: Deactivated successfully.
Feb 28 10:29:03 compute-0 podman[348653]: 2026-02-28 10:29:03.559155008 +0000 UTC m=+0.164217469 container remove 3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 10:29:03 compute-0 systemd[1]: libpod-conmon-3be2808dd8ed9065f3abbc42e7ebf94491db27a519e94a02775725d09d00c445.scope: Deactivated successfully.
Feb 28 10:29:03 compute-0 podman[348694]: 2026-02-28 10:29:03.744685309 +0000 UTC m=+0.076719608 container create 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:29:03 compute-0 ovn_controller[146846]: 2026-02-28T10:29:03Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 10:29:03 compute-0 ovn_controller[146846]: 2026-02-28T10:29:03Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:b8:e7 10.100.0.7
Feb 28 10:29:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:03 compute-0 systemd[1]: Started libpod-conmon-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope.
Feb 28 10:29:03 compute-0 podman[348694]: 2026-02-28 10:29:03.713200419 +0000 UTC m=+0.045234788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:03 compute-0 podman[348694]: 2026-02-28 10:29:03.847332658 +0000 UTC m=+0.179366957 container init 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:29:03 compute-0 podman[348694]: 2026-02-28 10:29:03.853752869 +0000 UTC m=+0.185787148 container start 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:29:03 compute-0 podman[348694]: 2026-02-28 10:29:03.858630597 +0000 UTC m=+0.190664896 container attach 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:29:04 compute-0 ceph-mon[76304]: pgmap v1930: 305 pgs: 305 active+clean; 212 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 893 KiB/s wr, 87 op/s
Feb 28 10:29:04 compute-0 vigilant_turing[348710]: {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     "0": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "devices": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "/dev/loop3"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             ],
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_name": "ceph_lv0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_size": "21470642176",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "name": "ceph_lv0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "tags": {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_name": "ceph",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.crush_device_class": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.encrypted": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.objectstore": "bluestore",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_id": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.vdo": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.with_tpm": "0"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             },
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "vg_name": "ceph_vg0"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         }
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     ],
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     "1": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "devices": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "/dev/loop4"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             ],
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_name": "ceph_lv1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_size": "21470642176",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "name": "ceph_lv1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "tags": {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_name": "ceph",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.crush_device_class": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.encrypted": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.objectstore": "bluestore",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_id": "1",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.vdo": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.with_tpm": "0"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             },
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "vg_name": "ceph_vg1"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         }
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     ],
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     "2": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "devices": [
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "/dev/loop5"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             ],
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_name": "ceph_lv2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_size": "21470642176",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "name": "ceph_lv2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "tags": {
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.cluster_name": "ceph",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.crush_device_class": "",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.encrypted": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.objectstore": "bluestore",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osd_id": "2",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.vdo": "0",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:                 "ceph.with_tpm": "0"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             },
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "type": "block",
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:             "vg_name": "ceph_vg2"
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:         }
Feb 28 10:29:04 compute-0 vigilant_turing[348710]:     ]
Feb 28 10:29:04 compute-0 vigilant_turing[348710]: }
Feb 28 10:29:04 compute-0 systemd[1]: libpod-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope: Deactivated successfully.
Feb 28 10:29:04 compute-0 podman[348694]: 2026-02-28 10:29:04.183502593 +0000 UTC m=+0.515536872 container died 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:29:04 compute-0 nova_compute[243452]: 2026-02-28 10:29:04.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-86326b5bf5fbe8749d96846638796a44deec41e2ba575377020e582c283f27a7-merged.mount: Deactivated successfully.
Feb 28 10:29:04 compute-0 podman[348694]: 2026-02-28 10:29:04.226890778 +0000 UTC m=+0.558925047 container remove 6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:29:04 compute-0 systemd[1]: libpod-conmon-6038d1cd19654d9b31824fb552616971e94afcb3c66a6fe89d1eb646d2bbce2f.scope: Deactivated successfully.
Feb 28 10:29:04 compute-0 sudo[348616]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:04 compute-0 sudo[348731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:29:04 compute-0 sudo[348731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:04 compute-0 sudo[348731]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:04 compute-0 sudo[348756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:29:04 compute-0 sudo[348756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 218 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.659789036 +0000 UTC m=+0.046295879 container create c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:29:04 compute-0 systemd[1]: Started libpod-conmon-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope.
Feb 28 10:29:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.636139658 +0000 UTC m=+0.022646581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.744742895 +0000 UTC m=+0.131249808 container init c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.751918188 +0000 UTC m=+0.138425061 container start c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.756568489 +0000 UTC m=+0.143075352 container attach c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:29:04 compute-0 systemd[1]: libpod-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope: Deactivated successfully.
Feb 28 10:29:04 compute-0 wonderful_joliot[348810]: 167 167
Feb 28 10:29:04 compute-0 conmon[348810]: conmon c810cd3916abb3007f33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope/container/memory.events
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.759814941 +0000 UTC m=+0.146321824 container died c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-377e7f4fc97f8bbec2df499eccf0e6e179a7ccbc8f8b14a05bdc289be000c61e-merged.mount: Deactivated successfully.
Feb 28 10:29:04 compute-0 podman[348794]: 2026-02-28 10:29:04.802171877 +0000 UTC m=+0.188678760 container remove c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:29:04 compute-0 systemd[1]: libpod-conmon-c810cd3916abb3007f332be89f5b6b6dde600966a6bdaf867f3bca6fdcc6b623.scope: Deactivated successfully.
Feb 28 10:29:04 compute-0 podman[348833]: 2026-02-28 10:29:04.990363903 +0000 UTC m=+0.056977111 container create 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:29:05 compute-0 systemd[1]: Started libpod-conmon-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope.
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:04.967726733 +0000 UTC m=+0.034339991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:29:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:05.089387429 +0000 UTC m=+0.156000627 container init 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:05.104361512 +0000 UTC m=+0.170974730 container start 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:05.109658012 +0000 UTC m=+0.176271270 container attach 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:29:05 compute-0 lvm[348927]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:29:05 compute-0 lvm[348927]: VG ceph_vg0 finished
Feb 28 10:29:05 compute-0 lvm[348928]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:29:05 compute-0 lvm[348928]: VG ceph_vg1 finished
Feb 28 10:29:05 compute-0 lvm[348930]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:29:05 compute-0 lvm[348930]: VG ceph_vg2 finished
Feb 28 10:29:05 compute-0 xenodochial_mendel[348849]: {}
Feb 28 10:29:05 compute-0 systemd[1]: libpod-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Deactivated successfully.
Feb 28 10:29:05 compute-0 systemd[1]: libpod-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Consumed 1.115s CPU time.
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:05.932146033 +0000 UTC m=+0.998759251 container died 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3759a8770ea52661801db83f10cc14ad03550a456e8ece50bde6e543d001c35-merged.mount: Deactivated successfully.
Feb 28 10:29:05 compute-0 podman[348833]: 2026-02-28 10:29:05.98125968 +0000 UTC m=+1.047872858 container remove 598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:29:05 compute-0 systemd[1]: libpod-conmon-598912602c26c43ff8827f803a8145e6d8b8cb70ccccfcbff0881343fece8766.scope: Deactivated successfully.
Feb 28 10:29:06 compute-0 sudo[348756]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:29:06 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:29:06 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:06 compute-0 ceph-mon[76304]: pgmap v1931: 305 pgs: 305 active+clean; 218 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Feb 28 10:29:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:06 compute-0 sudo[348945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:29:06 compute-0 sudo[348945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:29:06 compute-0 sudo[348945]: pam_unix(sudo:session): session closed for user root
Feb 28 10:29:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 227 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Feb 28 10:29:07 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:29:08 compute-0 ceph-mon[76304]: pgmap v1932: 305 pgs: 305 active+clean; 227 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Feb 28 10:29:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Feb 28 10:29:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:09 compute-0 nova_compute[243452]: 2026-02-28 10:29:09.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:09 compute-0 nova_compute[243452]: 2026-02-28 10:29:09.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:10 compute-0 ceph-mon[76304]: pgmap v1933: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Feb 28 10:29:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:29:12 compute-0 ceph-mon[76304]: pgmap v1934: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:29:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:29:13 compute-0 nova_compute[243452]: 2026-02-28 10:29:13.675 243456 DEBUG nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:29:13 compute-0 nova_compute[243452]: 2026-02-28 10:29:13.753 243456 INFO nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] instance snapshotting
Feb 28 10:29:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:14 compute-0 nova_compute[243452]: 2026-02-28 10:29:14.065 243456 INFO nova.virt.libvirt.driver [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Beginning live snapshot process
Feb 28 10:29:14 compute-0 ceph-mon[76304]: pgmap v1935: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:29:14 compute-0 nova_compute[243452]: 2026-02-28 10:29:14.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:14 compute-0 nova_compute[243452]: 2026-02-28 10:29:14.231 243456 DEBUG nova.virt.libvirt.imagebackend [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:29:14 compute-0 nova_compute[243452]: 2026-02-28 10:29:14.442 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(cb18998cfc044a609e0df4ccadf9c2bf) on rbd image(360effe7-8380-410d-a5b8-59c28fa4a75a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:29:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 28 10:29:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb 28 10:29:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb 28 10:29:15 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb 28 10:29:15 compute-0 nova_compute[243452]: 2026-02-28 10:29:15.154 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning vms/360effe7-8380-410d-a5b8-59c28fa4a75a_disk@cb18998cfc044a609e0df4ccadf9c2bf to images/889a5b88-09b7-4c48-88ed-168ca73ca921 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:29:15 compute-0 nova_compute[243452]: 2026-02-28 10:29:15.265 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] flattening images/889a5b88-09b7-4c48-88ed-168ca73ca921 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:29:15 compute-0 nova_compute[243452]: 2026-02-28 10:29:15.580 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] removing snapshot(cb18998cfc044a609e0df4ccadf9c2bf) on rbd image(360effe7-8380-410d-a5b8-59c28fa4a75a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:29:16 compute-0 ceph-mon[76304]: pgmap v1936: 305 pgs: 305 active+clean; 233 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Feb 28 10:29:16 compute-0 ceph-mon[76304]: osdmap e274: 3 total, 3 up, 3 in
Feb 28 10:29:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb 28 10:29:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb 28 10:29:16 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb 28 10:29:16 compute-0 nova_compute[243452]: 2026-02-28 10:29:16.161 243456 DEBUG nova.storage.rbd_utils [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(snap) on rbd image(889a5b88-09b7-4c48-88ed-168ca73ca921) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:29:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 256 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 28 10:29:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb 28 10:29:17 compute-0 ceph-mon[76304]: osdmap e275: 3 total, 3 up, 3 in
Feb 28 10:29:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb 28 10:29:17 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb 28 10:29:18 compute-0 ceph-mon[76304]: pgmap v1939: 305 pgs: 305 active+clean; 256 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 28 10:29:18 compute-0 ceph-mon[76304]: osdmap e276: 3 total, 3 up, 3 in
Feb 28 10:29:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 268 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 94 op/s
Feb 28 10:29:18 compute-0 nova_compute[243452]: 2026-02-28 10:29:18.716 243456 INFO nova.virt.libvirt.driver [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Snapshot image upload complete
Feb 28 10:29:18 compute-0 nova_compute[243452]: 2026-02-28 10:29:18.717 243456 INFO nova.compute.manager [None req-9195a26a-3c5c-45d0-aace-cd48ecf1a53a f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 4.96 seconds to snapshot the instance on the hypervisor.
Feb 28 10:29:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:19 compute-0 nova_compute[243452]: 2026-02-28 10:29:19.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:19 compute-0 nova_compute[243452]: 2026-02-28 10:29:19.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:20 compute-0 ceph-mon[76304]: pgmap v1941: 305 pgs: 305 active+clean; 268 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 94 op/s
Feb 28 10:29:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 154 op/s
Feb 28 10:29:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:21.910 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:29:21 compute-0 nova_compute[243452]: 2026-02-28 10:29:21.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:21 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:21.911 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:29:22 compute-0 ceph-mon[76304]: pgmap v1942: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 154 op/s
Feb 28 10:29:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 137 op/s
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.677 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.677 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.714 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:29:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb 28 10:29:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb 28 10:29:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.811 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.812 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.822 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.822 243456 INFO nova.compute.claims [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:29:23 compute-0 nova_compute[243452]: 2026-02-28 10:29:23.954 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:24 compute-0 ceph-mon[76304]: pgmap v1943: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 137 op/s
Feb 28 10:29:24 compute-0 ceph-mon[76304]: osdmap e277: 3 total, 3 up, 3 in
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.226 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:29:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745733054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.504 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.512 243456 DEBUG nova.compute.provider_tree [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.530 243456 DEBUG nova.scheduler.client.report [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.553 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.554 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.603 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.603 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.619 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:29:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 89 op/s
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.635 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.735 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.737 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.738 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating image(s)
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.775 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.801 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.827 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.830 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.831 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:24 compute-0 nova_compute[243452]: 2026-02-28 10:29:24.900 243456 DEBUG nova.policy [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f54beab12fce4ee8adf80742bf33b916', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '344dd946e14146ab93c01183964c71b3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.091 243456 DEBUG nova.virt.libvirt.imagebackend [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.133 243456 DEBUG nova.virt.libvirt.imagebackend [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/889a5b88-09b7-4c48-88ed-168ca73ca921/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.134 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning images/889a5b88-09b7-4c48-88ed-168ca73ca921@snap to None/0292c690-55db-421d-a004-21a0acd72961_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:29:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2745733054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.297 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "4d2b9309393ff07c8cae21ceab624ba9ca90168e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.444 243456 DEBUG nova.objects.instance [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.463 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.463 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Ensure instance console log exists: /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:25 compute-0 nova_compute[243452]: 2026-02-28 10:29:25.464 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:26 compute-0 ceph-mon[76304]: pgmap v1945: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 89 op/s
Feb 28 10:29:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Feb 28 10:29:26 compute-0 nova_compute[243452]: 2026-02-28 10:29:26.760 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Successfully created port: 0e852cef-a459-43ce-ad9b-4f379e129ace _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:29:27 compute-0 podman[349312]: 2026-02-28 10:29:27.169942512 +0000 UTC m=+0.090266040 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:29:27 compute-0 podman[349311]: 2026-02-28 10:29:27.209944682 +0000 UTC m=+0.130217339 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.148 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Successfully updated port: 0e852cef-a459-43ce-ad9b-4f379e129ace _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.166 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.167 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.168 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.277 243456 DEBUG nova.compute.manager [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:29:28 compute-0 ceph-mon[76304]: pgmap v1946: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 78 op/s
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.278 243456 DEBUG nova.compute.manager [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.279 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.365 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:29:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 28 10:29:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:28 compute-0 nova_compute[243452]: 2026-02-28 10:29:28.893 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:29:29
Feb 28 10:29:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:29:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:29:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr']
Feb 28 10:29:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:29:29 compute-0 nova_compute[243452]: 2026-02-28 10:29:29.198 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:29 compute-0 nova_compute[243452]: 2026-02-28 10:29:29.228 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:29.913 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:30 compute-0 ceph-mon[76304]: pgmap v1947: 305 pgs: 305 active+clean; 312 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.538 243456 DEBUG nova.network.neutron [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.577 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.578 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance network_info: |[{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.579 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.579 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.585 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start _get_guest_xml network_info=[{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:29:13Z,direct_url=<?>,disk_format='raw',id=889a5b88-09b7-4c48-88ed-168ca73ca921,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-946860430',owner='344dd946e14146ab93c01183964c71b3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:29:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '889a5b88-09b7-4c48-88ed-168ca73ca921'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.591 243456 WARNING nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.598 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.599 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.610 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.611 243456 DEBUG nova.virt.libvirt.host [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.611 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.612 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:29:13Z,direct_url=<?>,disk_format='raw',id=889a5b88-09b7-4c48-88ed-168ca73ca921,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-946860430',owner='344dd946e14146ab93c01183964c71b3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:29:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.612 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.613 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.614 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.614 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.615 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.616 243456 DEBUG nova.virt.hardware [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:29:30 compute-0 nova_compute[243452]: 2026-02-28 10:29:30.620 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:29:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:29:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:29:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528040933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.234 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.265 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.272 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3528040933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:29:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690528608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.835 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.837 243456 DEBUG nova.virt.libvirt.vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:29:24Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=0292c690
-55db-421d-a004-21a0acd72961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.838 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.839 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.841 243456 DEBUG nova.objects.instance [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.868 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <uuid>0292c690-55db-421d-a004-21a0acd72961</uuid>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <name>instance-0000007a</name>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:name>tempest-TestSnapshotPattern-server-981413902</nova:name>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:29:30</nova:creationTime>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:user uuid="f54beab12fce4ee8adf80742bf33b916">tempest-TestSnapshotPattern-2121060882-project-member</nova:user>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:project uuid="344dd946e14146ab93c01183964c71b3">tempest-TestSnapshotPattern-2121060882</nova:project>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="889a5b88-09b7-4c48-88ed-168ca73ca921"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <nova:port uuid="0e852cef-a459-43ce-ad9b-4f379e129ace">
Feb 28 10:29:31 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <system>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="serial">0292c690-55db-421d-a004-21a0acd72961</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="uuid">0292c690-55db-421d-a004-21a0acd72961</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </system>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <os>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </os>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <features>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </features>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0292c690-55db-421d-a004-21a0acd72961_disk">
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/0292c690-55db-421d-a004-21a0acd72961_disk.config">
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:29:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:64:2e:7f"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <target dev="tap0e852cef-a4"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/console.log" append="off"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <video>
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </video>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:29:31 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:29:31 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:29:31 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:29:31 compute-0 nova_compute[243452]: </domain>
Feb 28 10:29:31 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.869 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Preparing to wait for external event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.870 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.870 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.871 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.873 243456 DEBUG nova.virt.libvirt.vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:29:24Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uui
d=0292c690-55db-421d-a004-21a0acd72961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.874 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.875 243456 DEBUG nova.network.os_vif_util [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.876 243456 DEBUG os_vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.877 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.878 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.879 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.885 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e852cef-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.886 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e852cef-a4, col_values=(('external_ids', {'iface-id': '0e852cef-a459-43ce-ad9b-4f379e129ace', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:2e:7f', 'vm-uuid': '0292c690-55db-421d-a004-21a0acd72961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:31 compute-0 NetworkManager[49805]: <info>  [1772274571.8897] manager: (tap0e852cef-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.897 243456 INFO os_vif [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4')
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.964 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] No VIF found with MAC fa:16:3e:64:2e:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.965 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Using config drive
Feb 28 10:29:31 compute-0 nova_compute[243452]: 2026-02-28 10:29:31.992 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:32 compute-0 ceph-mon[76304]: pgmap v1948: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 28 10:29:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3690528608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:29:32 compute-0 nova_compute[243452]: 2026-02-28 10:29:32.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.016 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Creating config drive at /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.020 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjofqfkwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.171 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjofqfkwr" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.212 243456 DEBUG nova.storage.rbd_utils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] rbd image 0292c690-55db-421d-a004-21a0acd72961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.218 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config 0292c690-55db-421d-a004-21a0acd72961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.398 243456 DEBUG oslo_concurrency.processutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config 0292c690-55db-421d-a004-21a0acd72961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.398 243456 INFO nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deleting local config drive /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961/disk.config because it was imported into RBD.
Feb 28 10:29:33 compute-0 kernel: tap0e852cef-a4: entered promiscuous mode
Feb 28 10:29:33 compute-0 NetworkManager[49805]: <info>  [1772274573.4536] manager: (tap0e852cef-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Feb 28 10:29:33 compute-0 ovn_controller[146846]: 2026-02-28T10:29:33Z|01227|binding|INFO|Claiming lport 0e852cef-a459-43ce-ad9b-4f379e129ace for this chassis.
Feb 28 10:29:33 compute-0 ovn_controller[146846]: 2026-02-28T10:29:33Z|01228|binding|INFO|0e852cef-a459-43ce-ad9b-4f379e129ace: Claiming fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.464 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:2e:7f 10.100.0.11'], port_security=['fa:16:3e:64:2e:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0292c690-55db-421d-a004-21a0acd72961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0e852cef-a459-43ce-ad9b-4f379e129ace) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:29:33 compute-0 ovn_controller[146846]: 2026-02-28T10:29:33Z|01229|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace ovn-installed in OVS
Feb 28 10:29:33 compute-0 ovn_controller[146846]: 2026-02-28T10:29:33Z|01230|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace up in Southbound
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.466 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0e852cef-a459-43ce-ad9b-4f379e129ace in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 bound to our chassis
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.468 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:33 compute-0 systemd-udevd[349489]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.485 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92ac4296-6b36-4962-841c-aaa3d2b9c72d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 systemd-machined[209480]: New machine qemu-155-instance-0000007a.
Feb 28 10:29:33 compute-0 NetworkManager[49805]: <info>  [1772274573.4978] device (tap0e852cef-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:29:33 compute-0 NetworkManager[49805]: <info>  [1772274573.4985] device (tap0e852cef-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:29:33 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.514 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[951efe52-e67d-46cb-a898-04aa25a49863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.517 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f73c696-3351-4520-bf02-c2608d3f3ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.548 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d371b0c6-e169-4bde-b5db-25ed3121c064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.566 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[80ea9a06-0199-48ee-907b-a96e0741294d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349502, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8530431f-cd9e-45b2-adb6-8fcd6685af68]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603781, 'tstamp': 603781}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349503, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603784, 'tstamp': 603784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349503, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.578 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.581 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:29:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:33.582 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:29:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.947 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.948 243456 DEBUG nova.network.neutron [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:29:33 compute-0 nova_compute[243452]: 2026-02-28 10:29:33.967 243456 DEBUG oslo_concurrency.lockutils [req-df03af75-24a0-4f3e-8c06-03ac08e4c143 req-a93ddc47-6bb2-449a-b13c-2cecc9a9b33d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.068 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.0675745, 0292c690-55db-421d-a004-21a0acd72961 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.069 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Started (Lifecycle Event)
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.095 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.0678458, 0292c690-55db-421d-a004-21a0acd72961 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.095 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Paused (Lifecycle Event)
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.111 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.115 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.136 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:34 compute-0 ceph-mon[76304]: pgmap v1949: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Feb 28 10:29:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 15 KiB/s wr, 34 op/s
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.861 243456 DEBUG nova.compute.manager [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.861 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.862 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.862 243456 DEBUG oslo_concurrency.lockutils [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.863 243456 DEBUG nova.compute.manager [req-80013901-bd31-49f8-ab8c-3ae32e815e53 req-1bacb42d-47c4-4a95-baaf-5f2886bc1185 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Processing event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.864 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.869 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274574.8691294, 0292c690-55db-421d-a004-21a0acd72961 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.869 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Resumed (Lifecycle Event)
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.873 243456 DEBUG nova.virt.libvirt.driver [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.877 243456 INFO nova.virt.libvirt.driver [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance spawned successfully.
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.877 243456 INFO nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 10.14 seconds to spawn the instance on the hypervisor.
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.878 243456 DEBUG nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.894 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.900 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.933 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.972 243456 INFO nova.compute.manager [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 11.19 seconds to build instance.
Feb 28 10:29:34 compute-0 nova_compute[243452]: 2026-02-28 10:29:34.989 243456 DEBUG oslo_concurrency.lockutils [None req-b438ac0a-2b93-4bd6-bc24-54092f72f62d f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:36 compute-0 nova_compute[243452]: 2026-02-28 10:29:36.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:36 compute-0 ceph-mon[76304]: pgmap v1950: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 15 KiB/s wr, 34 op/s
Feb 28 10:29:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 13 KiB/s wr, 33 op/s
Feb 28 10:29:36 compute-0 nova_compute[243452]: 2026-02-28 10:29:36.889 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:38 compute-0 ceph-mon[76304]: pgmap v1951: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 13 KiB/s wr, 33 op/s
Feb 28 10:29:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 13 KiB/s wr, 54 op/s
Feb 28 10:29:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.674 243456 DEBUG nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.675 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.676 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.676 243456 DEBUG oslo_concurrency.lockutils [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.677 243456 DEBUG nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] No waiting events found dispatching network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.677 243456 WARNING nova.compute.manager [req-5424511b-6951-46bf-a0c2-bb2c3a2a4165 req-aed33b3a-6079-40f2-aa99-ff3efedad845 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received unexpected event network-vif-plugged-0e852cef-a459-43ce-ad9b-4f379e129ace for instance with vm_state active and task_state None.
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.692 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.692 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:29:39 compute-0 nova_compute[243452]: 2026-02-28 10:29:39.693 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:29:40 compute-0 nova_compute[243452]: 2026-02-28 10:29:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:29:40 compute-0 nova_compute[243452]: 2026-02-28 10:29:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:29:40 compute-0 nova_compute[243452]: 2026-02-28 10:29:40.245 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:29:40 compute-0 nova_compute[243452]: 2026-02-28 10:29:40.245 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:29:40 compute-0 ceph-mon[76304]: pgmap v1952: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 13 KiB/s wr, 54 op/s
Feb 28 10:29:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007763523107519869 of space, bias 1.0, pg target 0.23290569322559607 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0032514036342161058 of space, bias 1.0, pg target 0.9754210902648317 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.358730371058775e-07 of space, bias 4.0, pg target 0.0008830476445270529 quantized to 16 (current 16)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:29:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:29:41 compute-0 nova_compute[243452]: 2026-02-28 10:29:41.891 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:42 compute-0 ceph-mon[76304]: pgmap v1953: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Feb 28 10:29:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.273 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.297 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.298 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.298 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.299 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.322 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.323 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.324 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.324 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.325 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.674 243456 DEBUG nova.compute.manager [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG nova.compute.manager [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.675 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.676 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:29:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:29:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655490118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:43 compute-0 nova_compute[243452]: 2026-02-28 10:29:43.921 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.043 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.044 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.053 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.054 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.295 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.296 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3334MB free_disk=59.94170920923352GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.297 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.298 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:44 compute-0 ceph-mon[76304]: pgmap v1954: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 10:29:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2655490118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.398 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 360effe7-8380-410d-a5b8-59c28fa4a75a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.400 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 0292c690-55db-421d-a004-21a0acd72961 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.400 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.426 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.443 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.444 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.461 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.503 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.567 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:29:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.881 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.882 243456 DEBUG nova.network.neutron [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:29:44 compute-0 nova_compute[243452]: 2026-02-28 10:29:44.906 243456 DEBUG oslo_concurrency.lockutils [req-203af496-928d-49e8-97e8-e2a878eebaaf req-f2a6ee0a-6118-4654-bcbd-3c8fb94986a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:29:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:29:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906682355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:45 compute-0 nova_compute[243452]: 2026-02-28 10:29:45.104 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:29:45 compute-0 nova_compute[243452]: 2026-02-28 10:29:45.111 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:29:45 compute-0 nova_compute[243452]: 2026-02-28 10:29:45.130 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:29:45 compute-0 nova_compute[243452]: 2026-02-28 10:29:45.160 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:29:45 compute-0 nova_compute[243452]: 2026-02-28 10:29:45.161 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1906682355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:29:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:29:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:29:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:29:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:29:46 compute-0 ceph-mon[76304]: pgmap v1955: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 28 10:29:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:29:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2025239271' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:29:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 73 op/s
Feb 28 10:29:46 compute-0 nova_compute[243452]: 2026-02-28 10:29:46.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:47 compute-0 ovn_controller[146846]: 2026-02-28T10:29:47Z|00133|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.11
Feb 28 10:29:47 compute-0 ovn_controller[146846]: 2026-02-28T10:29:47Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 10:29:48 compute-0 ceph-mon[76304]: pgmap v1956: 305 pgs: 305 active+clean; 312 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 73 op/s
Feb 28 10:29:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 178 KiB/s wr, 86 op/s
Feb 28 10:29:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:49 compute-0 nova_compute[243452]: 2026-02-28 10:29:49.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:50 compute-0 ceph-mon[76304]: pgmap v1957: 305 pgs: 305 active+clean; 318 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 178 KiB/s wr, 86 op/s
Feb 28 10:29:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 499 KiB/s wr, 101 op/s
Feb 28 10:29:51 compute-0 nova_compute[243452]: 2026-02-28 10:29:51.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:52 compute-0 ovn_controller[146846]: 2026-02-28T10:29:52Z|00135|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.11
Feb 28 10:29:52 compute-0 ovn_controller[146846]: 2026-02-28T10:29:52Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 10:29:52 compute-0 ceph-mon[76304]: pgmap v1958: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 499 KiB/s wr, 101 op/s
Feb 28 10:29:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Feb 28 10:29:52 compute-0 ovn_controller[146846]: 2026-02-28T10:29:52Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 10:29:52 compute-0 ovn_controller[146846]: 2026-02-28T10:29:52Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:2e:7f 10.100.0.11
Feb 28 10:29:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:54 compute-0 nova_compute[243452]: 2026-02-28 10:29:54.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:54 compute-0 ceph-mon[76304]: pgmap v1959: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Feb 28 10:29:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 55 op/s
Feb 28 10:29:56 compute-0 ceph-mon[76304]: pgmap v1960: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 55 op/s
Feb 28 10:29:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 56 op/s
Feb 28 10:29:56 compute-0 nova_compute[243452]: 2026-02-28 10:29:56.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:29:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.869 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:29:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.870 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:29:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:29:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:29:58 compute-0 podman[349593]: 2026-02-28 10:29:58.131770112 +0000 UTC m=+0.063301584 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 10:29:58 compute-0 podman[349592]: 2026-02-28 10:29:58.185167694 +0000 UTC m=+0.119326591 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:29:58 compute-0 ceph-mon[76304]: pgmap v1961: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 56 op/s
Feb 28 10:29:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 56 op/s
Feb 28 10:29:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:29:59 compute-0 nova_compute[243452]: 2026-02-28 10:29:59.214 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:00 compute-0 nova_compute[243452]: 2026-02-28 10:30:00.093 243456 DEBUG nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:30:00 compute-0 nova_compute[243452]: 2026-02-28 10:30:00.217 243456 INFO nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] instance snapshotting
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:00 compute-0 ceph-mon[76304]: pgmap v1962: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 56 op/s
Feb 28 10:30:00 compute-0 nova_compute[243452]: 2026-02-28 10:30:00.530 243456 INFO nova.virt.libvirt.driver [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Beginning live snapshot process
Feb 28 10:30:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 371 KiB/s wr, 45 op/s
Feb 28 10:30:00 compute-0 nova_compute[243452]: 2026-02-28 10:30:00.784 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(3390a3503c494e2abd5d03d13aa46d06) on rbd image(0292c690-55db-421d-a004-21a0acd72961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:30:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb 28 10:30:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb 28 10:30:01 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb 28 10:30:01 compute-0 nova_compute[243452]: 2026-02-28 10:30:01.520 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] cloning vms/0292c690-55db-421d-a004-21a0acd72961_disk@3390a3503c494e2abd5d03d13aa46d06 to images/4f149197-39c6-4851-9529-9ce8f0064ae9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:30:01 compute-0 nova_compute[243452]: 2026-02-28 10:30:01.646 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] flattening images/4f149197-39c6-4851-9529-9ce8f0064ae9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:30:01 compute-0 nova_compute[243452]: 2026-02-28 10:30:01.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:02 compute-0 nova_compute[243452]: 2026-02-28 10:30:02.094 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] removing snapshot(3390a3503c494e2abd5d03d13aa46d06) on rbd image(0292c690-55db-421d-a004-21a0acd72961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:30:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb 28 10:30:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb 28 10:30:02 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb 28 10:30:02 compute-0 ceph-mon[76304]: pgmap v1963: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 371 KiB/s wr, 45 op/s
Feb 28 10:30:02 compute-0 ceph-mon[76304]: osdmap e278: 3 total, 3 up, 3 in
Feb 28 10:30:02 compute-0 nova_compute[243452]: 2026-02-28 10:30:02.501 243456 DEBUG nova.storage.rbd_utils [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] creating snapshot(snap) on rbd image(4f149197-39c6-4851-9529-9ce8f0064ae9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:30:02 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 28 10:30:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 37 op/s
Feb 28 10:30:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb 28 10:30:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb 28 10:30:03 compute-0 ceph-mon[76304]: osdmap e279: 3 total, 3 up, 3 in
Feb 28 10:30:03 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb 28 10:30:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:04 compute-0 nova_compute[243452]: 2026-02-28 10:30:04.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:04 compute-0 ceph-mon[76304]: pgmap v1966: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 351 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 37 op/s
Feb 28 10:30:04 compute-0 ceph-mon[76304]: osdmap e280: 3 total, 3 up, 3 in
Feb 28 10:30:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 10 MiB/s wr, 146 op/s
Feb 28 10:30:05 compute-0 nova_compute[243452]: 2026-02-28 10:30:05.241 243456 INFO nova.virt.libvirt.driver [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Snapshot image upload complete
Feb 28 10:30:05 compute-0 nova_compute[243452]: 2026-02-28 10:30:05.242 243456 INFO nova.compute.manager [None req-62e28a86-b388-4154-a764-f162b1061421 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 5.02 seconds to snapshot the instance on the hypervisor.
Feb 28 10:30:06 compute-0 sudo[349780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:30:06 compute-0 sudo[349780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:06 compute-0 sudo[349780]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:06 compute-0 sudo[349805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 10:30:06 compute-0 sudo[349805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb 28 10:30:06 compute-0 ceph-mon[76304]: pgmap v1968: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 399 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 10 MiB/s wr, 146 op/s
Feb 28 10:30:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb 28 10:30:06 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb 28 10:30:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 431 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 17 MiB/s wr, 220 op/s
Feb 28 10:30:06 compute-0 podman[349875]: 2026-02-28 10:30:06.799315238 +0000 UTC m=+0.075300064 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:30:06 compute-0 podman[349875]: 2026-02-28 10:30:06.886293052 +0000 UTC m=+0.162277878 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:06 compute-0 nova_compute[243452]: 2026-02-28 10:30:06.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:07 compute-0 ceph-mon[76304]: osdmap e281: 3 total, 3 up, 3 in
Feb 28 10:30:07 compute-0 sudo[349805]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:30:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:30:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:07 compute-0 sudo[350062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:30:07 compute-0 sudo[350062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:07 compute-0 sudo[350062]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:07 compute-0 sudo[350087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:30:07 compute-0 sudo[350087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:08 compute-0 sudo[350087]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:08 compute-0 ceph-mon[76304]: pgmap v1970: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 431 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 17 MiB/s wr, 220 op/s
Feb 28 10:30:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:30:08 compute-0 sudo[350142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:30:08 compute-0 sudo[350142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:08 compute-0 sudo[350142]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 410 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 15 MiB/s wr, 189 op/s
Feb 28 10:30:08 compute-0 sudo[350167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:30:08 compute-0 sudo[350167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.802765) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608802822, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1870, "num_deletes": 253, "total_data_size": 2941732, "memory_usage": 2988088, "flush_reason": "Manual Compaction"}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608813456, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2876890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40082, "largest_seqno": 41951, "table_properties": {"data_size": 2868053, "index_size": 5523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16807, "raw_average_key_size": 19, "raw_value_size": 2850633, "raw_average_value_size": 3254, "num_data_blocks": 243, "num_entries": 876, "num_filter_entries": 876, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274442, "oldest_key_time": 1772274442, "file_creation_time": 1772274608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 10768 microseconds, and 5506 cpu microseconds.
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.813535) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2876890 bytes OK
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.813562) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814871) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814889) EVENT_LOG_v1 {"time_micros": 1772274608814883, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.814918) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2933693, prev total WAL file size 2933693, number of live WAL files 2.
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.815702) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2809KB)], [89(9117KB)]
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608815754, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12213194, "oldest_snapshot_seqno": -1}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6861 keys, 11461924 bytes, temperature: kUnknown
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608869863, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11461924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11413133, "index_size": 30503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 174497, "raw_average_key_size": 25, "raw_value_size": 11287735, "raw_average_value_size": 1645, "num_data_blocks": 1215, "num_entries": 6861, "num_filter_entries": 6861, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.870149) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11461924 bytes
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.871453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.4 rd, 211.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.9 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.2) write-amplify(4.0) OK, records in: 7382, records dropped: 521 output_compression: NoCompression
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.871473) EVENT_LOG_v1 {"time_micros": 1772274608871463, "job": 52, "event": "compaction_finished", "compaction_time_micros": 54193, "compaction_time_cpu_micros": 31112, "output_level": 6, "num_output_files": 1, "total_output_size": 11461924, "num_input_records": 7382, "num_output_records": 6861, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608871862, "job": 52, "event": "table_file_deletion", "file_number": 91}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274608873020, "job": 52, "event": "table_file_deletion", "file_number": 89}
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.815593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:08 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:08.873060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.006209543 +0000 UTC m=+0.061218946 container create 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.055 243456 DEBUG nova.compute.manager [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.055 243456 DEBUG nova.compute.manager [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing instance network info cache due to event network-changed-0e852cef-a459-43ce-ad9b-4f379e129ace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.056 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.056 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.057 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Refreshing network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:30:09 compute-0 systemd[1]: Started libpod-conmon-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope.
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:08.980473294 +0000 UTC m=+0.035482777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.113400999 +0000 UTC m=+0.168410422 container init 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.121291783 +0000 UTC m=+0.176301216 container start 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.126426538 +0000 UTC m=+0.181435941 container attach 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:30:09 compute-0 youthful_ganguly[350221]: 167 167
Feb 28 10:30:09 compute-0 systemd[1]: libpod-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope: Deactivated successfully.
Feb 28 10:30:09 compute-0 conmon[350221]: conmon 217eca288214117b46e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope/container/memory.events
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.132303294 +0000 UTC m=+0.187312767 container died 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39fabcd655c0b2873905d296a9dea0b9f484a825b23e6a7fef4b94405983071-merged.mount: Deactivated successfully.
Feb 28 10:30:09 compute-0 podman[350204]: 2026-02-28 10:30:09.188605949 +0000 UTC m=+0.243615362 container remove 217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:30:09 compute-0 systemd[1]: libpod-conmon-217eca288214117b46e5c672921ed95e432d6492d194501248ab57d3332aef4a.scope: Deactivated successfully.
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.372 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.373 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.373 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "0292c690-55db-421d-a004-21a0acd72961-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.374 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.374 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.376 243456 INFO nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Terminating instance
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.377 243456 DEBUG nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:30:09 compute-0 podman[350245]: 2026-02-28 10:30:09.381483333 +0000 UTC m=+0.054235617 container create f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:30:09 compute-0 systemd[1]: Started libpod-conmon-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope.
Feb 28 10:30:09 compute-0 kernel: tap0e852cef-a4 (unregistering): left promiscuous mode
Feb 28 10:30:09 compute-0 NetworkManager[49805]: <info>  [1772274609.4466] device (tap0e852cef-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:30:09 compute-0 podman[350245]: 2026-02-28 10:30:09.357692859 +0000 UTC m=+0.030445163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 ovn_controller[146846]: 2026-02-28T10:30:09Z|01231|binding|INFO|Releasing lport 0e852cef-a459-43ce-ad9b-4f379e129ace from this chassis (sb_readonly=0)
Feb 28 10:30:09 compute-0 ovn_controller[146846]: 2026-02-28T10:30:09Z|01232|binding|INFO|Setting lport 0e852cef-a459-43ce-ad9b-4f379e129ace down in Southbound
Feb 28 10:30:09 compute-0 ovn_controller[146846]: 2026-02-28T10:30:09Z|01233|binding|INFO|Removing iface tap0e852cef-a4 ovn-installed in OVS
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.473 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:2e:7f 10.100.0.11'], port_security=['fa:16:3e:64:2e:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0292c690-55db-421d-a004-21a0acd72961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0e852cef-a459-43ce-ad9b-4f379e129ace) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.475 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0e852cef-a459-43ce-ad9b-4f379e129ace in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 unbound from our chassis
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 143add3f-ffeb-40fb-88e5-0af28b700615
Feb 28 10:30:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.501 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46f5f91a-9fc9-4f7a-a58f-d6ddcddabbb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Feb 28 10:30:09 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 13.976s CPU time.
Feb 28 10:30:09 compute-0 podman[350245]: 2026-02-28 10:30:09.508030188 +0000 UTC m=+0.180782482 container init f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:09 compute-0 systemd-machined[209480]: Machine qemu-155-instance-0000007a terminated.
Feb 28 10:30:09 compute-0 podman[350245]: 2026-02-28 10:30:09.516540829 +0000 UTC m=+0.189293143 container start f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:30:09 compute-0 podman[350245]: 2026-02-28 10:30:09.521727416 +0000 UTC m=+0.194479710 container attach f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.531 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ca33baa8-86e9-477c-a707-c65af47e9624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:30:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.537 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1848bcd2-88cf-4bc9-87fd-4da9adb5f054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.563 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cf5d21-e249-4023-a44b-d770c01b5de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa6b2f3-4d39-49b3-a815-3eea24cc3651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap143add3f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:02:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603768, 'reachable_time': 37208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350278, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.604 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd390dcf-21f7-46a6-873c-6c6d9048c929]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603781, 'tstamp': 603781}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350279, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap143add3f-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603784, 'tstamp': 603784}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350279, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.607 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap143add3f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.616 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap143add3f-f0, col_values=(('external_ids', {'iface-id': '57c5c2bd-8863-4c9b-bde1-322e4dae0a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:09 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:09.617 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.618 243456 INFO nova.virt.libvirt.driver [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Instance destroyed successfully.
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.619 243456 DEBUG nova.objects.instance [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'resources' on Instance uuid 0292c690-55db-421d-a004-21a0acd72961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.647 243456 DEBUG nova.virt.libvirt.vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-981413902',display_name='tempest-TestSnapshotPattern-server-981413902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-981413902',id=122,image_ref='889a5b88-09b7-4c48-88ed-168ca73ca921',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:29:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-r0dl808n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='360effe7-8380-410d-a5b8-59c28fa4a75a',image_min_disk='1',image_min_ram='0',image_owner_id='344dd946e14146ab93c01183964c71b3',image_owner_project_name='tempest-TestSnapshotPattern-2121060882',image_owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member',image_user_id='f54beab12fce4ee8adf80742bf33b916',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:30:05Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=0292c690-55db-421d-a004-21a0acd72961,vcpu_model
=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.648 243456 DEBUG nova.network.os_vif_util [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.649 243456 DEBUG nova.network.os_vif_util [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.650 243456 DEBUG os_vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.653 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.654 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e852cef-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.659 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.664 243456 INFO os_vif [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:2e:7f,bridge_name='br-int',has_traffic_filtering=True,id=0e852cef-a459-43ce-ad9b-4f379e129ace,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e852cef-a4')
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.970 243456 INFO nova.virt.libvirt.driver [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deleting instance files /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961_del
Feb 28 10:30:09 compute-0 nova_compute[243452]: 2026-02-28 10:30:09.972 243456 INFO nova.virt.libvirt.driver [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deletion of /var/lib/nova/instances/0292c690-55db-421d-a004-21a0acd72961_del complete
Feb 28 10:30:10 compute-0 busy_jackson[350261]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:30:10 compute-0 busy_jackson[350261]: --> All data devices are unavailable
Feb 28 10:30:10 compute-0 systemd[1]: libpod-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope: Deactivated successfully.
Feb 28 10:30:10 compute-0 podman[350245]: 2026-02-28 10:30:10.046416099 +0000 UTC m=+0.719168413 container died f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c6d7b8668a60401c1b48c023903cd1abb01d7eb85f36c651d9d0a4338ccc77-merged.mount: Deactivated successfully.
Feb 28 10:30:10 compute-0 podman[350245]: 2026-02-28 10:30:10.103159546 +0000 UTC m=+0.775911850 container remove f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:30:10 compute-0 systemd[1]: libpod-conmon-f07e10ea06f09b27a8306afe41f061fb791fe59231a42b3a9932204cb847a430.scope: Deactivated successfully.
Feb 28 10:30:10 compute-0 sudo[350167]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:10 compute-0 sudo[350338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:30:10 compute-0 sudo[350338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:10 compute-0 sudo[350338]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:10 compute-0 sudo[350363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:30:10 compute-0 sudo[350363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:10 compute-0 nova_compute[243452]: 2026-02-28 10:30:10.374 243456 INFO nova.compute.manager [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 1.00 seconds to destroy the instance on the hypervisor.
Feb 28 10:30:10 compute-0 nova_compute[243452]: 2026-02-28 10:30:10.375 243456 DEBUG oslo.service.loopingcall [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:30:10 compute-0 nova_compute[243452]: 2026-02-28 10:30:10.375 243456 DEBUG nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:30:10 compute-0 nova_compute[243452]: 2026-02-28 10:30:10.376 243456 DEBUG nova.network.neutron [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:30:10 compute-0 ceph-mon[76304]: pgmap v1971: 305 pgs: 305 active+clean; 410 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 15 MiB/s wr, 189 op/s
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.583313438 +0000 UTC m=+0.054574157 container create ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:10 compute-0 systemd[1]: Started libpod-conmon-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope.
Feb 28 10:30:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 151 op/s
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.557206668 +0000 UTC m=+0.028467437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:10 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.675348085 +0000 UTC m=+0.146608854 container init ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.682632851 +0000 UTC m=+0.153893560 container start ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.686278134 +0000 UTC m=+0.157538853 container attach ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:10 compute-0 funny_northcutt[350419]: 167 167
Feb 28 10:30:10 compute-0 systemd[1]: libpod-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope: Deactivated successfully.
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.689241338 +0000 UTC m=+0.160502047 container died ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d1fb276c7ac69b4f3b08b90adca009a3193d304eddb492b5b956f0e7c5487ca-merged.mount: Deactivated successfully.
Feb 28 10:30:10 compute-0 podman[350401]: 2026-02-28 10:30:10.729935731 +0000 UTC m=+0.201196410 container remove ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:30:10 compute-0 systemd[1]: libpod-conmon-ecbebcd51643caa4f6656596c6f5f0d56b5d64602e8ad09ec7c7f9522ed2d63a.scope: Deactivated successfully.
Feb 28 10:30:10 compute-0 podman[350445]: 2026-02-28 10:30:10.878014415 +0000 UTC m=+0.044899872 container create 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:30:10 compute-0 systemd[1]: Started libpod-conmon-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope.
Feb 28 10:30:10 compute-0 podman[350445]: 2026-02-28 10:30:10.858487973 +0000 UTC m=+0.025373480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:10 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:10 compute-0 podman[350445]: 2026-02-28 10:30:10.9929373 +0000 UTC m=+0.159822767 container init 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:30:11 compute-0 podman[350445]: 2026-02-28 10:30:11.001792271 +0000 UTC m=+0.168677748 container start 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:30:11 compute-0 podman[350445]: 2026-02-28 10:30:11.005407593 +0000 UTC m=+0.172293050 container attach 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:30:11 compute-0 zealous_wing[350461]: {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     "0": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "devices": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "/dev/loop3"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             ],
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_name": "ceph_lv0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_size": "21470642176",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "name": "ceph_lv0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "tags": {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_name": "ceph",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.crush_device_class": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.encrypted": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.objectstore": "bluestore",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_id": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.vdo": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.with_tpm": "0"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             },
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "vg_name": "ceph_vg0"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         }
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     ],
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     "1": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "devices": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "/dev/loop4"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             ],
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_name": "ceph_lv1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_size": "21470642176",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "name": "ceph_lv1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "tags": {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_name": "ceph",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.crush_device_class": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.encrypted": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.objectstore": "bluestore",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_id": "1",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.vdo": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.with_tpm": "0"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             },
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "vg_name": "ceph_vg1"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         }
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     ],
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     "2": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "devices": [
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "/dev/loop5"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             ],
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_name": "ceph_lv2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_size": "21470642176",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "name": "ceph_lv2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "tags": {
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.cluster_name": "ceph",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.crush_device_class": "",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.encrypted": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.objectstore": "bluestore",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osd_id": "2",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.vdo": "0",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:                 "ceph.with_tpm": "0"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             },
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "type": "block",
Feb 28 10:30:11 compute-0 zealous_wing[350461]:             "vg_name": "ceph_vg2"
Feb 28 10:30:11 compute-0 zealous_wing[350461]:         }
Feb 28 10:30:11 compute-0 zealous_wing[350461]:     ]
Feb 28 10:30:11 compute-0 zealous_wing[350461]: }
Feb 28 10:30:11 compute-0 systemd[1]: libpod-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope: Deactivated successfully.
Feb 28 10:30:11 compute-0 podman[350445]: 2026-02-28 10:30:11.311115323 +0000 UTC m=+0.478000770 container died 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f603fe79234b8baed627fb08e8be842ecbc8cb126214966ce42539a235d08503-merged.mount: Deactivated successfully.
Feb 28 10:30:11 compute-0 podman[350445]: 2026-02-28 10:30:11.491481913 +0000 UTC m=+0.658367360 container remove 7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:30:11 compute-0 systemd[1]: libpod-conmon-7a6736a192d7c661eefea3c3f478843f1324f2e0d85deda2c675c48918c4a997.scope: Deactivated successfully.
Feb 28 10:30:11 compute-0 sudo[350363]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:11 compute-0 sudo[350483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:30:11 compute-0 sudo[350483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:11 compute-0 sudo[350483]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.712 243456 DEBUG nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Received event network-vif-deleted-0e852cef-a459-43ce-ad9b-4f379e129ace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.713 243456 INFO nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Neutron deleted interface 0e852cef-a459-43ce-ad9b-4f379e129ace; detaching it from the instance and deleting it from the info cache
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.713 243456 DEBUG nova.network.neutron [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:11 compute-0 sudo[350508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:30:11 compute-0 sudo[350508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.732 243456 DEBUG nova.network.neutron [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.764 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updated VIF entry in instance network info cache for port 0e852cef-a459-43ce-ad9b-4f379e129ace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:30:11 compute-0 nova_compute[243452]: 2026-02-28 10:30:11.765 243456 DEBUG nova.network.neutron [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Updating instance_info_cache with network_info: [{"id": "0e852cef-a459-43ce-ad9b-4f379e129ace", "address": "fa:16:3e:64:2e:7f", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e852cef-a4", "ovs_interfaceid": "0e852cef-a459-43ce-ad9b-4f379e129ace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.062430636 +0000 UTC m=+0.069292474 container create 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.032723265 +0000 UTC m=+0.039585173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:12 compute-0 systemd[1]: Started libpod-conmon-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope.
Feb 28 10:30:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.228 243456 INFO nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] Took 1.85 seconds to deallocate network for instance.
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.232627387 +0000 UTC m=+0.239489285 container init 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.238 243456 DEBUG nova.compute.manager [req-f57b3178-3239-4004-b354-61ea18513485 req-f05cf79f-c5b5-40e6-b4d4-57453ff317c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 0292c690-55db-421d-a004-21a0acd72961] Detach interface failed, port_id=0e852cef-a459-43ce-ad9b-4f379e129ace, reason: Instance 0292c690-55db-421d-a004-21a0acd72961 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.240170541 +0000 UTC m=+0.247032379 container start 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:30:12 compute-0 determined_satoshi[350563]: 167 167
Feb 28 10:30:12 compute-0 systemd[1]: libpod-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope: Deactivated successfully.
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.24931111 +0000 UTC m=+0.256172948 container attach 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.249864406 +0000 UTC m=+0.256726254 container died 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9288ac3a71ba79d832c9d7be1da9178ef41a90896fa4a5452cfa6a9aa8b079a4-merged.mount: Deactivated successfully.
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.356 243456 DEBUG oslo_concurrency.lockutils [req-637aa530-5530-4253-a469-fed987b7de67 req-26276085-102a-4a89-9b16-ec38a4027287 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-0292c690-55db-421d-a004-21a0acd72961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:30:12 compute-0 podman[350546]: 2026-02-28 10:30:12.36120678 +0000 UTC m=+0.368068618 container remove 9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_satoshi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:30:12 compute-0 systemd[1]: libpod-conmon-9f40741f7a68b4a6009216d0cb847cca8106d29b32fc1558e6a0d3645118a2bd.scope: Deactivated successfully.
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.485 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.486 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:12 compute-0 podman[350586]: 2026-02-28 10:30:12.560162196 +0000 UTC m=+0.072725201 container create dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:30:12 compute-0 nova_compute[243452]: 2026-02-28 10:30:12.559 243456 DEBUG oslo_concurrency.processutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:30:12 compute-0 ceph-mon[76304]: pgmap v1972: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 151 op/s
Feb 28 10:30:12 compute-0 systemd[1]: Started libpod-conmon-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope.
Feb 28 10:30:12 compute-0 podman[350586]: 2026-02-28 10:30:12.527875931 +0000 UTC m=+0.040439046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:30:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:30:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 320 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 146 op/s
Feb 28 10:30:12 compute-0 podman[350586]: 2026-02-28 10:30:12.668861375 +0000 UTC m=+0.181424380 container init dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:30:12 compute-0 podman[350586]: 2026-02-28 10:30:12.67609093 +0000 UTC m=+0.188653935 container start dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:30:12 compute-0 podman[350586]: 2026-02-28 10:30:12.67997665 +0000 UTC m=+0.192539675 container attach dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:30:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419439717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.134 243456 DEBUG oslo_concurrency.processutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.141 243456 DEBUG nova.compute.provider_tree [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.192 243456 DEBUG nova.scheduler.client.report [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:30:13 compute-0 lvm[350704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:30:13 compute-0 lvm[350704]: VG ceph_vg1 finished
Feb 28 10:30:13 compute-0 lvm[350701]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:30:13 compute-0 lvm[350701]: VG ceph_vg0 finished
Feb 28 10:30:13 compute-0 lvm[350706]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:30:13 compute-0 lvm[350706]: VG ceph_vg2 finished
Feb 28 10:30:13 compute-0 peaceful_morse[350604]: {}
Feb 28 10:30:13 compute-0 systemd[1]: libpod-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Deactivated successfully.
Feb 28 10:30:13 compute-0 systemd[1]: libpod-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Consumed 1.277s CPU time.
Feb 28 10:30:13 compute-0 podman[350586]: 2026-02-28 10:30:13.51239627 +0000 UTC m=+1.024959295 container died dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3970c8c42fd3dc5939003a0d09cb1762211ea50e578535a5ea1f256d2b5d2c-merged.mount: Deactivated successfully.
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.542 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:13 compute-0 podman[350586]: 2026-02-28 10:30:13.562436747 +0000 UTC m=+1.074999802 container remove dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:30:13 compute-0 systemd[1]: libpod-conmon-dd5c3eed717951fea73edea2060c360a28b3c1bedabef348710998fa1e62d7d4.scope: Deactivated successfully.
Feb 28 10:30:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1419439717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:13 compute-0 sudo[350508]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.632 243456 INFO nova.scheduler.client.report [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Deleted allocations for instance 0292c690-55db-421d-a004-21a0acd72961
Feb 28 10:30:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:30:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:13 compute-0 sudo[350719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:30:13 compute-0 sudo[350719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:30:13 compute-0 sudo[350719]: pam_unix(sudo:session): session closed for user root
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb 28 10:30:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb 28 10:30:13 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb 28 10:30:13 compute-0 nova_compute[243452]: 2026-02-28 10:30:13.877 243456 DEBUG oslo_concurrency.lockutils [None req-9479cd56-8c8b-4525-9ac4-f96fdb454113 f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "0292c690-55db-421d-a004-21a0acd72961" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:14 compute-0 nova_compute[243452]: 2026-02-28 10:30:14.220 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:14 compute-0 ceph-mon[76304]: pgmap v1973: 305 pgs: 305 active+clean; 320 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 146 op/s
Feb 28 10:30:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:30:14 compute-0 ceph-mon[76304]: osdmap e282: 3 total, 3 up, 3 in
Feb 28 10:30:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 75 op/s
Feb 28 10:30:14 compute-0 nova_compute[243452]: 2026-02-28 10:30:14.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb 28 10:30:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb 28 10:30:14 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb 28 10:30:15 compute-0 ceph-mon[76304]: pgmap v1975: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 75 op/s
Feb 28 10:30:15 compute-0 ceph-mon[76304]: osdmap e283: 3 total, 3 up, 3 in
Feb 28 10:30:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 289 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.0 KiB/s wr, 80 op/s
Feb 28 10:30:17 compute-0 ceph-mon[76304]: pgmap v1977: 305 pgs: 305 active+clean; 289 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.0 KiB/s wr, 80 op/s
Feb 28 10:30:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 276 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Feb 28 10:30:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.237 243456 DEBUG nova.compute.manager [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.237 243456 DEBUG nova.compute.manager [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing instance network info cache due to event network-changed-89352e9c-3fec-48bc-a264-6de98ec910c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.238 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.239 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.239 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Refreshing network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.364 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.365 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.366 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.368 243456 INFO nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Terminating instance
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.369 243456 DEBUG nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:30:19 compute-0 kernel: tap89352e9c-3f (unregistering): left promiscuous mode
Feb 28 10:30:19 compute-0 NetworkManager[49805]: <info>  [1772274619.4376] device (tap89352e9c-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:30:19 compute-0 ovn_controller[146846]: 2026-02-28T10:30:19Z|01234|binding|INFO|Releasing lport 89352e9c-3fec-48bc-a264-6de98ec910c3 from this chassis (sb_readonly=0)
Feb 28 10:30:19 compute-0 ovn_controller[146846]: 2026-02-28T10:30:19Z|01235|binding|INFO|Setting lport 89352e9c-3fec-48bc-a264-6de98ec910c3 down in Southbound
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 ovn_controller[146846]: 2026-02-28T10:30:19Z|01236|binding|INFO|Removing iface tap89352e9c-3f ovn-installed in OVS
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b8:e7 10.100.0.7'], port_security=['fa:16:3e:ae:b8:e7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '360effe7-8380-410d-a5b8-59c28fa4a75a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-143add3f-ffeb-40fb-88e5-0af28b700615', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '344dd946e14146ab93c01183964c71b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eee67bf9-5a46-4130-874d-acfd0939ad31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c3d893a-030b-445a-98e5-9b2e4e6c6037, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=89352e9c-3fec-48bc-a264-6de98ec910c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.472 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 89352e9c-3fec-48bc-a264-6de98ec910c3 in datapath 143add3f-ffeb-40fb-88e5-0af28b700615 unbound from our chassis
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.474 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 143add3f-ffeb-40fb-88e5-0af28b700615, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de3274e8-7ff2-4cdc-a2c3-1c6869f1444f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 namespace which is not needed anymore
Feb 28 10:30:19 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Deactivated successfully.
Feb 28 10:30:19 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d00000079.scope: Consumed 15.753s CPU time.
Feb 28 10:30:19 compute-0 systemd-machined[209480]: Machine qemu-154-instance-00000079 terminated.
Feb 28 10:30:19 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : haproxy version is 2.8.14-c23fe91
Feb 28 10:30:19 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [NOTICE]   (348261) : path to executable is /usr/sbin/haproxy
Feb 28 10:30:19 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [WARNING]  (348261) : Exiting Master process...
Feb 28 10:30:19 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [ALERT]    (348261) : Current worker (348263) exited with code 143 (Terminated)
Feb 28 10:30:19 compute-0 neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615[348257]: [WARNING]  (348261) : All workers exited. Exiting... (0)
Feb 28 10:30:19 compute-0 systemd[1]: libpod-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope: Deactivated successfully.
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.605 243456 INFO nova.virt.libvirt.driver [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Instance destroyed successfully.
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.605 243456 DEBUG nova.objects.instance [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lazy-loading 'resources' on Instance uuid 360effe7-8380-410d-a5b8-59c28fa4a75a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:30:19 compute-0 podman[350769]: 2026-02-28 10:30:19.607315851 +0000 UTC m=+0.044179403 container died 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-939fb4b0dac1cbd7c3f3f65d2ef77a6441361f4baacd11851b6ce732b973b040-merged.mount: Deactivated successfully.
Feb 28 10:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383-userdata-shm.mount: Deactivated successfully.
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.637 243456 DEBUG nova.virt.libvirt.vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:28:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1297881492',display_name='tempest-TestSnapshotPattern-server-1297881492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1297881492',id=121,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUJ4MbCHcIoNICw2tL1X5ZefRw67v1vL6nTq2IACMxv94r+pZLfX9PQ5uIyPper8mbOQnjB2mZqp0GfnlVkrAkMMtWu4Dla7/PQOjpuGeOsYTKM5E2GBss1gtEw6oeaiA==',key_name='tempest-TestSnapshotPattern-538459727',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='344dd946e14146ab93c01183964c71b3',ramdisk_id='',reservation_id='r-yvdubwju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-2121060882',owner_user_name='tempest-TestSnapshotPattern-2121060882-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:29:18Z,user_data=None,user_id='f54beab12fce4ee8adf80742bf33b916',uuid=360effe7-8380-410d-a5b8-59c28fa4a75a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.638 243456 DEBUG nova.network.os_vif_util [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converting VIF {"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.639 243456 DEBUG nova.network.os_vif_util [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.639 243456 DEBUG os_vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.643 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89352e9c-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.644 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.649 243456 INFO os_vif [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b8:e7,bridge_name='br-int',has_traffic_filtering=True,id=89352e9c-3fec-48bc-a264-6de98ec910c3,network=Network(143add3f-ffeb-40fb-88e5-0af28b700615),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89352e9c-3f')
Feb 28 10:30:19 compute-0 podman[350769]: 2026-02-28 10:30:19.668623138 +0000 UTC m=+0.105486720 container cleanup 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:30:19 compute-0 systemd[1]: libpod-conmon-12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383.scope: Deactivated successfully.
Feb 28 10:30:19 compute-0 podman[350823]: 2026-02-28 10:30:19.749654313 +0000 UTC m=+0.053698442 container remove 12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.755 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c286f6df-2e8c-4fe5-ad9c-690632d5c304]: (4, ('Sat Feb 28 10:30:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 (12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383)\n12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383\nSat Feb 28 10:30:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 (12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383)\n12005a320c4fd58e341c0eb4a2505f318e105eacd41e55361db414728de84383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.757 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b5151e6a-9589-40e3-bca3-95e45d66e979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.758 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap143add3f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.759 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 kernel: tap143add3f-f0: left promiscuous mode
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24d252a3-5f31-44ab-a49c-89f32ae30a94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.789 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f8479928-5307-407c-8633-7c24e6adb1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.791 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19b9a24e-16f0-413d-8f95-03a7cd129db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8b26de-a131-4bd3-b793-baf321fec2ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603760, 'reachable_time': 27278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350841, 'error': None, 'target': 'ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d143add3f\x2dffeb\x2d40fb\x2d88e5\x2d0af28b700615.mount: Deactivated successfully.
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.812 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-143add3f-ffeb-40fb-88e5-0af28b700615 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:30:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:19.812 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1d780805-b0e7-4e5d-bcc1-ec408d61938c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:30:19 compute-0 ceph-mon[76304]: pgmap v1978: 305 pgs: 305 active+clean; 276 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.930 243456 INFO nova.virt.libvirt.driver [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deleting instance files /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a_del
Feb 28 10:30:19 compute-0 nova_compute[243452]: 2026-02-28 10:30:19.931 243456 INFO nova.virt.libvirt.driver [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deletion of /var/lib/nova/instances/360effe7-8380-410d-a5b8-59c28fa4a75a_del complete
Feb 28 10:30:20 compute-0 nova_compute[243452]: 2026-02-28 10:30:20.129 243456 INFO nova.compute.manager [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:30:20 compute-0 nova_compute[243452]: 2026-02-28 10:30:20.131 243456 DEBUG oslo.service.loopingcall [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:30:20 compute-0 nova_compute[243452]: 2026-02-28 10:30:20.131 243456 DEBUG nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:30:20 compute-0 nova_compute[243452]: 2026-02-28 10:30:20.132 243456 DEBUG nova.network.neutron [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:30:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 209 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Feb 28 10:30:21 compute-0 ceph-mon[76304]: pgmap v1979: 305 pgs: 305 active+clean; 209 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:22.462 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:30:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:22.463 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.505 243456 DEBUG nova.network.neutron [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.579 243456 INFO nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Took 2.45 seconds to deallocate network for instance.
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.588 243456 DEBUG nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Received event network-vif-deleted-89352e9c-3fec-48bc-a264-6de98ec910c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.588 243456 INFO nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Neutron deleted interface 89352e9c-3fec-48bc-a264-6de98ec910c3; detaching it from the instance and deleting it from the info cache
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.589 243456 DEBUG nova.network.neutron [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.611 243456 DEBUG nova.compute.manager [req-15f52590-43e8-40d3-8cf9-d23f9cac5702 req-ba70bc49-980b-443b-8cca-0f0f16372cb3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Detach interface failed, port_id=89352e9c-3fec-48bc-a264-6de98ec910c3, reason: Instance 360effe7-8380-410d-a5b8-59c28fa4a75a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.621 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updated VIF entry in instance network info cache for port 89352e9c-3fec-48bc-a264-6de98ec910c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.621 243456 DEBUG nova.network.neutron [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Updating instance_info_cache with network_info: [{"id": "89352e9c-3fec-48bc-a264-6de98ec910c3", "address": "fa:16:3e:ae:b8:e7", "network": {"id": "143add3f-ffeb-40fb-88e5-0af28b700615", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1308219422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "344dd946e14146ab93c01183964c71b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89352e9c-3f", "ovs_interfaceid": "89352e9c-3fec-48bc-a264-6de98ec910c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:30:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 171 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.2 KiB/s wr, 68 op/s
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.659 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.659 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.683 243456 DEBUG oslo_concurrency.lockutils [req-8fc083ce-13a4-4c48-822a-a9542fdc87ae req-0ce0a081-5f85-4a8d-b266-bf2fc7bb167d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-360effe7-8380-410d-a5b8-59c28fa4a75a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:30:22 compute-0 nova_compute[243452]: 2026-02-28 10:30:22.721 243456 DEBUG oslo_concurrency.processutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:30:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:30:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614214594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.295 243456 DEBUG oslo_concurrency.processutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.304 243456 DEBUG nova.compute.provider_tree [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.336 243456 DEBUG nova.scheduler.client.report [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.376 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.430 243456 INFO nova.scheduler.client.report [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Deleted allocations for instance 360effe7-8380-410d-a5b8-59c28fa4a75a
Feb 28 10:30:23 compute-0 nova_compute[243452]: 2026-02-28 10:30:23.526 243456 DEBUG oslo_concurrency.lockutils [None req-b370fb62-492c-40e7-8c7b-69de8aa4ddff f54beab12fce4ee8adf80742bf33b916 344dd946e14146ab93c01183964c71b3 - - default default] Lock "360effe7-8380-410d-a5b8-59c28fa4a75a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb 28 10:30:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb 28 10:30:23 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb 28 10:30:23 compute-0 ceph-mon[76304]: pgmap v1980: 305 pgs: 305 active+clean; 171 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.2 KiB/s wr, 68 op/s
Feb 28 10:30:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3614214594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:23 compute-0 ceph-mon[76304]: osdmap e284: 3 total, 3 up, 3 in
Feb 28 10:30:24 compute-0 nova_compute[243452]: 2026-02-28 10:30:24.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:24 compute-0 nova_compute[243452]: 2026-02-28 10:30:24.618 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274609.6162598, 0292c690-55db-421d-a004-21a0acd72961 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:30:24 compute-0 nova_compute[243452]: 2026-02-28 10:30:24.618 243456 INFO nova.compute.manager [-] [instance: 0292c690-55db-421d-a004-21a0acd72961] VM Stopped (Lifecycle Event)
Feb 28 10:30:24 compute-0 nova_compute[243452]: 2026-02-28 10:30:24.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:24 compute-0 nova_compute[243452]: 2026-02-28 10:30:24.651 243456 DEBUG nova.compute.manager [None req-40fe18ae-f308-47f8-9046-c48d25a3a97e - - - - - -] [instance: 0292c690-55db-421d-a004-21a0acd72961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:30:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 61 op/s
Feb 28 10:30:25 compute-0 ceph-mon[76304]: pgmap v1982: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 61 op/s
Feb 28 10:30:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Feb 28 10:30:27 compute-0 ceph-mon[76304]: pgmap v1983: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Feb 28 10:30:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Feb 28 10:30:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.815533) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628815576, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 480, "num_deletes": 259, "total_data_size": 403177, "memory_usage": 413552, "flush_reason": "Manual Compaction"}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628820307, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 399628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41952, "largest_seqno": 42431, "table_properties": {"data_size": 396807, "index_size": 793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6833, "raw_average_key_size": 18, "raw_value_size": 391023, "raw_average_value_size": 1086, "num_data_blocks": 35, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274609, "oldest_key_time": 1772274609, "file_creation_time": 1772274628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4833 microseconds, and 1683 cpu microseconds.
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.820362) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 399628 bytes OK
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.820387) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822034) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822055) EVENT_LOG_v1 {"time_micros": 1772274628822049, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 400266, prev total WAL file size 400266, number of live WAL files 2.
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822510) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353032' seq:72057594037927935, type:22 .. '6C6F676D0031373535' seq:0, type:0; will stop at (end)
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(390KB)], [92(10MB)]
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628822550, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11861552, "oldest_snapshot_seqno": -1}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6688 keys, 11729649 bytes, temperature: kUnknown
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628880880, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11729649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11681109, "index_size": 30700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171907, "raw_average_key_size": 25, "raw_value_size": 11557688, "raw_average_value_size": 1728, "num_data_blocks": 1221, "num_entries": 6688, "num_filter_entries": 6688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274628, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.881151) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11729649 bytes
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.882473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 200.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.9 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(59.0) write-amplify(29.4) OK, records in: 7221, records dropped: 533 output_compression: NoCompression
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.882490) EVENT_LOG_v1 {"time_micros": 1772274628882481, "job": 54, "event": "compaction_finished", "compaction_time_micros": 58405, "compaction_time_cpu_micros": 22330, "output_level": 6, "num_output_files": 1, "total_output_size": 11729649, "num_input_records": 7221, "num_output_records": 6688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628882642, "job": 54, "event": "table_file_deletion", "file_number": 94}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274628883903, "job": 54, "event": "table_file_deletion", "file_number": 92}
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.822458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.883978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.883998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:28 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:28.884004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:30:29
Feb 28 10:30:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:30:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:30:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images', '.rgw.root', 'default.rgw.log']
Feb 28 10:30:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:30:29 compute-0 podman[350866]: 2026-02-28 10:30:29.143264518 +0000 UTC m=+0.075234763 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:30:29 compute-0 podman[350865]: 2026-02-28 10:30:29.15887337 +0000 UTC m=+0.097474502 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:30:29 compute-0 nova_compute[243452]: 2026-02-28 10:30:29.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:29 compute-0 nova_compute[243452]: 2026-02-28 10:30:29.648 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:29 compute-0 ceph-mon[76304]: pgmap v1984: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 35 op/s
Feb 28 10:30:30 compute-0 nova_compute[243452]: 2026-02-28 10:30:30.178 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 22 op/s
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:30:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:30:30 compute-0 nova_compute[243452]: 2026-02-28 10:30:30.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:30 compute-0 nova_compute[243452]: 2026-02-28 10:30:30.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:31 compute-0 nova_compute[243452]: 2026-02-28 10:30:31.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:31 compute-0 ceph-mon[76304]: pgmap v1985: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 22 op/s
Feb 28 10:30:32 compute-0 nova_compute[243452]: 2026-02-28 10:30:32.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:32.465 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:30:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Feb 28 10:30:33 compute-0 nova_compute[243452]: 2026-02-28 10:30:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:33 compute-0 ceph-mon[76304]: pgmap v1986: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Feb 28 10:30:34 compute-0 nova_compute[243452]: 2026-02-28 10:30:34.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:34 compute-0 nova_compute[243452]: 2026-02-28 10:30:34.604 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274619.6021905, 360effe7-8380-410d-a5b8-59c28fa4a75a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:30:34 compute-0 nova_compute[243452]: 2026-02-28 10:30:34.604 243456 INFO nova.compute.manager [-] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] VM Stopped (Lifecycle Event)
Feb 28 10:30:34 compute-0 nova_compute[243452]: 2026-02-28 10:30:34.623 243456 DEBUG nova.compute.manager [None req-feef7267-1560-474c-a095-9c4a1f22b6a3 - - - - - -] [instance: 360effe7-8380-410d-a5b8-59c28fa4a75a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:30:34 compute-0 nova_compute[243452]: 2026-02-28 10:30:34.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 10:30:35 compute-0 nova_compute[243452]: 2026-02-28 10:30:35.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:35 compute-0 ceph-mon[76304]: pgmap v1987: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 10:30:36 compute-0 sshd-session[350912]: Received disconnect from 103.67.78.202 port 52704:11: Bye Bye [preauth]
Feb 28 10:30:36 compute-0 sshd-session[350912]: Disconnected from authenticating user root 103.67.78.202 port 52704 [preauth]
Feb 28 10:30:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:37 compute-0 sshd-session[350914]: Received disconnect from 103.67.78.202 port 52716:11: Bye Bye [preauth]
Feb 28 10:30:37 compute-0 sshd-session[350914]: Disconnected from authenticating user root 103.67.78.202 port 52716 [preauth]
Feb 28 10:30:37 compute-0 ceph-mon[76304]: pgmap v1988: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:38 compute-0 nova_compute[243452]: 2026-02-28 10:30:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:38 compute-0 nova_compute[243452]: 2026-02-28 10:30:38.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:30:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.353 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:30:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2622985130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:39 compute-0 nova_compute[243452]: 2026-02-28 10:30:39.906 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:30:39 compute-0 ceph-mon[76304]: pgmap v1989: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2622985130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.935297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639935346, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 341, "num_deletes": 251, "total_data_size": 183183, "memory_usage": 191000, "flush_reason": "Manual Compaction"}
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639939745, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 181724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42432, "largest_seqno": 42772, "table_properties": {"data_size": 179574, "index_size": 314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5358, "raw_average_key_size": 18, "raw_value_size": 175398, "raw_average_value_size": 604, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274628, "oldest_key_time": 1772274628, "file_creation_time": 1772274639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4523 microseconds, and 1698 cpu microseconds.
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.939817) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 181724 bytes OK
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.939843) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942336) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942354) EVENT_LOG_v1 {"time_micros": 1772274639942348, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 180857, prev total WAL file size 180857, number of live WAL files 2.
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.945052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(177KB)], [95(11MB)]
Feb 28 10:30:39 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274639945209, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11911373, "oldest_snapshot_seqno": -1}
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6469 keys, 10298370 bytes, temperature: kUnknown
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640018666, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10298370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10252641, "index_size": 28467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 168046, "raw_average_key_size": 25, "raw_value_size": 10134329, "raw_average_value_size": 1566, "num_data_blocks": 1118, "num_entries": 6469, "num_filter_entries": 6469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.018998) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10298370 bytes
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.021110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.9 rd, 140.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(122.2) write-amplify(56.7) OK, records in: 6978, records dropped: 509 output_compression: NoCompression
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.021143) EVENT_LOG_v1 {"time_micros": 1772274640021127, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73550, "compaction_time_cpu_micros": 40144, "output_level": 6, "num_output_files": 1, "total_output_size": 10298370, "num_input_records": 6978, "num_output_records": 6469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640021334, "job": 56, "event": "table_file_deletion", "file_number": 97}
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274640023402, "job": 56, "event": "table_file_deletion", "file_number": 95}
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:39.942619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:30:40.023537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.128 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.131 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3694MB free_disk=59.987464110367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.132 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.132 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.208 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.209 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.232 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:30:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:30:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664543732' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.846 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.854 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.877 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.912 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:30:40 compute-0 nova_compute[243452]: 2026-02-28 10:30:40.912 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:40 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3664543732' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3621654362699331e-05 of space, bias 1.0, pg target 0.004086496308809799 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938175375048455 of space, bias 1.0, pg target 0.7481452612514536 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:30:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:30:41 compute-0 nova_compute[243452]: 2026-02-28 10:30:41.913 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:41 compute-0 nova_compute[243452]: 2026-02-28 10:30:41.914 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:30:41 compute-0 nova_compute[243452]: 2026-02-28 10:30:41.940 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:30:41 compute-0 ceph-mon[76304]: pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:43 compute-0 ceph-mon[76304]: pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:44 compute-0 nova_compute[243452]: 2026-02-28 10:30:44.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:44 compute-0 nova_compute[243452]: 2026-02-28 10:30:44.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:30:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:30:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:30:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:30:45 compute-0 ceph-mon[76304]: pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:30:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/865405397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:30:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:47 compute-0 ceph-mon[76304]: pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:49 compute-0 nova_compute[243452]: 2026-02-28 10:30:49.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:49 compute-0 nova_compute[243452]: 2026-02-28 10:30:49.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:50 compute-0 ceph-mon[76304]: pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:52 compute-0 ceph-mon[76304]: pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:54 compute-0 nova_compute[243452]: 2026-02-28 10:30:54.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:54 compute-0 ceph-mon[76304]: pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:54 compute-0 nova_compute[243452]: 2026-02-28 10:30:54.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.340 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.350 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.350 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Removable base files: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 28 10:30:56 compute-0 nova_compute[243452]: 2026-02-28 10:30:56.352 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 28 10:30:56 compute-0 ceph-mon[76304]: pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:30:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:30:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:30:57.871 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:30:58 compute-0 ceph-mon[76304]: pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:30:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:30:59 compute-0 nova_compute[243452]: 2026-02-28 10:30:59.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:30:59 compute-0 nova_compute[243452]: 2026-02-28 10:30:59.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:00 compute-0 podman[350962]: 2026-02-28 10:31:00.144176491 +0000 UTC m=+0.068604244 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:31:00 compute-0 podman[350961]: 2026-02-28 10:31:00.189759372 +0000 UTC m=+0.121650847 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:00 compute-0 ceph-mon[76304]: pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:02 compute-0 ceph-mon[76304]: pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:04 compute-0 nova_compute[243452]: 2026-02-28 10:31:04.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:04 compute-0 ceph-mon[76304]: pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:04 compute-0 nova_compute[243452]: 2026-02-28 10:31:04.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:06 compute-0 ovn_controller[146846]: 2026-02-28T10:31:06Z|01237|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 28 10:31:06 compute-0 ceph-mon[76304]: pgmap v2002: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:08 compute-0 ceph-mon[76304]: pgmap v2003: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:09 compute-0 nova_compute[243452]: 2026-02-28 10:31:09.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:09 compute-0 nova_compute[243452]: 2026-02-28 10:31:09.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:10 compute-0 ceph-mon[76304]: pgmap v2004: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:12 compute-0 ceph-mon[76304]: pgmap v2005: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:13 compute-0 ceph-mon[76304]: pgmap v2006: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:13 compute-0 sudo[351004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:31:13 compute-0 sudo[351004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:13 compute-0 sudo[351004]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:13 compute-0 sudo[351029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:31:13 compute-0 sudo[351029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:14 compute-0 nova_compute[243452]: 2026-02-28 10:31:14.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:14 compute-0 sudo[351029]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:31:14 compute-0 sudo[351085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:31:14 compute-0 sudo[351085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:14 compute-0 sudo[351085]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:14 compute-0 sudo[351110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:31:14 compute-0 sudo[351110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:31:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:31:14 compute-0 nova_compute[243452]: 2026-02-28 10:31:14.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.824732588 +0000 UTC m=+0.072237238 container create a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:31:14 compute-0 systemd[1]: Started libpod-conmon-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope.
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.787481813 +0000 UTC m=+0.034986472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.919524024 +0000 UTC m=+0.167028693 container init a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.93137985 +0000 UTC m=+0.178884489 container start a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.935488996 +0000 UTC m=+0.182993665 container attach a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:31:14 compute-0 fervent_morse[351163]: 167 167
Feb 28 10:31:14 compute-0 systemd[1]: libpod-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope: Deactivated successfully.
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.940300282 +0000 UTC m=+0.187804901 container died a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-07571faa3d10730a2a8e126624fbae3cdca599b41b16882e038f07f3f843939b-merged.mount: Deactivated successfully.
Feb 28 10:31:14 compute-0 podman[351147]: 2026-02-28 10:31:14.981411677 +0000 UTC m=+0.228916296 container remove a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_morse, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:31:15 compute-0 systemd[1]: libpod-conmon-a37262dff4618b07e5b0844a044b42708c2593b84beffbb7b5ad3493eda4761b.scope: Deactivated successfully.
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.153663866 +0000 UTC m=+0.039279793 container create 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:31:15 compute-0 systemd[1]: Started libpod-conmon-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope.
Feb 28 10:31:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.13757379 +0000 UTC m=+0.023189737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.263807206 +0000 UTC m=+0.149423163 container init 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.269676763 +0000 UTC m=+0.155292690 container start 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.273192472 +0000 UTC m=+0.158808439 container attach 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:31:15 compute-0 ceph-mon[76304]: pgmap v2007: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:15 compute-0 youthful_elion[351206]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:31:15 compute-0 youthful_elion[351206]: --> All data devices are unavailable
Feb 28 10:31:15 compute-0 systemd[1]: libpod-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope: Deactivated successfully.
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.792538843 +0000 UTC m=+0.678154790 container died 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4443879d69994eb46f087578065393b985a26985b579a889be6b821cbdd91749-merged.mount: Deactivated successfully.
Feb 28 10:31:15 compute-0 podman[351189]: 2026-02-28 10:31:15.840641846 +0000 UTC m=+0.726257773 container remove 083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:31:15 compute-0 systemd[1]: libpod-conmon-083bcae3d393ad26a90657897a08d01b378a2664712800b1054eab7ad09a9a70.scope: Deactivated successfully.
Feb 28 10:31:15 compute-0 sudo[351110]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:15 compute-0 sudo[351239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:31:15 compute-0 sudo[351239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:15 compute-0 sudo[351239]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:16 compute-0 sudo[351264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:31:16 compute-0 sudo[351264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.324149522 +0000 UTC m=+0.054729621 container create 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:31:16 compute-0 systemd[1]: Started libpod-conmon-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope.
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.302212901 +0000 UTC m=+0.032793010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.417235109 +0000 UTC m=+0.147815228 container init 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.427823619 +0000 UTC m=+0.158403688 container start 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.431520653 +0000 UTC m=+0.162100822 container attach 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:31:16 compute-0 stupefied_bassi[351316]: 167 167
Feb 28 10:31:16 compute-0 systemd[1]: libpod-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope: Deactivated successfully.
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.435038393 +0000 UTC m=+0.165618462 container died 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-996a831a32f525b1937bd83053ab6b9b4622b4e9db103d61227d7ae87261b26f-merged.mount: Deactivated successfully.
Feb 28 10:31:16 compute-0 podman[351300]: 2026-02-28 10:31:16.476414705 +0000 UTC m=+0.206994814 container remove 6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bassi, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:31:16 compute-0 systemd[1]: libpod-conmon-6f3b4c0073410eefd02a1ed95993c58c13ec4865c5e39cfb43efdb8ccdbab6c7.scope: Deactivated successfully.
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.596009633 +0000 UTC m=+0.030045492 container create 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:31:16 compute-0 systemd[1]: Started libpod-conmon-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope.
Feb 28 10:31:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.676240246 +0000 UTC m=+0.110276125 container init 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.582870101 +0000 UTC m=+0.016905980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.683520462 +0000 UTC m=+0.117556321 container start 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:31:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.686876277 +0000 UTC m=+0.120912156 container attach 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:31:16 compute-0 silly_bell[351357]: {
Feb 28 10:31:16 compute-0 silly_bell[351357]:     "0": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:         {
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "devices": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "/dev/loop3"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             ],
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_name": "ceph_lv0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_size": "21470642176",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "name": "ceph_lv0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "tags": {
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_name": "ceph",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.crush_device_class": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.encrypted": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.objectstore": "bluestore",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_id": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.vdo": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.with_tpm": "0"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             },
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "vg_name": "ceph_vg0"
Feb 28 10:31:16 compute-0 silly_bell[351357]:         }
Feb 28 10:31:16 compute-0 silly_bell[351357]:     ],
Feb 28 10:31:16 compute-0 silly_bell[351357]:     "1": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:         {
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "devices": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "/dev/loop4"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             ],
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_name": "ceph_lv1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_size": "21470642176",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "name": "ceph_lv1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "tags": {
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_name": "ceph",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.crush_device_class": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.encrypted": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.objectstore": "bluestore",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_id": "1",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.vdo": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.with_tpm": "0"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             },
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "vg_name": "ceph_vg1"
Feb 28 10:31:16 compute-0 silly_bell[351357]:         }
Feb 28 10:31:16 compute-0 silly_bell[351357]:     ],
Feb 28 10:31:16 compute-0 silly_bell[351357]:     "2": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:         {
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "devices": [
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "/dev/loop5"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             ],
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_name": "ceph_lv2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_size": "21470642176",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "name": "ceph_lv2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "tags": {
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.cluster_name": "ceph",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.crush_device_class": "",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.encrypted": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.objectstore": "bluestore",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osd_id": "2",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.vdo": "0",
Feb 28 10:31:16 compute-0 silly_bell[351357]:                 "ceph.with_tpm": "0"
Feb 28 10:31:16 compute-0 silly_bell[351357]:             },
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "type": "block",
Feb 28 10:31:16 compute-0 silly_bell[351357]:             "vg_name": "ceph_vg2"
Feb 28 10:31:16 compute-0 silly_bell[351357]:         }
Feb 28 10:31:16 compute-0 silly_bell[351357]:     ]
Feb 28 10:31:16 compute-0 silly_bell[351357]: }
Feb 28 10:31:16 compute-0 systemd[1]: libpod-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope: Deactivated successfully.
Feb 28 10:31:16 compute-0 podman[351340]: 2026-02-28 10:31:16.993550574 +0000 UTC m=+0.427586473 container died 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-08659e60d96c378b5f799639a2be55cc45185d982e10b4412668cc9cba38c156-merged.mount: Deactivated successfully.
Feb 28 10:31:17 compute-0 podman[351340]: 2026-02-28 10:31:17.032873628 +0000 UTC m=+0.466909497 container remove 8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:31:17 compute-0 systemd[1]: libpod-conmon-8ffd87df22f8342f545dca9d543fd1b5efb1e1f91d99c530939e33e9a2c1c5e1.scope: Deactivated successfully.
Feb 28 10:31:17 compute-0 sudo[351264]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:17 compute-0 sudo[351378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:31:17 compute-0 sudo[351378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:17 compute-0 sudo[351378]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:17 compute-0 sudo[351403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:31:17 compute-0 sudo[351403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.517099805 +0000 UTC m=+0.054044682 container create c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:31:17 compute-0 systemd[1]: Started libpod-conmon-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope.
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.486976202 +0000 UTC m=+0.023921139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.600353433 +0000 UTC m=+0.137298361 container init c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.607915598 +0000 UTC m=+0.144860465 container start c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:31:17 compute-0 goofy_matsumoto[351455]: 167 167
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.612426526 +0000 UTC m=+0.149371393 container attach c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:31:17 compute-0 systemd[1]: libpod-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope: Deactivated successfully.
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.613521797 +0000 UTC m=+0.150466634 container died c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a98ea44e33e27c85de37d7e2341383f1a1601debcdde59a8b81f8117057d225-merged.mount: Deactivated successfully.
Feb 28 10:31:17 compute-0 podman[351439]: 2026-02-28 10:31:17.657235095 +0000 UTC m=+0.194179932 container remove c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:31:17 compute-0 systemd[1]: libpod-conmon-c8b78611c712cf8f13fcc887378925c2ee663998045ba26a20ab016eec232d3d.scope: Deactivated successfully.
Feb 28 10:31:17 compute-0 ceph-mon[76304]: pgmap v2008: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:17 compute-0 podman[351479]: 2026-02-28 10:31:17.835165165 +0000 UTC m=+0.058950661 container create c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:31:17 compute-0 systemd[1]: Started libpod-conmon-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope.
Feb 28 10:31:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:17 compute-0 podman[351479]: 2026-02-28 10:31:17.808894481 +0000 UTC m=+0.032680067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:17 compute-0 podman[351479]: 2026-02-28 10:31:17.919099083 +0000 UTC m=+0.142884649 container init c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:31:17 compute-0 podman[351479]: 2026-02-28 10:31:17.925790832 +0000 UTC m=+0.149576358 container start c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:31:17 compute-0 podman[351479]: 2026-02-28 10:31:17.929123907 +0000 UTC m=+0.152909483 container attach c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:31:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:18 compute-0 lvm[351574]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:31:18 compute-0 lvm[351574]: VG ceph_vg1 finished
Feb 28 10:31:18 compute-0 lvm[351573]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:31:18 compute-0 lvm[351573]: VG ceph_vg0 finished
Feb 28 10:31:18 compute-0 lvm[351576]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:31:18 compute-0 lvm[351576]: VG ceph_vg2 finished
Feb 28 10:31:18 compute-0 bold_wilbur[351495]: {}
Feb 28 10:31:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:18 compute-0 systemd[1]: libpod-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Deactivated successfully.
Feb 28 10:31:18 compute-0 systemd[1]: libpod-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Consumed 1.380s CPU time.
Feb 28 10:31:18 compute-0 podman[351479]: 2026-02-28 10:31:18.845445634 +0000 UTC m=+1.069231140 container died c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbc94a44248c867539cab050052b1ff61f2e5d3ed1f66baf67b8f22f29aa49ca-merged.mount: Deactivated successfully.
Feb 28 10:31:18 compute-0 podman[351479]: 2026-02-28 10:31:18.898022752 +0000 UTC m=+1.121808238 container remove c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:31:18 compute-0 systemd[1]: libpod-conmon-c5fa8207cbfa6a297662e1b2e4a54fb951a2bded10994ee4244369a8662e85c1.scope: Deactivated successfully.
Feb 28 10:31:18 compute-0 sudo[351403]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:31:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:31:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:19 compute-0 sudo[351592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:31:19 compute-0 sudo[351592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:31:19 compute-0 sudo[351592]: pam_unix(sudo:session): session closed for user root
Feb 28 10:31:19 compute-0 nova_compute[243452]: 2026-02-28 10:31:19.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:19 compute-0 nova_compute[243452]: 2026-02-28 10:31:19.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:19 compute-0 ceph-mon[76304]: pgmap v2009: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:31:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:21 compute-0 ceph-mon[76304]: pgmap v2010: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:22.526 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:31:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:22.526 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:31:22 compute-0 nova_compute[243452]: 2026-02-28 10:31:22.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:22 compute-0 sshd-session[351618]: Received disconnect from 103.67.78.132 port 58346:11: Bye Bye [preauth]
Feb 28 10:31:22 compute-0 sshd-session[351618]: Disconnected from authenticating user root 103.67.78.132 port 58346 [preauth]
Feb 28 10:31:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:23 compute-0 ceph-mon[76304]: pgmap v2011: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:24 compute-0 sshd-session[351620]: Invalid user sol from 45.148.10.240 port 57340
Feb 28 10:31:24 compute-0 nova_compute[243452]: 2026-02-28 10:31:24.255 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:24 compute-0 sshd-session[351620]: Connection closed by invalid user sol 45.148.10.240 port 57340 [preauth]
Feb 28 10:31:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:24.529 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:24 compute-0 nova_compute[243452]: 2026-02-28 10:31:24.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:25 compute-0 ceph-mon[76304]: pgmap v2012: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:28 compute-0 ceph-mon[76304]: pgmap v2013: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:28 compute-0 nova_compute[243452]: 2026-02-28 10:31:28.352 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:31:29
Feb 28 10:31:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:31:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:31:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', '.mgr']
Feb 28 10:31:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:31:29 compute-0 nova_compute[243452]: 2026-02-28 10:31:29.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:29 compute-0 nova_compute[243452]: 2026-02-28 10:31:29.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:30 compute-0 ceph-mon[76304]: pgmap v2014: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:31:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:31:31 compute-0 podman[351623]: 2026-02-28 10:31:31.124495401 +0000 UTC m=+0.060298580 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:31:31 compute-0 podman[351622]: 2026-02-28 10:31:31.164753141 +0000 UTC m=+0.106604861 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 10:31:32 compute-0 ceph-mon[76304]: pgmap v2015: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:33 compute-0 nova_compute[243452]: 2026-02-28 10:31:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:33 compute-0 nova_compute[243452]: 2026-02-28 10:31:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:33 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 28 10:31:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:34 compute-0 ceph-mon[76304]: pgmap v2016: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:34 compute-0 nova_compute[243452]: 2026-02-28 10:31:34.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:34 compute-0 nova_compute[243452]: 2026-02-28 10:31:34.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:35 compute-0 nova_compute[243452]: 2026-02-28 10:31:35.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:36 compute-0 ceph-mon[76304]: pgmap v2017: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:37 compute-0 nova_compute[243452]: 2026-02-28 10:31:37.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:37 compute-0 nova_compute[243452]: 2026-02-28 10:31:37.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:38 compute-0 ceph-mon[76304]: pgmap v2018: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:38 compute-0 nova_compute[243452]: 2026-02-28 10:31:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:38 compute-0 nova_compute[243452]: 2026-02-28 10:31:38.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:31:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:39 compute-0 nova_compute[243452]: 2026-02-28 10:31:39.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:39 compute-0 nova_compute[243452]: 2026-02-28 10:31:39.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:39 compute-0 nova_compute[243452]: 2026-02-28 10:31:39.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:40 compute-0 ceph-mon[76304]: pgmap v2019: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3621654362699331e-05 of space, bias 1.0, pg target 0.004086496308809799 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938175375048455 of space, bias 1.0, pg target 0.7481452612514536 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:31:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.346 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.380 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:31:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640620870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:41 compute-0 nova_compute[243452]: 2026-02-28 10:31:41.975 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:42 compute-0 ceph-mon[76304]: pgmap v2020: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3640620870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.225 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3786MB free_disk=59.987464110367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.228 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.228 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:31:42 compute-0 nova_compute[243452]: 2026-02-28 10:31:42.481 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:31:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/339860127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.071 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/339860127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.080 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.107 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.110 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.111 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.853 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.854 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:43 compute-0 nova_compute[243452]: 2026-02-28 10:31:43.898 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.062 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.063 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.072 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.073 243456 INFO nova.compute.claims [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:31:44 compute-0 ceph-mon[76304]: pgmap v2021: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.188 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:31:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200127688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.745 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.754 243456 DEBUG nova.compute.provider_tree [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.776 243456 DEBUG nova.scheduler.client.report [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.802 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.803 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.853 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.853 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.877 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:31:44 compute-0 nova_compute[243452]: 2026-02-28 10:31:44.896 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.002 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.004 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.005 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating image(s)
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.038 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.073 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4200127688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.113 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.118 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.157 243456 DEBUG nova.policy [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.208 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.209 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.210 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.210 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.231 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.235 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 995ed68f-0189-47b0-b060-6b738468c986_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:31:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:31:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:31:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.554 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 995ed68f-0189-47b0-b060-6b738468c986_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.631 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.719 243456 DEBUG nova.objects.instance [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.732 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.733 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Ensure instance console log exists: /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.733 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.734 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:45 compute-0 nova_compute[243452]: 2026-02-28 10:31:45.734 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:46 compute-0 ceph-mon[76304]: pgmap v2022: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:31:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:31:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3186015786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:31:46 compute-0 nova_compute[243452]: 2026-02-28 10:31:46.394 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Successfully created port: 09f54242-3301-4e2f-b606-8423be606192 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:31:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 291 KiB/s wr, 3 op/s
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.202 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Successfully updated port: 09f54242-3301-4e2f-b606-8423be606192 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.217 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.310 243456 DEBUG nova.compute.manager [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.311 243456 DEBUG nova.compute.manager [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.311 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:31:47 compute-0 nova_compute[243452]: 2026-02-28 10:31:47.375 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:31:48 compute-0 ceph-mon[76304]: pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 291 KiB/s wr, 3 op/s
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.177 243456 DEBUG nova.network.neutron [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.215 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.215 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance network_info: |[{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.216 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.217 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.222 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start _get_guest_xml network_info=[{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.228 243456 WARNING nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.236 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.237 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.247 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.248 243456 DEBUG nova.virt.libvirt.host [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.248 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.249 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.250 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.251 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.252 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.252 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.253 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.253 243456 DEBUG nova.virt.hardware [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.258 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 171 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 304 KiB/s wr, 15 op/s
Feb 28 10:31:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:31:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253996747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:31:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.850 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.875 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:48 compute-0 nova_compute[243452]: 2026-02-28 10:31:48.881 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1253996747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:31:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973787330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.481 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.484 243456 DEBUG nova.virt.libvirt.vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:31:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.484 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.486 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.488 243456 DEBUG nova.objects.instance [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.509 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <uuid>995ed68f-0189-47b0-b060-6b738468c986</uuid>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <name>instance-0000007b</name>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1357951307</nova:name>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:31:48</nova:creationTime>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <nova:port uuid="09f54242-3301-4e2f-b606-8423be606192">
Feb 28 10:31:49 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="serial">995ed68f-0189-47b0-b060-6b738468c986</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="uuid">995ed68f-0189-47b0-b060-6b738468c986</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/995ed68f-0189-47b0-b060-6b738468c986_disk">
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/995ed68f-0189-47b0-b060-6b738468c986_disk.config">
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:31:49 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:3a:3a:58"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <target dev="tap09f54242-33"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/console.log" append="off"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:31:49 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:31:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:31:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:31:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:31:49 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.510 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Preparing to wait for external event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.512 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.513 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.513 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.515 243456 DEBUG nova.virt.libvirt.vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:31:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.516 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.517 243456 DEBUG nova.network.os_vif_util [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.518 243456 DEBUG os_vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.519 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.520 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.526 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09f54242-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.527 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09f54242-33, col_values=(('external_ids', {'iface-id': '09f54242-3301-4e2f-b606-8423be606192', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:3a:58', 'vm-uuid': '995ed68f-0189-47b0-b060-6b738468c986'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.529 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:49 compute-0 NetworkManager[49805]: <info>  [1772274709.5307] manager: (tap09f54242-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.541 243456 INFO os_vif [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33')
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.617 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.618 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.618 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:3a:3a:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.619 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Using config drive
Feb 28 10:31:49 compute-0 nova_compute[243452]: 2026-02-28 10:31:49.648 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:50 compute-0 ceph-mon[76304]: pgmap v2024: 305 pgs: 305 active+clean; 171 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 304 KiB/s wr, 15 op/s
Feb 28 10:31:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1973787330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:31:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:31:51 compute-0 nova_compute[243452]: 2026-02-28 10:31:51.686 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Creating config drive at /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config
Feb 28 10:31:51 compute-0 nova_compute[243452]: 2026-02-28 10:31:51.691 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpedrqg1n_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:51 compute-0 nova_compute[243452]: 2026-02-28 10:31:51.827 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpedrqg1n_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:51 compute-0 nova_compute[243452]: 2026-02-28 10:31:51.855 243456 DEBUG nova.storage.rbd_utils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 995ed68f-0189-47b0-b060-6b738468c986_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:31:51 compute-0 nova_compute[243452]: 2026-02-28 10:31:51.861 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config 995ed68f-0189-47b0-b060-6b738468c986_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.001 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.002 243456 DEBUG nova.network.neutron [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.018 243456 DEBUG oslo_concurrency.lockutils [req-7c546b01-ba2b-4920-b518-06bbe708eb5a req-094c9548-8cfb-4f15-947d-f325f999ed7f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.033 243456 DEBUG oslo_concurrency.processutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config 995ed68f-0189-47b0-b060-6b738468c986_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.034 243456 INFO nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deleting local config drive /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986/disk.config because it was imported into RBD.
Feb 28 10:31:52 compute-0 systemd[1]: Starting libvirt secret daemon...
Feb 28 10:31:52 compute-0 systemd[1]: Started libvirt secret daemon.
Feb 28 10:31:52 compute-0 kernel: tap09f54242-33: entered promiscuous mode
Feb 28 10:31:52 compute-0 ovn_controller[146846]: 2026-02-28T10:31:52Z|01238|binding|INFO|Claiming lport 09f54242-3301-4e2f-b606-8423be606192 for this chassis.
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.1284] manager: (tap09f54242-33): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Feb 28 10:31:52 compute-0 ovn_controller[146846]: 2026-02-28T10:31:52Z|01239|binding|INFO|09f54242-3301-4e2f-b606-8423be606192: Claiming fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 ceph-mon[76304]: pgmap v2025: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.147 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:3a:58 10.100.0.13'], port_security=['fa:16:3e:3a:3a:58 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '995ed68f-0189-47b0-b060-6b738468c986', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11c8a335-c267-4faf-a7d1-f407690da05d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96252fd8-ac35-49bb-9585-1943d9426258', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d69cbf-f4a8-43f3-8231-31d5040383f1, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09f54242-3301-4e2f-b606-8423be606192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.148 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09f54242-3301-4e2f-b606-8423be606192 in datapath 11c8a335-c267-4faf-a7d1-f407690da05d bound to our chassis
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.149 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11c8a335-c267-4faf-a7d1-f407690da05d
Feb 28 10:31:52 compute-0 systemd-udevd[352051]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:31:52 compute-0 systemd-machined[209480]: New machine qemu-156-instance-0000007b.
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc81cb8-b142-4a09-a7b3-fc8aa5332af4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.165 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap11c8a335-c1 in ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.167 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap11c8a335-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.167 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7901ce-21b5-48ed-b47e-3118067b872f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_controller[146846]: 2026-02-28T10:31:52Z|01240|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 ovn-installed in OVS
Feb 28 10:31:52 compute-0 ovn_controller[146846]: 2026-02-28T10:31:52Z|01241|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 up in Southbound
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.169 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[15d916a0-69eb-48d0-a7a8-98886010c557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.1710] device (tap09f54242-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.1716] device (tap09f54242-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.183 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc6fd90-1a4a-422d-aa2b-33f592a03862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.195 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cc056e-d7ab-484f-84e6-2ffdacda1cc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.224 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2fc74d-6abc-4219-98f3-9238269416d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.230 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[686a52f1-2606-4b19-a43b-098f842eaae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.2315] manager: (tap11c8a335-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc79437-83c6-4b01-8dde-0a8d2b6055d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.267 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[502a6644-a74b-4c7b-8266-ef85784f1247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.2870] device (tap11c8a335-c0): carrier: link connected
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.290 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[07c159fa-2259-4695-85b9-df9458f4369e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac00714-8176-4485-acff-3a6e84df78e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11c8a335-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0d:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621957, 'reachable_time': 34615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352085, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8667337-ef6d-417e-8fb8-9ee5d230ce1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:d42'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621957, 'tstamp': 621957}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352086, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.328 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.328 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.338 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b844da06-27b1-4e94-9655-005a18edb207]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11c8a335-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0d:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621957, 'reachable_time': 34615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352087, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e532f09e-0d6f-418d-b3b6-26ee001f06b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.419 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc807e9-bb01-43d4-83f0-afe297a8c17c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.421 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11c8a335-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.422 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11c8a335-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:52 compute-0 NetworkManager[49805]: <info>  [1772274712.4251] manager: (tap11c8a335-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Feb 28 10:31:52 compute-0 kernel: tap11c8a335-c0: entered promiscuous mode
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.427 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11c8a335-c0, col_values=(('external_ids', {'iface-id': '6d119866-5b77-4352-91cc-12a26b0fe463'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:31:52 compute-0 ovn_controller[146846]: 2026-02-28T10:31:52Z|01242|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.431 243456 DEBUG nova.compute.manager [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.432 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.432 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.433 243456 DEBUG oslo_concurrency.lockutils [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.434 243456 DEBUG nova.compute.manager [req-74576557-6833-4fb1-bc34-7f1cd2b1f3ab req-b1ff1942-7f01-4659-a256-4b07bdf2bc78 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Processing event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.438 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bb8795-ce66-44af-b61b-854870e2b829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.440 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-11c8a335-c267-4faf-a7d1-f407690da05d
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/11c8a335-c267-4faf-a7d1-f407690da05d.pid.haproxy
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 11c8a335-c267-4faf-a7d1-f407690da05d
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:31:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:52.443 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'env', 'PROCESS_TAG=haproxy-11c8a335-c267-4faf-a7d1-f407690da05d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/11c8a335-c267-4faf-a7d1-f407690da05d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.662 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.665 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.6616683, 995ed68f-0189-47b0-b060-6b738468c986 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.665 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Started (Lifecycle Event)
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.669 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.674 243456 INFO nova.virt.libvirt.driver [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance spawned successfully.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.674 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.700 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:31:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.711 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.717 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.718 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.719 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.720 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.720 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.721 243456 DEBUG nova.virt.libvirt.driver [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.763 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.764 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.663714, 995ed68f-0189-47b0-b060-6b738468c986 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.764 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Paused (Lifecycle Event)
Feb 28 10:31:52 compute-0 podman[352161]: 2026-02-28 10:31:52.794794806 +0000 UTC m=+0.053824096 container create 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.800 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.807 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274712.6686232, 995ed68f-0189-47b0-b060-6b738468c986 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.807 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Resumed (Lifecycle Event)
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.816 243456 INFO nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 7.81 seconds to spawn the instance on the hypervisor.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.817 243456 DEBUG nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:31:52 compute-0 systemd[1]: Started libpod-conmon-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope.
Feb 28 10:31:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bb818805ca7bc9b5cb8f2064ec0e8cd84927ba5cb46b2bac1c983c996bfba5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:31:52 compute-0 podman[352161]: 2026-02-28 10:31:52.860172348 +0000 UTC m=+0.119201668 container init 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 10:31:52 compute-0 podman[352161]: 2026-02-28 10:31:52.864821339 +0000 UTC m=+0.123850619 container start 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 28 10:31:52 compute-0 podman[352161]: 2026-02-28 10:31:52.768702787 +0000 UTC m=+0.027732087 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.866 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.872 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:31:52 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : New worker (352183) forked
Feb 28 10:31:52 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : Loading success.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.898 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.920 243456 INFO nova.compute.manager [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 8.94 seconds to build instance.
Feb 28 10:31:52 compute-0 nova_compute[243452]: 2026-02-28 10:31:52.939 243456 DEBUG oslo_concurrency.lockutils [None req-04893517-e434-4eec-8a3f-4a0062cc28f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:54 compute-0 ceph-mon[76304]: pgmap v2026: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.530 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.553 243456 DEBUG nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.553 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.554 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.554 243456 DEBUG oslo_concurrency.lockutils [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.555 243456 DEBUG nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:31:54 compute-0 nova_compute[243452]: 2026-02-28 10:31:54.555 243456 WARNING nova.compute.manager [req-fa0f1a36-40ed-48c8-9b96-3ae3a77ddfb8 req-d27d943c-dc59-4859-a3e6-c4900a2a7a45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received unexpected event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 for instance with vm_state active and task_state None.
Feb 28 10:31:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 10:31:56 compute-0 ceph-mon[76304]: pgmap v2027: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 10:31:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:31:57 compute-0 nova_compute[243452]: 2026-02-28 10:31:57.269 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:57 compute-0 nova_compute[243452]: 2026-02-28 10:31:57.748 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid 995ed68f-0189-47b0-b060-6b738468c986 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:31:57 compute-0 nova_compute[243452]: 2026-02-28 10:31:57.749 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:57 compute-0 nova_compute[243452]: 2026-02-28 10:31:57.749 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:57 compute-0 nova_compute[243452]: 2026-02-28 10:31:57.772 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:31:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:31:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:31:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:31:58 compute-0 ceph-mon[76304]: pgmap v2028: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:31:58 compute-0 nova_compute[243452]: 2026-02-28 10:31:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:31:58 compute-0 nova_compute[243452]: 2026-02-28 10:31:58.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:31:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 96 op/s
Feb 28 10:31:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:31:59 compute-0 nova_compute[243452]: 2026-02-28 10:31:59.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:31:59 compute-0 nova_compute[243452]: 2026-02-28 10:31:59.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:00 compute-0 ceph-mon[76304]: pgmap v2029: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 96 op/s
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 85 op/s
Feb 28 10:32:02 compute-0 podman[352193]: 2026-02-28 10:32:02.128610705 +0000 UTC m=+0.057194991 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:32:02 compute-0 podman[352192]: 2026-02-28 10:32:02.191518867 +0000 UTC m=+0.126993158 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:32:02 compute-0 ceph-mon[76304]: pgmap v2030: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 85 op/s
Feb 28 10:32:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:32:03 compute-0 NetworkManager[49805]: <info>  [1772274723.5419] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Feb 28 10:32:03 compute-0 NetworkManager[49805]: <info>  [1772274723.5432] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:03 compute-0 ovn_controller[146846]: 2026-02-28T10:32:03Z|01243|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.974 243456 DEBUG nova.compute.manager [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.974 243456 DEBUG nova.compute.manager [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.975 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.975 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:03 compute-0 nova_compute[243452]: 2026-02-28 10:32:03.976 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:32:04 compute-0 ovn_controller[146846]: 2026-02-28T10:32:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 10:32:04 compute-0 ovn_controller[146846]: 2026-02-28T10:32:04Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:3a:58 10.100.0.13
Feb 28 10:32:04 compute-0 ceph-mon[76304]: pgmap v2031: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:32:04 compute-0 nova_compute[243452]: 2026-02-28 10:32:04.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:04 compute-0 nova_compute[243452]: 2026-02-28 10:32:04.533 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 205 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 685 KiB/s wr, 82 op/s
Feb 28 10:32:05 compute-0 nova_compute[243452]: 2026-02-28 10:32:05.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:06 compute-0 ceph-mon[76304]: pgmap v2032: 305 pgs: 305 active+clean; 205 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 685 KiB/s wr, 82 op/s
Feb 28 10:32:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 216 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Feb 28 10:32:07 compute-0 nova_compute[243452]: 2026-02-28 10:32:07.812 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:32:07 compute-0 nova_compute[243452]: 2026-02-28 10:32:07.813 243456 DEBUG nova.network.neutron [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:07 compute-0 nova_compute[243452]: 2026-02-28 10:32:07.848 243456 DEBUG oslo_concurrency.lockutils [req-349d0c1e-bc0b-410c-a2bc-6628ad8dfc2e req-a00f5d85-ca7e-43dc-ab01-da504b1bdcac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:08 compute-0 ceph-mon[76304]: pgmap v2033: 305 pgs: 305 active+clean; 216 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Feb 28 10:32:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 232 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:32:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:09 compute-0 nova_compute[243452]: 2026-02-28 10:32:09.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:09 compute-0 nova_compute[243452]: 2026-02-28 10:32:09.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:09 compute-0 ovn_controller[146846]: 2026-02-28T10:32:09Z|01244|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 10:32:09 compute-0 nova_compute[243452]: 2026-02-28 10:32:09.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:10 compute-0 ceph-mon[76304]: pgmap v2034: 305 pgs: 305 active+clean; 232 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:32:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:11 compute-0 nova_compute[243452]: 2026-02-28 10:32:11.344 243456 INFO nova.compute.manager [None req-4057f0f1-b445-4be5-8fc7-ecc31ddc2357 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Get console output
Feb 28 10:32:11 compute-0 nova_compute[243452]: 2026-02-28 10:32:11.354 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:32:12 compute-0 ceph-mon[76304]: pgmap v2035: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:12 compute-0 ovn_controller[146846]: 2026-02-28T10:32:12Z|01245|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 10:32:12 compute-0 nova_compute[243452]: 2026-02-28 10:32:12.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:14 compute-0 ceph-mon[76304]: pgmap v2036: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:14 compute-0 nova_compute[243452]: 2026-02-28 10:32:14.279 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:14 compute-0 nova_compute[243452]: 2026-02-28 10:32:14.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:16 compute-0 ceph-mon[76304]: pgmap v2037: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:32:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Feb 28 10:32:18 compute-0 ceph-mon[76304]: pgmap v2038: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Feb 28 10:32:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 715 KiB/s wr, 25 op/s
Feb 28 10:32:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:19 compute-0 sudo[352239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:32:19 compute-0 sudo[352239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:19 compute-0 sudo[352239]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:19 compute-0 sudo[352264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:32:19 compute-0 sudo[352264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:19 compute-0 nova_compute[243452]: 2026-02-28 10:32:19.283 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:19 compute-0 nova_compute[243452]: 2026-02-28 10:32:19.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:19 compute-0 sudo[352264]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:32:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:32:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:32:19 compute-0 sudo[352320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:32:19 compute-0 sudo[352320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:19 compute-0 sudo[352320]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:19 compute-0 sudo[352345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:32:19 compute-0 sudo[352345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.230766038 +0000 UTC m=+0.045294174 container create cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:32:20 compute-0 systemd[1]: Started libpod-conmon-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope.
Feb 28 10:32:20 compute-0 ceph-mon[76304]: pgmap v2039: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 715 KiB/s wr, 25 op/s
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:32:20 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:32:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.211563494 +0000 UTC m=+0.026091640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.316469186 +0000 UTC m=+0.130997342 container init cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.324160573 +0000 UTC m=+0.138688699 container start cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.327320113 +0000 UTC m=+0.141848409 container attach cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:32:20 compute-0 eager_knuth[352400]: 167 167
Feb 28 10:32:20 compute-0 systemd[1]: libpod-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope: Deactivated successfully.
Feb 28 10:32:20 compute-0 conmon[352400]: conmon cc482a0848fc82543032 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope/container/memory.events
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.334371173 +0000 UTC m=+0.148899299 container died cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19b874f6a4b04e75703d78a32076434ae677a496ce305d6cc3df42dd2835ed3-merged.mount: Deactivated successfully.
Feb 28 10:32:20 compute-0 podman[352383]: 2026-02-28 10:32:20.382483226 +0000 UTC m=+0.197011352 container remove cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:32:20 compute-0 systemd[1]: libpod-conmon-cc482a0848fc8254303210c31ed6a2528c800b314cc7369c4845133f6bc9424d.scope: Deactivated successfully.
Feb 28 10:32:20 compute-0 podman[352423]: 2026-02-28 10:32:20.523805149 +0000 UTC m=+0.043303768 container create 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:32:20 compute-0 systemd[1]: Started libpod-conmon-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope.
Feb 28 10:32:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:20 compute-0 podman[352423]: 2026-02-28 10:32:20.502609288 +0000 UTC m=+0.022107937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:20 compute-0 podman[352423]: 2026-02-28 10:32:20.616384061 +0000 UTC m=+0.135882680 container init 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:32:20 compute-0 podman[352423]: 2026-02-28 10:32:20.632353434 +0000 UTC m=+0.151852033 container start 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 10:32:20 compute-0 podman[352423]: 2026-02-28 10:32:20.636151041 +0000 UTC m=+0.155649670 container attach 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:32:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 58 KiB/s wr, 3 op/s
Feb 28 10:32:21 compute-0 frosty_sinoussi[352440]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:32:21 compute-0 frosty_sinoussi[352440]: --> All data devices are unavailable
Feb 28 10:32:21 compute-0 systemd[1]: libpod-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope: Deactivated successfully.
Feb 28 10:32:21 compute-0 podman[352423]: 2026-02-28 10:32:21.161701559 +0000 UTC m=+0.681200158 container died 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-24ee393ec6c18baa957788bdd02553783066a9ac16d94f93e1cab0b885f05539-merged.mount: Deactivated successfully.
Feb 28 10:32:21 compute-0 podman[352423]: 2026-02-28 10:32:21.196545796 +0000 UTC m=+0.716044395 container remove 53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sinoussi, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:32:21 compute-0 systemd[1]: libpod-conmon-53a0f99a4b241201b21ffa5fe39d42b563215ce9d69d0ef173d216f9738b2e05.scope: Deactivated successfully.
Feb 28 10:32:21 compute-0 sudo[352345]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:21 compute-0 sudo[352473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:32:21 compute-0 sudo[352473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:21 compute-0 sudo[352473]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:21 compute-0 sudo[352498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:32:21 compute-0 sudo[352498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.650385502 +0000 UTC m=+0.064669483 container create b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:32:21 compute-0 systemd[1]: Started libpod-conmon-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope.
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.626292189 +0000 UTC m=+0.040576160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.747654207 +0000 UTC m=+0.161938198 container init b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.752629528 +0000 UTC m=+0.166913469 container start b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.755847559 +0000 UTC m=+0.170131590 container attach b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:32:21 compute-0 dazzling_dewdney[352551]: 167 167
Feb 28 10:32:21 compute-0 systemd[1]: libpod-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope: Deactivated successfully.
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.760052658 +0000 UTC m=+0.174336619 container died b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8c96c3a3bfcc8fb96d2ece91ab44f8242921e40238d56da4a80e7fbd4a88759-merged.mount: Deactivated successfully.
Feb 28 10:32:21 compute-0 podman[352534]: 2026-02-28 10:32:21.794195606 +0000 UTC m=+0.208479547 container remove b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:32:21 compute-0 systemd[1]: libpod-conmon-b584b565ed05c31d57bf73c460a563182129293e1d107fe872a48957ba99c0d2.scope: Deactivated successfully.
Feb 28 10:32:21 compute-0 podman[352575]: 2026-02-28 10:32:21.966307541 +0000 UTC m=+0.058461207 container create 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:32:22 compute-0 systemd[1]: Started libpod-conmon-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope.
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:21.941328544 +0000 UTC m=+0.033482030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:22.059710247 +0000 UTC m=+0.151863683 container init 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:22.071006577 +0000 UTC m=+0.163159983 container start 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:22.075487534 +0000 UTC m=+0.167640950 container attach 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:32:22 compute-0 ceph-mon[76304]: pgmap v2040: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 58 KiB/s wr, 3 op/s
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]: {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     "0": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "devices": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "/dev/loop3"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             ],
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_name": "ceph_lv0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_size": "21470642176",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "name": "ceph_lv0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "tags": {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_name": "ceph",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.crush_device_class": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.encrypted": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.objectstore": "bluestore",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_id": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.vdo": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.with_tpm": "0"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             },
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "vg_name": "ceph_vg0"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         }
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     ],
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     "1": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "devices": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "/dev/loop4"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             ],
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_name": "ceph_lv1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_size": "21470642176",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "name": "ceph_lv1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "tags": {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_name": "ceph",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.crush_device_class": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.encrypted": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.objectstore": "bluestore",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_id": "1",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.vdo": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.with_tpm": "0"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             },
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "vg_name": "ceph_vg1"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         }
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     ],
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     "2": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "devices": [
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "/dev/loop5"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             ],
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_name": "ceph_lv2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_size": "21470642176",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "name": "ceph_lv2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "tags": {
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.cluster_name": "ceph",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.crush_device_class": "",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.encrypted": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.objectstore": "bluestore",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osd_id": "2",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.vdo": "0",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:                 "ceph.with_tpm": "0"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             },
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "type": "block",
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:             "vg_name": "ceph_vg2"
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:         }
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]:     ]
Feb 28 10:32:22 compute-0 charming_mirzakhani[352592]: }
Feb 28 10:32:22 compute-0 systemd[1]: libpod-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope: Deactivated successfully.
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:22.386759701 +0000 UTC m=+0.478913137 container died 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d1103ad3c9817f8449c863b914a0226fc1506083e257c693c18e932df9878ef-merged.mount: Deactivated successfully.
Feb 28 10:32:22 compute-0 podman[352575]: 2026-02-28 10:32:22.425006215 +0000 UTC m=+0.517159621 container remove 293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:32:22 compute-0 systemd[1]: libpod-conmon-293f87efd43c3d2a911d80301fc80c71e04a0fd6e07421a91ff4c5735704a760.scope: Deactivated successfully.
Feb 28 10:32:22 compute-0 sudo[352498]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:22 compute-0 sudo[352614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:32:22 compute-0 sudo[352614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:22 compute-0 sudo[352614]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:22 compute-0 sudo[352639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:32:22 compute-0 sudo[352639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 10:32:22 compute-0 podman[352677]: 2026-02-28 10:32:22.980699346 +0000 UTC m=+0.064483528 container create 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:32:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:22.996 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:32:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:22.999 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:32:23 compute-0 nova_compute[243452]: 2026-02-28 10:32:22.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:23 compute-0 systemd[1]: Started libpod-conmon-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope.
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:22.954380031 +0000 UTC m=+0.038164293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:23.074840093 +0000 UTC m=+0.158624315 container init 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:23.080949026 +0000 UTC m=+0.164733198 container start 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:23.084760494 +0000 UTC m=+0.168544676 container attach 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:32:23 compute-0 vibrant_goldstine[352693]: 167 167
Feb 28 10:32:23 compute-0 systemd[1]: libpod-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope: Deactivated successfully.
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:23.087636365 +0000 UTC m=+0.171420547 container died 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ce7dfef9ce2f860e88a2d7322ce977072315bd49f4e6ddd92780e09a5575d5-merged.mount: Deactivated successfully.
Feb 28 10:32:23 compute-0 podman[352677]: 2026-02-28 10:32:23.123226513 +0000 UTC m=+0.207010695 container remove 9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:32:23 compute-0 systemd[1]: libpod-conmon-9bacd5e27c66091f2c446099dc58fdc10ffb4c9d098fc013e8845d038e698ddf.scope: Deactivated successfully.
Feb 28 10:32:23 compute-0 podman[352719]: 2026-02-28 10:32:23.306840794 +0000 UTC m=+0.057415207 container create c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:32:23 compute-0 systemd[1]: Started libpod-conmon-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope.
Feb 28 10:32:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:23 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:32:23 compute-0 podman[352719]: 2026-02-28 10:32:23.284511092 +0000 UTC m=+0.035085545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:32:23 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:32:23 compute-0 podman[352719]: 2026-02-28 10:32:23.393378535 +0000 UTC m=+0.143952968 container init c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:32:23 compute-0 podman[352719]: 2026-02-28 10:32:23.401880806 +0000 UTC m=+0.152455209 container start c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:32:23 compute-0 podman[352719]: 2026-02-28 10:32:23.40554396 +0000 UTC m=+0.156118413 container attach c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:32:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.840726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743840795, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 250, "total_data_size": 1551929, "memory_usage": 1575608, "flush_reason": "Manual Compaction"}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743849964, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 925313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42773, "largest_seqno": 43832, "table_properties": {"data_size": 921238, "index_size": 1663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10704, "raw_average_key_size": 20, "raw_value_size": 912522, "raw_average_value_size": 1765, "num_data_blocks": 75, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274640, "oldest_key_time": 1772274640, "file_creation_time": 1772274743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 9294 microseconds, and 2848 cpu microseconds.
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.850022) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 925313 bytes OK
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.850051) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852287) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852304) EVENT_LOG_v1 {"time_micros": 1772274743852299, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1546971, prev total WAL file size 1546971, number of live WAL files 2.
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(903KB)], [98(10057KB)]
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743852929, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11223683, "oldest_snapshot_seqno": -1}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6520 keys, 8494150 bytes, temperature: kUnknown
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743894420, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8494150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8451514, "index_size": 25246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 169277, "raw_average_key_size": 25, "raw_value_size": 8335819, "raw_average_value_size": 1278, "num_data_blocks": 987, "num_entries": 6520, "num_filter_entries": 6520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.894727) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8494150 bytes
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.896117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.9 rd, 204.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(21.3) write-amplify(9.2) OK, records in: 6986, records dropped: 466 output_compression: NoCompression
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.896136) EVENT_LOG_v1 {"time_micros": 1772274743896126, "job": 58, "event": "compaction_finished", "compaction_time_micros": 41581, "compaction_time_cpu_micros": 20217, "output_level": 6, "num_output_files": 1, "total_output_size": 8494150, "num_input_records": 6986, "num_output_records": 6520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743896334, "job": 58, "event": "table_file_deletion", "file_number": 100}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274743897574, "job": 58, "event": "table_file_deletion", "file_number": 98}
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.852756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:23 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:32:23.897848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:32:24 compute-0 lvm[352815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:32:24 compute-0 lvm[352816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:32:24 compute-0 lvm[352816]: VG ceph_vg1 finished
Feb 28 10:32:24 compute-0 lvm[352815]: VG ceph_vg0 finished
Feb 28 10:32:24 compute-0 lvm[352818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:32:24 compute-0 lvm[352818]: VG ceph_vg2 finished
Feb 28 10:32:24 compute-0 nice_driscoll[352736]: {}
Feb 28 10:32:24 compute-0 systemd[1]: libpod-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Deactivated successfully.
Feb 28 10:32:24 compute-0 systemd[1]: libpod-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Consumed 1.264s CPU time.
Feb 28 10:32:24 compute-0 podman[352719]: 2026-02-28 10:32:24.247369746 +0000 UTC m=+0.997944259 container died c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4acac552662e52f2af114c5faf5371d71beaa58771e1740f300b047a591c0108-merged.mount: Deactivated successfully.
Feb 28 10:32:24 compute-0 nova_compute[243452]: 2026-02-28 10:32:24.284 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:24 compute-0 ceph-mon[76304]: pgmap v2041: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 10:32:24 compute-0 podman[352719]: 2026-02-28 10:32:24.309355842 +0000 UTC m=+1.059930245 container remove c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:32:24 compute-0 systemd[1]: libpod-conmon-c2b509275be56de3f0cb89a874173db8090e021249d9eec0571e7bac99e84bc7.scope: Deactivated successfully.
Feb 28 10:32:24 compute-0 sudo[352639]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:32:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:32:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:24 compute-0 sudo[352831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:32:24 compute-0 sudo[352831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:32:24 compute-0 sudo[352831]: pam_unix(sudo:session): session closed for user root
Feb 28 10:32:24 compute-0 nova_compute[243452]: 2026-02-28 10:32:24.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 28 10:32:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:32:26 compute-0 ceph-mon[76304]: pgmap v2042: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb 28 10:32:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 5.0 KiB/s wr, 0 op/s
Feb 28 10:32:28 compute-0 sshd-session[352856]: Received disconnect from 103.217.144.161 port 36358:11: Bye Bye [preauth]
Feb 28 10:32:28 compute-0 sshd-session[352856]: Disconnected from authenticating user root 103.217.144.161 port 36358 [preauth]
Feb 28 10:32:28 compute-0 ceph-mon[76304]: pgmap v2043: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 5.0 KiB/s wr, 0 op/s
Feb 28 10:32:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 10:32:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:32:29
Feb 28 10:32:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:32:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:32:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 10:32:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:32:29 compute-0 nova_compute[243452]: 2026-02-28 10:32:29.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:29 compute-0 nova_compute[243452]: 2026-02-28 10:32:29.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:29 compute-0 nova_compute[243452]: 2026-02-28 10:32:29.547 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.119 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.120 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.145 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.352 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.353 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.366 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.366 243456 INFO nova.compute.claims [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:32:30 compute-0 ceph-mon[76304]: pgmap v2044: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 10:32:30 compute-0 nova_compute[243452]: 2026-02-28 10:32:30.572 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:32:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:32:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:32:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517754547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.146 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.155 243456 DEBUG nova.compute.provider_tree [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.176 243456 DEBUG nova.scheduler.client.report [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.209 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.210 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.272 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.272 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.310 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.330 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.411 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.413 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.413 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating image(s)
Feb 28 10:32:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1517754547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.444 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.477 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.503 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.508 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.587 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.588 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.588 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.589 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.612 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.617 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.735 243456 DEBUG nova.policy [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.850 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:31 compute-0 nova_compute[243452]: 2026-02-28 10:32:31.941 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:32:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:32.001 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.030 243456 DEBUG nova.objects.instance [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.051 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.052 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Ensure instance console log exists: /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.052 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.053 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.053 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:32 compute-0 ceph-mon[76304]: pgmap v2045: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Feb 28 10:32:32 compute-0 nova_compute[243452]: 2026-02-28 10:32:32.631 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Successfully created port: 9beeb630-5801-426b-8ae3-6d7b49d83ebe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:32:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 263 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.4 MiB/s wr, 2 op/s
Feb 28 10:32:33 compute-0 podman[353047]: 2026-02-28 10:32:33.131553909 +0000 UTC m=+0.064703124 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:32:33 compute-0 podman[353046]: 2026-02-28 10:32:33.159761098 +0000 UTC m=+0.090944087 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 28 10:32:33 compute-0 nova_compute[243452]: 2026-02-28 10:32:33.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:33 compute-0 nova_compute[243452]: 2026-02-28 10:32:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.226 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Successfully updated port: 9beeb630-5801-426b-8ae3-6d7b49d83ebe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.255 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.255 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.256 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.402 243456 DEBUG nova.compute.manager [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-changed-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.403 243456 DEBUG nova.compute.manager [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Refreshing instance network info cache due to event network-changed-9beeb630-5801-426b-8ae3-6d7b49d83ebe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.403 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:34 compute-0 ceph-mon[76304]: pgmap v2046: 305 pgs: 305 active+clean; 263 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.4 MiB/s wr, 2 op/s
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:34 compute-0 nova_compute[243452]: 2026-02-28 10:32:34.602 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:32:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 275 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 1.7 MiB/s wr, 4 op/s
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.906 243456 DEBUG nova.network.neutron [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.952 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.953 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance network_info: |[{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.953 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.954 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Refreshing network info cache for port 9beeb630-5801-426b-8ae3-6d7b49d83ebe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.959 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start _get_guest_xml network_info=[{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.966 243456 WARNING nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.972 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.973 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.980 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.980 243456 DEBUG nova.virt.libvirt.host [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.981 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.981 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.982 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.983 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.984 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.984 243456 DEBUG nova.virt.hardware [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:32:35 compute-0 nova_compute[243452]: 2026-02-28 10:32:35.987 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:36 compute-0 ceph-mon[76304]: pgmap v2047: 305 pgs: 305 active+clean; 275 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 1.7 MiB/s wr, 4 op/s
Feb 28 10:32:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:32:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368257348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:36 compute-0 nova_compute[243452]: 2026-02-28 10:32:36.540 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:36 compute-0 nova_compute[243452]: 2026-02-28 10:32:36.579 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:36 compute-0 nova_compute[243452]: 2026-02-28 10:32:36.585 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 279 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:32:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:32:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67345559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.165 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.168 243456 DEBUG nova.virt.libvirt.vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:31Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.169 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.170 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.172 243456 DEBUG nova.objects.instance [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.198 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <uuid>8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</uuid>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <name>instance-0000007c</name>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1011796007</nova:name>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:32:35</nova:creationTime>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <nova:port uuid="9beeb630-5801-426b-8ae3-6d7b49d83ebe">
Feb 28 10:32:37 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <system>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="serial">8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="uuid">8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </system>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <os>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </os>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <features>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </features>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk">
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </source>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config">
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </source>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:32:37 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:1d:37:36"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <target dev="tap9beeb630-58"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/console.log" append="off"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <video>
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </video>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:32:37 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:32:37 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:32:37 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:32:37 compute-0 nova_compute[243452]: </domain>
Feb 28 10:32:37 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.199 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Preparing to wait for external event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.200 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.201 243456 DEBUG nova.virt.libvirt.vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:31Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.201 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.202 243456 DEBUG nova.network.os_vif_util [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.203 243456 DEBUG os_vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.203 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.204 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9beeb630-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9beeb630-58, col_values=(('external_ids', {'iface-id': '9beeb630-5801-426b-8ae3-6d7b49d83ebe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:37:36', 'vm-uuid': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:32:37 compute-0 NetworkManager[49805]: <info>  [1772274757.2129] manager: (tap9beeb630-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.220 243456 INFO os_vif [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58')
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.293 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.294 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.294 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:1d:37:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.295 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Using config drive
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.325 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2368257348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/67345559' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:37 compute-0 nova_compute[243452]: 2026-02-28 10:32:37.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.107 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Creating config drive at /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.113 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0bed9d7d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.265 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0bed9d7d" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.302 243456 DEBUG nova.storage.rbd_utils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.306 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.356 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updated VIF entry in instance network info cache for port 9beeb630-5801-426b-8ae3-6d7b49d83ebe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.358 243456 DEBUG nova.network.neutron [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [{"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.377 243456 DEBUG oslo_concurrency.lockutils [req-4d57d239-23f6-45ae-9722-840d056cea36 req-55999fa8-ead5-483d-9fc7-32ea751aadac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.456 243456 DEBUG oslo_concurrency.processutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.457 243456 INFO nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deleting local config drive /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc/disk.config because it was imported into RBD.
Feb 28 10:32:38 compute-0 ceph-mon[76304]: pgmap v2048: 305 pgs: 305 active+clean; 279 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:32:38 compute-0 kernel: tap9beeb630-58: entered promiscuous mode
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.5423] manager: (tap9beeb630-58): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Feb 28 10:32:38 compute-0 ovn_controller[146846]: 2026-02-28T10:32:38Z|01246|binding|INFO|Claiming lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe for this chassis.
Feb 28 10:32:38 compute-0 ovn_controller[146846]: 2026-02-28T10:32:38Z|01247|binding|INFO|9beeb630-5801-426b-8ae3-6d7b49d83ebe: Claiming fa:16:3e:1d:37:36 10.100.0.24
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.559 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:37:36 10.100.0.24'], port_security=['fa:16:3e:1d:37:36 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5856a5a-70ec-4d73-a6be-8929e117dbd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b4133e-8779-4f70-9e9b-cefbe96b1736, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9beeb630-5801-426b-8ae3-6d7b49d83ebe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.561 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9beeb630-5801-426b-8ae3-6d7b49d83ebe in datapath e78b0fff-240f-4279-a0b9-45aaac19e3aa bound to our chassis
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.564 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e78b0fff-240f-4279-a0b9-45aaac19e3aa
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 systemd-udevd[353225]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:32:38 compute-0 ovn_controller[146846]: 2026-02-28T10:32:38Z|01248|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe ovn-installed in OVS
Feb 28 10:32:38 compute-0 ovn_controller[146846]: 2026-02-28T10:32:38Z|01249|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe up in Southbound
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.575 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9a52c39f-9d4f-4db5-8b23-28a1ffdb7aa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.578 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape78b0fff-21 in ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.580 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape78b0fff-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.580 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b63a6b62-d4ee-41d3-93fe-4b871a9189c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.583 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1348784-c350-4065-a951-173d11ff2077]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 systemd-machined[209480]: New machine qemu-157-instance-0000007c.
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.5931] device (tap9beeb630-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.5944] device (tap9beeb630-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.597 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa8cc93-9b53-42f1-a61b-654e511cb4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79c9b1d8-fb3d-4ab3-a23c-e2caed1d53a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.659 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1066b3f5-dc3c-4919-baed-919225179c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.6668] manager: (tape78b0fff-20): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.665 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9337e031-20dd-4427-ad41-2585628fe46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 systemd-udevd[353231]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.701 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[93e0d9a7-aba1-47d9-a9e0-79fbb75e5567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.706 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc487a5-571c-4006-bccc-4d3bd1a27f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.7299] device (tape78b0fff-20): carrier: link connected
Feb 28 10:32:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.738 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4f1282-448e-49a4-97dc-1747db489017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.758 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[59e6814c-23c6-4237-a81a-009f10f957cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape78b0fff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:9e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626601, 'reachable_time': 42603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353260, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.780 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdd2841-2d0f-4182-9766-e999de5febad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:9e4d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626601, 'tstamp': 626601}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353261, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.803 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7839ea5f-f7ae-4a4b-b879-4720a6b175ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape78b0fff-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:9e:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626601, 'reachable_time': 42603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353262, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.848 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9101ea9a-fe6c-4315-b667-bfdf4783a397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.917 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1fce0d-b1d8-4fb7-bff4-1a76e08aee3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.920 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape78b0fff-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.920 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.921 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape78b0fff-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:38 compute-0 NetworkManager[49805]: <info>  [1772274758.9234] manager: (tape78b0fff-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Feb 28 10:32:38 compute-0 kernel: tape78b0fff-20: entered promiscuous mode
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.923 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.930 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape78b0fff-20, col_values=(('external_ids', {'iface-id': 'f3d174ad-0a29-4dd5-bb79-a661870a23b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:38 compute-0 ovn_controller[146846]: 2026-02-28T10:32:38Z|01250|binding|INFO|Releasing lport f3d174ad-0a29-4dd5-bb79-a661870a23b4 from this chassis (sb_readonly=0)
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.935 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:32:38 compute-0 nova_compute[243452]: 2026-02-28 10:32:38.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c301722-15a5-40ef-9ca2-89dba6ec50c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.938 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-e78b0fff-240f-4279-a0b9-45aaac19e3aa
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/e78b0fff-240f-4279-a0b9-45aaac19e3aa.pid.haproxy
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID e78b0fff-240f-4279-a0b9-45aaac19e3aa
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:32:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:38.938 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'env', 'PROCESS_TAG=haproxy-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e78b0fff-240f-4279-a0b9-45aaac19e3aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.296 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:f5:cc 10.100.0.2 2001:db8::f816:3eff:fe51:f5cc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe51:f5cc/64', 'neutron:device_id': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=45460f81-310d-424c-88ee-85e2ae1b7444) old=Port_Binding(mac=['fa:16:3e:51:f5:cc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:32:39 compute-0 podman[353294]: 2026-02-28 10:32:39.309730177 +0000 UTC m=+0.057696795 container create 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:32:39 compute-0 systemd[1]: Started libpod-conmon-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope.
Feb 28 10:32:39 compute-0 podman[353294]: 2026-02-28 10:32:39.276428294 +0000 UTC m=+0.024394912 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:32:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800c74c84193b4781da546456c3a23d2158ddc640ab518b56dc1fd583515e1d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:39 compute-0 podman[353294]: 2026-02-28 10:32:39.411820969 +0000 UTC m=+0.159787627 container init 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:32:39 compute-0 podman[353294]: 2026-02-28 10:32:39.415947146 +0000 UTC m=+0.163913764 container start 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:32:39 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : New worker (353357) forked
Feb 28 10:32:39 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : Loading success.
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.462 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274759.4614143, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Started (Lifecycle Event)
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.486 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 45460f81-310d-424c-88ee-85e2ae1b7444 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 updated
Feb 28 10:32:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.494 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c662211d-11bd-4aa5-95d2-794ccdac29d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.494 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274759.4615698, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:39.495 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[789c47b3-9003-445d-8e95-8ae1f6526676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.495 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Paused (Lifecycle Event)
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.513 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.516 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:32:39 compute-0 nova_compute[243452]: 2026-02-28 10:32:39.533 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:32:40 compute-0 ceph-mon[76304]: pgmap v2049: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:32:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011217277279944673 of space, bias 1.0, pg target 0.3365183183983402 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493819121062511 of space, bias 1.0, pg target 0.7481457363187534 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.36012762782241e-07 of space, bias 4.0, pg target 0.0008832153153386892 quantized to 16 (current 16)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:32:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:32:42 compute-0 nova_compute[243452]: 2026-02-28 10:32:42.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:42 compute-0 ceph-mon[76304]: pgmap v2050: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 28 10:32:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.350 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.816 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.817 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.817 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:32:43 compute-0 nova_compute[243452]: 2026-02-28 10:32:43.818 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:32:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.199 243456 DEBUG nova.compute.manager [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.200 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.201 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.201 243456 DEBUG oslo_concurrency.lockutils [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.202 243456 DEBUG nova.compute.manager [req-6f2ce2cd-15a2-4acb-98b9-db168a137176 req-ad14abe1-c8c6-430f-a974-10a93d28b0d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Processing event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.203 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.207 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274764.2071095, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.207 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Resumed (Lifecycle Event)
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.213 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.223 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.224 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.235 243456 INFO nova.virt.libvirt.driver [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance spawned successfully.
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.236 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.241 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.246 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.266 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.271 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.272 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.273 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.274 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.274 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.275 243456 DEBUG nova.virt.libvirt.driver [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.353 243456 INFO nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 12.94 seconds to spawn the instance on the hypervisor.
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.354 243456 DEBUG nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.369 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.370 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.379 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.380 243456 INFO nova.compute.claims [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.467 243456 INFO nova.compute.manager [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 14.15 seconds to build instance.
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.488 243456 DEBUG oslo_concurrency.lockutils [None req-bcb7efd8-ec88-49f1-acf5-a03c8504fa03 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:44 compute-0 ceph-mon[76304]: pgmap v2051: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 28 10:32:44 compute-0 nova_compute[243452]: 2026-02-28 10:32:44.552 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 384 KiB/s wr, 35 op/s
Feb 28 10:32:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:32:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892046915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.180 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.189 243456 DEBUG nova.compute.provider_tree [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.215 243456 DEBUG nova.scheduler.client.report [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.253 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.254 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.336 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.337 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.356 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.378 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.466 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.468 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.468 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating image(s)
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.493 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1892046915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.527 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:32:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:32:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:32:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.551 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.555 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.625 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.626 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.627 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.627 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.648 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.651 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.841 243456 DEBUG nova.policy [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.886 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:45 compute-0 nova_compute[243452]: 2026-02-28 10:32:45.950 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.038 243456 DEBUG nova.objects.instance [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.115 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.115 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Ensure instance console log exists: /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.116 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.309 243456 DEBUG nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG oslo_concurrency.lockutils [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 DEBUG nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.310 243456 WARNING nova.compute.manager [req-7b47143a-6a44-4dfa-ad6f-5b94e7c147ee req-f229d992-488e-4add-a021-5de46ff04268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received unexpected event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with vm_state active and task_state None.
Feb 28 10:32:46 compute-0 ceph-mon[76304]: pgmap v2052: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 384 KiB/s wr, 35 op/s
Feb 28 10:32:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:32:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1715526932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.540 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.564 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.564 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.565 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.590 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.590 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.591 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.591 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:32:46 compute-0 nova_compute[243452]: 2026-02-28 10:32:46.592 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 42 KiB/s wr, 37 op/s
Feb 28 10:32:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:32:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020885134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.202 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.266 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Successfully created port: b811084c-ad80-4e64-904a-d56bd59c9766 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.359 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.359 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.364 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.364 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:32:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2020885134' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.545 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.92099027894437GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.547 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.778 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 995ed68f-0189-47b0-b060-6b738468c986 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.778 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.781 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.781 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.782 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:32:47 compute-0 nova_compute[243452]: 2026-02-28 10:32:47.855 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:32:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695953747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.390 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.397 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.441 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.441 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:48 compute-0 ceph-mon[76304]: pgmap v2053: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 42 KiB/s wr, 37 op/s
Feb 28 10:32:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1695953747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.662 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Successfully updated port: b811084c-ad80-4e64-904a-d56bd59c9766 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.687 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.688 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:32:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 878 KiB/s wr, 76 op/s
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.753 243456 DEBUG nova.compute.manager [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.754 243456 DEBUG nova.compute.manager [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.754 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:48 compute-0 nova_compute[243452]: 2026-02-28 10:32:48.899 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:32:49 compute-0 nova_compute[243452]: 2026-02-28 10:32:49.298 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:50 compute-0 ceph-mon[76304]: pgmap v2054: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 878 KiB/s wr, 76 op/s
Feb 28 10:32:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.875 243456 DEBUG nova.network.neutron [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.893 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance network_info: |[{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.894 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.898 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start _get_guest_xml network_info=[{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.903 243456 WARNING nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.908 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.910 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.913 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.914 243456 DEBUG nova.virt.libvirt.host [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.915 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.916 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.917 243456 DEBUG nova.virt.hardware [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:32:50 compute-0 nova_compute[243452]: 2026-02-28 10:32:50.921 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:32:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031815688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:51 compute-0 nova_compute[243452]: 2026-02-28 10:32:51.520 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:51 compute-0 nova_compute[243452]: 2026-02-28 10:32:51.541 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:51 compute-0 nova_compute[243452]: 2026-02-28 10:32:51.545 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3031815688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:32:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712000785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.239 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.240 243456 DEBUG nova.virt.libvirt.vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.241 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.242 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.243 243456 DEBUG nova.objects.instance [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.265 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <uuid>b07b5e80-4820-4ee8-9750-3ee5ddc53519</uuid>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <name>instance-0000007d</name>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-870516469</nova:name>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:32:50</nova:creationTime>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <nova:port uuid="b811084c-ad80-4e64-904a-d56bd59c9766">
Feb 28 10:32:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec3:c4e4" ipVersion="6"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <system>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="serial">b07b5e80-4820-4ee8-9750-3ee5ddc53519</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="uuid">b07b5e80-4820-4ee8-9750-3ee5ddc53519</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </system>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <os>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </os>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <features>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </features>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk">
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config">
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:32:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c3:c4:e4"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <target dev="tapb811084c-ad"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/console.log" append="off"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <video>
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </video>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:32:52 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:32:52 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:32:52 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:32:52 compute-0 nova_compute[243452]: </domain>
Feb 28 10:32:52 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.267 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Preparing to wait for external event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.267 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.268 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.268 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.269 243456 DEBUG nova.virt.libvirt.vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:32:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.270 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.270 243456 DEBUG nova.network.os_vif_util [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.271 243456 DEBUG os_vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.271 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.272 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.276 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb811084c-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.277 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb811084c-ad, col_values=(('external_ids', {'iface-id': 'b811084c-ad80-4e64-904a-d56bd59c9766', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:c4:e4', 'vm-uuid': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.278 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:52 compute-0 NetworkManager[49805]: <info>  [1772274772.2800] manager: (tapb811084c-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.288 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.289 243456 INFO os_vif [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad')
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.333 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.336 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.337 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:c3:c4:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.337 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Using config drive
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.360 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:52 compute-0 ceph-mon[76304]: pgmap v2055: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:32:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1712000785' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:32:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.924 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Creating config drive at /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config
Feb 28 10:32:52 compute-0 nova_compute[243452]: 2026-02-28 10:32:52.930 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphgn30jv1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.060 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmphgn30jv1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.096 243456 DEBUG nova.storage.rbd_utils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.101 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.241 243456 DEBUG oslo_concurrency.processutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config b07b5e80-4820-4ee8-9750-3ee5ddc53519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.242 243456 INFO nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deleting local config drive /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519/disk.config because it was imported into RBD.
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.2738] manager: (tapb811084c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Feb 28 10:32:53 compute-0 kernel: tapb811084c-ad: entered promiscuous mode
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 ovn_controller[146846]: 2026-02-28T10:32:53Z|01251|binding|INFO|Claiming lport b811084c-ad80-4e64-904a-d56bd59c9766 for this chassis.
Feb 28 10:32:53 compute-0 ovn_controller[146846]: 2026-02-28T10:32:53Z|01252|binding|INFO|b811084c-ad80-4e64-904a-d56bd59c9766: Claiming fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.289 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], port_security=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fec3:c4e4/64', 'neutron:device_id': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b811084c-ad80-4e64-904a-d56bd59c9766) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.290 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.291 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b811084c-ad80-4e64-904a-d56bd59c9766 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 bound to our chassis
Feb 28 10:32:53 compute-0 ovn_controller[146846]: 2026-02-28T10:32:53Z|01253|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 ovn-installed in OVS
Feb 28 10:32:53 compute-0 ovn_controller[146846]: 2026-02-28T10:32:53Z|01254|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 up in Southbound
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.292 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 systemd-machined[209480]: New machine qemu-158-instance-0000007d.
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.303 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cac05552-43e0-45e2-92b0-7a9eb3a5f277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.304 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc662211d-11 in ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.306 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc662211d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dd037136-a661-4b51-8eba-9d6db5651af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.307 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b81b10-a6a8-4f86-947e-0a1f99d31d2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.320 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d76572-f00b-4119-865f-7bc14316e25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 systemd-udevd[353736]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.332 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8f205f-a410-4af0-bb42-dc7393ee1870]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.3422] device (tapb811084c-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.3432] device (tapb811084c-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.356 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa5db51-6740-46d2-a1d2-02352fcbc75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.3741] manager: (tapc662211d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.373 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0e89b3d9-f42b-4e8c-ab14-592d38bdb959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.412 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[97905aa4-502c-40eb-8dd6-31cbc8c66bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.415 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67f2f968-5b76-41f9-a842-9b4654618107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.4389] device (tapc662211d-10): carrier: link connected
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.441 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[00cfd9a3-0365-4741-b102-48cc87ec900b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.453 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97030e0e-bb2e-4df6-8433-ac1900a5ee23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353766, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.462 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0b9d25-61f0-40fd-8225-bda56fbf0300]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:f5cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628072, 'tstamp': 628072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353767, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.471 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef93853-d497-4d49-9fee-c5a17269ba18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353768, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.498 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf64aaf7-1ca2-402d-9491-99acbc75a4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.552 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[39d0158d-a1ba-43ed-9ce1-cd731c156d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.553 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.553 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.554 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.555 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 NetworkManager[49805]: <info>  [1772274773.5561] manager: (tapc662211d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Feb 28 10:32:53 compute-0 kernel: tapc662211d-10: entered promiscuous mode
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.559 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:32:53 compute-0 ovn_controller[146846]: 2026-02-28T10:32:53Z|01255|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.560 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 nova_compute[243452]: 2026-02-28 10:32:53.569 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.570 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.570 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7723459f-71a1-4306-afb5-080e3e5f2439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.571 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c662211d-11bd-4aa5-95d2-794ccdac29d7.pid.haproxy
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:32:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:53.572 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'env', 'PROCESS_TAG=haproxy-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c662211d-11bd-4aa5-95d2-794ccdac29d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:32:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:53 compute-0 podman[353798]: 2026-02-28 10:32:53.883677661 +0000 UTC m=+0.041236089 container create b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:32:53 compute-0 systemd[1]: Started libpod-conmon-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope.
Feb 28 10:32:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccde48f8a36bab7126971be92fbdb0c7788e67982ad3a6ca48727f62f636de5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:32:53 compute-0 podman[353798]: 2026-02-28 10:32:53.947992403 +0000 UTC m=+0.105550861 container init b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:32:53 compute-0 podman[353798]: 2026-02-28 10:32:53.952521891 +0000 UTC m=+0.110080319 container start b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:32:53 compute-0 podman[353798]: 2026-02-28 10:32:53.862877312 +0000 UTC m=+0.020435760 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:32:53 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : New worker (353820) forked
Feb 28 10:32:53 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : Loading success.
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.015 243456 DEBUG nova.compute.manager [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.015 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG oslo_concurrency.lockutils [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.016 243456 DEBUG nova.compute.manager [req-1addc04e-3f63-4ac6-8354-9c297147df77 req-a6f9aa95-a788-4d80-8736-300cf1c8f9d0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Processing event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.074 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.074 243456 DEBUG nova.network.neutron [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.090 243456 DEBUG oslo_concurrency.lockutils [req-596d5a4b-4122-4ba3-86f0-af2ba0b8f051 req-2bcd2191-274c-43da-9382-a5daedb0b1c6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.300 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:54 compute-0 ceph-mon[76304]: pgmap v2056: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Feb 28 10:32:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.877 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.8770206, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.878 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Started (Lifecycle Event)
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.880 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.883 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.886 243456 INFO nova.virt.libvirt.driver [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance spawned successfully.
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.886 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.901 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.905 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.912 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.912 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.913 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.913 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.914 243456 DEBUG nova.virt.libvirt.driver [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.941 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.878188, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Paused (Lifecycle Event)
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.972 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.975 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274774.8824677, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.975 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Resumed (Lifecycle Event)
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.980 243456 INFO nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 9.51 seconds to spawn the instance on the hypervisor.
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.981 243456 DEBUG nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.990 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:32:54 compute-0 nova_compute[243452]: 2026-02-28 10:32:54.992 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:32:55 compute-0 nova_compute[243452]: 2026-02-28 10:32:55.010 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:32:55 compute-0 nova_compute[243452]: 2026-02-28 10:32:55.044 243456 INFO nova.compute.manager [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 10.72 seconds to build instance.
Feb 28 10:32:55 compute-0 nova_compute[243452]: 2026-02-28 10:32:55.064 243456 DEBUG oslo_concurrency.lockutils [None req-43f1fec8-814f-47b6-8cfa-aef58a1f98d0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:56 compute-0 ovn_controller[146846]: 2026-02-28T10:32:56Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:37:36 10.100.0.24
Feb 28 10:32:56 compute-0 ovn_controller[146846]: 2026-02-28T10:32:56Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:37:36 10.100.0.24
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.228 243456 DEBUG nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.228 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG oslo_concurrency.lockutils [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.229 243456 DEBUG nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:32:56 compute-0 nova_compute[243452]: 2026-02-28 10:32:56.230 243456 WARNING nova.compute.manager [req-842203e5-5dd1-44e9-b864-9a21f8afa7e7 req-762cacef-2dc7-45ac-8cb3-971b58c57705 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received unexpected event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with vm_state active and task_state None.
Feb 28 10:32:56 compute-0 ceph-mon[76304]: pgmap v2057: 305 pgs: 305 active+clean; 328 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 98 op/s
Feb 28 10:32:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 337 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 113 op/s
Feb 28 10:32:57 compute-0 nova_compute[243452]: 2026-02-28 10:32:57.280 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:32:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.872 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:32:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:32:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:32:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:32:58 compute-0 ceph-mon[76304]: pgmap v2058: 305 pgs: 305 active+clean; 337 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 113 op/s
Feb 28 10:32:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Feb 28 10:32:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:32:58 compute-0 nova_compute[243452]: 2026-02-28 10:32:58.909 243456 DEBUG nova.compute.manager [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:32:58 compute-0 nova_compute[243452]: 2026-02-28 10:32:58.909 243456 DEBUG nova.compute.manager [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:32:58 compute-0 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:32:58 compute-0 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:32:58 compute-0 nova_compute[243452]: 2026-02-28 10:32:58.910 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:32:59 compute-0 nova_compute[243452]: 2026-02-28 10:32:59.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:00 compute-0 nova_compute[243452]: 2026-02-28 10:33:00.166 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:00 compute-0 nova_compute[243452]: 2026-02-28 10:33:00.169 243456 DEBUG nova.network.neutron [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:00 compute-0 nova_compute[243452]: 2026-02-28 10:33:00.209 243456 DEBUG oslo_concurrency.lockutils [req-79c8a50c-604f-4f04-aa56-e93bb23d4e44 req-b3543c47-5486-46e6-80f3-331cb4fab2bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:00 compute-0 ceph-mon[76304]: pgmap v2059: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Feb 28 10:33:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 164 op/s
Feb 28 10:33:02 compute-0 nova_compute[243452]: 2026-02-28 10:33:02.282 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:02 compute-0 ceph-mon[76304]: pgmap v2060: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 164 op/s
Feb 28 10:33:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:33:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:04 compute-0 podman[353872]: 2026-02-28 10:33:04.127626912 +0000 UTC m=+0.060183836 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:33:04 compute-0 podman[353871]: 2026-02-28 10:33:04.155301176 +0000 UTC m=+0.085941386 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 10:33:04 compute-0 nova_compute[243452]: 2026-02-28 10:33:04.305 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:04 compute-0 ceph-mon[76304]: pgmap v2061: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:33:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.434 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.436 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.437 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.439 243456 INFO nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Terminating instance
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.441 243456 DEBUG nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:33:05 compute-0 kernel: tap9beeb630-58 (unregistering): left promiscuous mode
Feb 28 10:33:05 compute-0 NetworkManager[49805]: <info>  [1772274785.4891] device (tap9beeb630-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 ovn_controller[146846]: 2026-02-28T10:33:05Z|01256|binding|INFO|Releasing lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe from this chassis (sb_readonly=0)
Feb 28 10:33:05 compute-0 ovn_controller[146846]: 2026-02-28T10:33:05Z|01257|binding|INFO|Setting lport 9beeb630-5801-426b-8ae3-6d7b49d83ebe down in Southbound
Feb 28 10:33:05 compute-0 ovn_controller[146846]: 2026-02-28T10:33:05Z|01258|binding|INFO|Removing iface tap9beeb630-58 ovn-installed in OVS
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.503 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:37:36 10.100.0.24'], port_security=['fa:16:3e:1d:37:36 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5856a5a-70ec-4d73-a6be-8929e117dbd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7b4133e-8779-4f70-9e9b-cefbe96b1736, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=9beeb630-5801-426b-8ae3-6d7b49d83ebe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.506 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 9beeb630-5801-426b-8ae3-6d7b49d83ebe in datapath e78b0fff-240f-4279-a0b9-45aaac19e3aa unbound from our chassis
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.508 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e78b0fff-240f-4279-a0b9-45aaac19e3aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.509 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27b4b828-2e0c-4188-84f0-2f4a89d75df6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.509 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa namespace which is not needed anymore
Feb 28 10:33:05 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Feb 28 10:33:05 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 12.608s CPU time.
Feb 28 10:33:05 compute-0 systemd-machined[209480]: Machine qemu-157-instance-0000007c terminated.
Feb 28 10:33:05 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : haproxy version is 2.8.14-c23fe91
Feb 28 10:33:05 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [NOTICE]   (353354) : path to executable is /usr/sbin/haproxy
Feb 28 10:33:05 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [WARNING]  (353354) : Exiting Master process...
Feb 28 10:33:05 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [ALERT]    (353354) : Current worker (353357) exited with code 143 (Terminated)
Feb 28 10:33:05 compute-0 neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa[353349]: [WARNING]  (353354) : All workers exited. Exiting... (0)
Feb 28 10:33:05 compute-0 systemd[1]: libpod-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope: Deactivated successfully.
Feb 28 10:33:05 compute-0 podman[353942]: 2026-02-28 10:33:05.650265874 +0000 UTC m=+0.045831449 container died 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca-userdata-shm.mount: Deactivated successfully.
Feb 28 10:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-800c74c84193b4781da546456c3a23d2158ddc640ab518b56dc1fd583515e1d0-merged.mount: Deactivated successfully.
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.692 243456 INFO nova.virt.libvirt.driver [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Instance destroyed successfully.
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.693 243456 DEBUG nova.objects.instance [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:05 compute-0 podman[353942]: 2026-02-28 10:33:05.697264825 +0000 UTC m=+0.092830430 container cleanup 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:33:05 compute-0 systemd[1]: libpod-conmon-1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca.scope: Deactivated successfully.
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.713 243456 DEBUG nova.virt.libvirt.vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1011796007',display_name='tempest-TestNetworkBasicOps-server-1011796007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1011796007',id=124,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGaI6GmFiY2M2XWv8n7VZfco5jNR9UsWFQgG0FSFHFzEznnh5xFgYcRiJcUqa2jmfAlISnzEuMLhfEtiHn6OYhFIRroUvBHFYJa82ZAB9Pz8Xnj494OEdbr2ujYWW7kOeQ==',key_name='tempest-TestNetworkBasicOps-243906406',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:32:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-yb1lwevu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:32:44Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.714 243456 DEBUG nova.network.os_vif_util [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "address": "fa:16:3e:1d:37:36", "network": {"id": "e78b0fff-240f-4279-a0b9-45aaac19e3aa", "bridge": "br-int", "label": "tempest-network-smoke--702672644", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9beeb630-58", "ovs_interfaceid": "9beeb630-5801-426b-8ae3-6d7b49d83ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.715 243456 DEBUG nova.network.os_vif_util [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.715 243456 DEBUG os_vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.718 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9beeb630-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.725 243456 INFO os_vif [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:37:36,bridge_name='br-int',has_traffic_filtering=True,id=9beeb630-5801-426b-8ae3-6d7b49d83ebe,network=Network(e78b0fff-240f-4279-a0b9-45aaac19e3aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9beeb630-58')
Feb 28 10:33:05 compute-0 podman[353979]: 2026-02-28 10:33:05.755005171 +0000 UTC m=+0.039249383 container remove 1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.760 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b17b507-6a7d-4c8a-ad2c-2af3491d17e1]: (4, ('Sat Feb 28 10:33:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa (1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca)\n1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca\nSat Feb 28 10:33:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa (1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca)\n1c89a52fe58d817291eaf30bd3d2b02949486780470b8c30872910784a9bc4ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.762 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[070ff5be-552e-4781-8ca8-193b33311234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.764 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape78b0fff-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 kernel: tape78b0fff-20: left promiscuous mode
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.782 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb94d9b4-bf2e-48d3-982b-c7d66ae39962]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[78809cb3-6cd8-4123-8f53-f90c6199f4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.806 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e94af009-483c-483f-9ba5-d205d1d9ed55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a76fc1c9-4fd8-4292-8e52-421ab5759115]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626593, 'reachable_time': 44785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354010, 'error': None, 'target': 'ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 systemd[1]: run-netns-ovnmeta\x2de78b0fff\x2d240f\x2d4279\x2da0b9\x2d45aaac19e3aa.mount: Deactivated successfully.
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.831 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e78b0fff-240f-4279-a0b9-45aaac19e3aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:33:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:05.832 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[6b058294-000f-4f2b-871e-e69f472cd141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.954 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.954 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.955 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.955 243456 DEBUG oslo_concurrency.lockutils [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.956 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:05 compute-0 nova_compute[243452]: 2026-02-28 10:33:05.956 243456 DEBUG nova.compute.manager [req-b19f5ffa-14a4-447c-8cfd-1d26535d149b req-dcbf59bf-a0dd-4d76-91fe-6a85558aafe3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-unplugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.097 243456 INFO nova.virt.libvirt.driver [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deleting instance files /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_del
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.098 243456 INFO nova.virt.libvirt.driver [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deletion of /var/lib/nova/instances/8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc_del complete
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.152 243456 INFO nova.compute.manager [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.153 243456 DEBUG oslo.service.loopingcall [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.154 243456 DEBUG nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:33:06 compute-0 nova_compute[243452]: 2026-02-28 10:33:06.154 243456 DEBUG nova.network.neutron [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:33:06 compute-0 ceph-mon[76304]: pgmap v2062: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 28 10:33:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Feb 28 10:33:07 compute-0 nova_compute[243452]: 2026-02-28 10:33:07.039 243456 DEBUG nova.network.neutron [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:07 compute-0 nova_compute[243452]: 2026-02-28 10:33:07.061 243456 INFO nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Took 0.91 seconds to deallocate network for instance.
Feb 28 10:33:07 compute-0 nova_compute[243452]: 2026-02-28 10:33:07.185 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:07 compute-0 nova_compute[243452]: 2026-02-28 10:33:07.185 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:07 compute-0 nova_compute[243452]: 2026-02-28 10:33:07.604 243456 DEBUG oslo_concurrency.processutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:08 compute-0 ovn_controller[146846]: 2026-02-28T10:33:08Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:c4:e4 10.100.0.12
Feb 28 10:33:08 compute-0 ovn_controller[146846]: 2026-02-28T10:33:08Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:c4:e4 10.100.0.12
Feb 28 10:33:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963900522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.131 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.132 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.132 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 DEBUG oslo_concurrency.lockutils [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] No waiting events found dispatching network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.133 243456 WARNING nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received unexpected event network-vif-plugged-9beeb630-5801-426b-8ae3-6d7b49d83ebe for instance with vm_state deleted and task_state None.
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.134 243456 DEBUG nova.compute.manager [req-54b3f7f3-2220-45af-a160-f13310926632 req-86ba2ad7-e806-4e2a-a949-4f49fc6a8280 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Received event network-vif-deleted-9beeb630-5801-426b-8ae3-6d7b49d83ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.140 243456 DEBUG oslo_concurrency.processutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.146 243456 DEBUG nova.compute.provider_tree [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.236 243456 DEBUG nova.scheduler.client.report [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.464 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:08 compute-0 nova_compute[243452]: 2026-02-28 10:33:08.603 243456 INFO nova.scheduler.client.report [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc
Feb 28 10:33:08 compute-0 ceph-mon[76304]: pgmap v2063: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Feb 28 10:33:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1963900522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Feb 28 10:33:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:09 compute-0 nova_compute[243452]: 2026-02-28 10:33:09.038 243456 DEBUG oslo_concurrency.lockutils [None req-091472d5-73a2-426c-970b-38cac02d5038 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:09 compute-0 nova_compute[243452]: 2026-02-28 10:33:09.308 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:09 compute-0 ceph-mon[76304]: pgmap v2064: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Feb 28 10:33:10 compute-0 nova_compute[243452]: 2026-02-28 10:33:10.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:33:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9643 writes, 44K keys, 9643 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9643 writes, 9643 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1491 writes, 7418 keys, 1491 commit groups, 1.0 writes per commit group, ingest: 9.32 MB, 0.02 MB/s
                                           Interval WAL: 1491 writes, 1491 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     71.4      0.73              0.13        29    0.025       0      0       0.0       0.0
                                             L6      1/0    8.10 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    157.3    132.6      1.77              0.62        28    0.063    159K    15K       0.0       0.0
                                            Sum      1/0    8.10 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    111.5    114.7      2.50              0.75        57    0.044    159K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    171.3    169.0      0.53              0.25        16    0.033     56K   4093       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    157.3    132.6      1.77              0.62        28    0.063    159K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     71.8      0.72              0.13        28    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.28 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 2.5 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 30.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000338 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1918,29.00 MB,9.53935%) FilterBlock(58,448.61 KB,0.14411%) IndexBlock(58,782.92 KB,0.251504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:33:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 145 op/s
Feb 28 10:33:11 compute-0 ceph-mon[76304]: pgmap v2065: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 145 op/s
Feb 28 10:33:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:33:12 compute-0 ovn_controller[146846]: 2026-02-28T10:33:12Z|01259|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 10:33:12 compute-0 ovn_controller[146846]: 2026-02-28T10:33:12Z|01260|binding|INFO|Releasing lport 6d119866-5b77-4352-91cc-12a26b0fe463 from this chassis (sb_readonly=0)
Feb 28 10:33:13 compute-0 nova_compute[243452]: 2026-02-28 10:33:13.009 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:13 compute-0 ceph-mon[76304]: pgmap v2066: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:33:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.223 243456 DEBUG nova.compute.manager [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-changed-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.223 243456 DEBUG nova.compute.manager [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing instance network info cache due to event network-changed-09f54242-3301-4e2f-b606-8423be606192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.224 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Refreshing network info cache for port 09f54242-3301-4e2f-b606-8423be606192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.300 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.301 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.302 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.304 243456 INFO nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Terminating instance
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.305 243456 DEBUG nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 kernel: tap09f54242-33 (unregistering): left promiscuous mode
Feb 28 10:33:14 compute-0 NetworkManager[49805]: <info>  [1772274794.3584] device (tap09f54242-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:33:14 compute-0 ovn_controller[146846]: 2026-02-28T10:33:14Z|01261|binding|INFO|Releasing lport 09f54242-3301-4e2f-b606-8423be606192 from this chassis (sb_readonly=0)
Feb 28 10:33:14 compute-0 ovn_controller[146846]: 2026-02-28T10:33:14Z|01262|binding|INFO|Setting lport 09f54242-3301-4e2f-b606-8423be606192 down in Southbound
Feb 28 10:33:14 compute-0 ovn_controller[146846]: 2026-02-28T10:33:14Z|01263|binding|INFO|Removing iface tap09f54242-33 ovn-installed in OVS
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.369 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.400 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:3a:58 10.100.0.13'], port_security=['fa:16:3e:3a:3a:58 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '995ed68f-0189-47b0-b060-6b738468c986', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11c8a335-c267-4faf-a7d1-f407690da05d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96252fd8-ac35-49bb-9585-1943d9426258', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d69cbf-f4a8-43f3-8231-31d5040383f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=09f54242-3301-4e2f-b606-8423be606192) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.402 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 09f54242-3301-4e2f-b606-8423be606192 in datapath 11c8a335-c267-4faf-a7d1-f407690da05d unbound from our chassis
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.404 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11c8a335-c267-4faf-a7d1-f407690da05d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a6e82-65b2-43f5-aee1-69b303198961]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.406 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d namespace which is not needed anymore
Feb 28 10:33:14 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Feb 28 10:33:14 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 15.896s CPU time.
Feb 28 10:33:14 compute-0 systemd-machined[209480]: Machine qemu-156-instance-0000007b terminated.
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.535 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.542 243456 INFO nova.virt.libvirt.driver [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Instance destroyed successfully.
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.543 243456 DEBUG nova.objects.instance [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 995ed68f-0189-47b0-b060-6b738468c986 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:14 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : haproxy version is 2.8.14-c23fe91
Feb 28 10:33:14 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [NOTICE]   (352181) : path to executable is /usr/sbin/haproxy
Feb 28 10:33:14 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [WARNING]  (352181) : Exiting Master process...
Feb 28 10:33:14 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [ALERT]    (352181) : Current worker (352183) exited with code 143 (Terminated)
Feb 28 10:33:14 compute-0 neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d[352177]: [WARNING]  (352181) : All workers exited. Exiting... (0)
Feb 28 10:33:14 compute-0 systemd[1]: libpod-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope: Deactivated successfully.
Feb 28 10:33:14 compute-0 podman[354057]: 2026-02-28 10:33:14.57868071 +0000 UTC m=+0.059911968 container died 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.579 243456 DEBUG nova.virt.libvirt.vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1357951307',display_name='tempest-TestNetworkBasicOps-server-1357951307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1357951307',id=123,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZHc+J8n+8VIElpku7oO13/ribfCNYlkJQgBtlmLu1lYsVW1sBn5Xu9dPl6htrZdrELeWC7ndyh8q80ljjBh3aAz6MIHX6lH85ayvL5wAI984ueXttgABed6q+nVWvTQw==',key_name='tempest-TestNetworkBasicOps-951014656',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:31:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lmudny40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:31:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=995ed68f-0189-47b0-b060-6b738468c986,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.580 243456 DEBUG nova.network.os_vif_util [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.581 243456 DEBUG nova.network.os_vif_util [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.581 243456 DEBUG os_vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.583 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09f54242-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.591 243456 INFO os_vif [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:3a:58,bridge_name='br-int',has_traffic_filtering=True,id=09f54242-3301-4e2f-b606-8423be606192,network=Network(11c8a335-c267-4faf-a7d1-f407690da05d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09f54242-33')
Feb 28 10:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c-userdata-shm.mount: Deactivated successfully.
Feb 28 10:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8bb818805ca7bc9b5cb8f2064ec0e8cd84927ba5cb46b2bac1c983c996bfba5-merged.mount: Deactivated successfully.
Feb 28 10:33:14 compute-0 podman[354057]: 2026-02-28 10:33:14.649019203 +0000 UTC m=+0.130250431 container cleanup 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true)
Feb 28 10:33:14 compute-0 systemd[1]: libpod-conmon-8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c.scope: Deactivated successfully.
Feb 28 10:33:14 compute-0 podman[354113]: 2026-02-28 10:33:14.726621281 +0000 UTC m=+0.050329217 container remove 8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.734 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2966625a-cda9-459d-98d7-016a34e4a2bd]: (4, ('Sat Feb 28 10:33:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d (8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c)\n8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c\nSat Feb 28 10:33:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d (8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c)\n8c7382fa31332234fb1f9f81e3aaea64c7a282aa5809375871491f589cce796c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.738 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d72836-54a3-4ee8-9a6e-27c00c53a1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.739 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11c8a335-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:14 compute-0 kernel: tap11c8a335-c0: left promiscuous mode
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38f0cbc7-db3a-4356-8892-89be305eba6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.764 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG oslo_concurrency.lockutils [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:14 compute-0 nova_compute[243452]: 2026-02-28 10:33:14.765 243456 DEBUG nova.compute.manager [req-af6ef85d-4bf8-47e7-98d2-32a4d125ffff req-e7e62f8e-85e8-4561-8dc3-21fa4865adda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-unplugged-09f54242-3301-4e2f-b606-8423be606192 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3de8e0-32d0-4f03-9f66-535c3c77014c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f35c95a-296d-4953-a33e-d33e347b192a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f22f72c-f078-49e0-aade-da36af1d5455]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621950, 'reachable_time': 30240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354129, 'error': None, 'target': 'ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d11c8a335\x2dc267\x2d4faf\x2da7d1\x2df407690da05d.mount: Deactivated successfully.
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.803 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-11c8a335-c267-4faf-a7d1-f407690da05d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:33:14 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:14.803 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a0453582-9132-4ccc-b50e-c3b9d41194e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.048 243456 INFO nova.virt.libvirt.driver [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deleting instance files /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986_del
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.050 243456 INFO nova.virt.libvirt.driver [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deletion of /var/lib/nova/instances/995ed68f-0189-47b0-b060-6b738468c986_del complete
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.113 243456 INFO nova.compute.manager [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 0.81 seconds to destroy the instance on the hypervisor.
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG oslo.service.loopingcall [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.114 243456 DEBUG nova.network.neutron [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.811 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updated VIF entry in instance network info cache for port 09f54242-3301-4e2f-b606-8423be606192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.812 243456 DEBUG nova.network.neutron [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [{"id": "09f54242-3301-4e2f-b606-8423be606192", "address": "fa:16:3e:3a:3a:58", "network": {"id": "11c8a335-c267-4faf-a7d1-f407690da05d", "bridge": "br-int", "label": "tempest-network-smoke--2063408073", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09f54242-33", "ovs_interfaceid": "09f54242-3301-4e2f-b606-8423be606192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:15 compute-0 nova_compute[243452]: 2026-02-28 10:33:15.838 243456 DEBUG oslo_concurrency.lockutils [req-17e0e765-06d2-472c-a18e-e322c3e690a6 req-a9c8a9f9-f00e-4424-bd4d-bfaa7f4ea569 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-995ed68f-0189-47b0-b060-6b738468c986" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:15 compute-0 ceph-mon[76304]: pgmap v2067: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.159 243456 DEBUG nova.network.neutron [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.178 243456 INFO nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Took 1.06 seconds to deallocate network for instance.
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.233 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.234 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.332 243456 DEBUG oslo_concurrency.processutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 287 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 102 op/s
Feb 28 10:33:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995444784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.910 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.911 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "995ed68f-0189-47b0-b060-6b738468c986-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.912 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.912 243456 DEBUG oslo_concurrency.lockutils [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.913 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] No waiting events found dispatching network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.913 243456 WARNING nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received unexpected event network-vif-plugged-09f54242-3301-4e2f-b606-8423be606192 for instance with vm_state deleted and task_state None.
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.914 243456 DEBUG nova.compute.manager [req-3a334efa-c81a-4a59-927e-04b4fce2f7ef req-2398b378-ed25-44f8-80cf-62646d80ac7d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Received event network-vif-deleted-09f54242-3301-4e2f-b606-8423be606192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.927 243456 DEBUG oslo_concurrency.processutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.935 243456 DEBUG nova.compute.provider_tree [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.955 243456 DEBUG nova.scheduler.client.report [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:16 compute-0 nova_compute[243452]: 2026-02-28 10:33:16.984 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.011 243456 INFO nova.scheduler.client.report [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 995ed68f-0189-47b0-b060-6b738468c986
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.082 243456 DEBUG oslo_concurrency.lockutils [None req-d29e5a02-f97d-4c20-aa5a-4e8fe8766c53 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "995ed68f-0189-47b0-b060-6b738468c986" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.731 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.731 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.764 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.851 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.853 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.863 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:33:17 compute-0 nova_compute[243452]: 2026-02-28 10:33:17.863 243456 INFO nova.compute.claims [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:33:17 compute-0 ceph-mon[76304]: pgmap v2068: 305 pgs: 305 active+clean; 287 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 102 op/s
Feb 28 10:33:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1995444784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.040 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837150722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.653 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.663 243456 DEBUG nova.compute.provider_tree [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.728 243456 DEBUG nova.scheduler.client.report [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 267 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.786 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.787 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.845 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.845 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:33:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.873 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:33:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1837150722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:18 compute-0 nova_compute[243452]: 2026-02-28 10:33:18.900 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.020 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.022 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.022 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating image(s)
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.047 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.073 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.098 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.103 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.145 243456 DEBUG nova.policy [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.188 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.189 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.213 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.218 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.497 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.576 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.665 243456 DEBUG nova.objects.instance [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.692 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.692 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Ensure instance console log exists: /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:19 compute-0 nova_compute[243452]: 2026-02-28 10:33:19.693 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:19 compute-0 ceph-mon[76304]: pgmap v2069: 305 pgs: 305 active+clean; 267 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 10:33:20 compute-0 nova_compute[243452]: 2026-02-28 10:33:20.687 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274785.6856453, 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:20 compute-0 nova_compute[243452]: 2026-02-28 10:33:20.688 243456 INFO nova.compute.manager [-] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] VM Stopped (Lifecycle Event)
Feb 28 10:33:20 compute-0 nova_compute[243452]: 2026-02-28 10:33:20.727 243456 DEBUG nova.compute.manager [None req-105bb2c7-aa48-492b-bd4b-282072091f70 - - - - - -] [instance: 8b9b49c3-0fed-44c9-9fbd-cd9e592ee4bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 260 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Feb 28 10:33:21 compute-0 ceph-mon[76304]: pgmap v2070: 305 pgs: 305 active+clean; 260 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Feb 28 10:33:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:23 compute-0 ovn_controller[146846]: 2026-02-28T10:33:23Z|01264|binding|INFO|Releasing lport 45460f81-310d-424c-88ee-85e2ae1b7444 from this chassis (sb_readonly=0)
Feb 28 10:33:23 compute-0 nova_compute[243452]: 2026-02-28 10:33:23.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:23 compute-0 nova_compute[243452]: 2026-02-28 10:33:23.804 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Successfully created port: 22759577-b4b2-4051-bacd-740bbdfcc4b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:33:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:23 compute-0 ceph-mon[76304]: pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:24 compute-0 nova_compute[243452]: 2026-02-28 10:33:24.190 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:24.190 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:24.191 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:33:24 compute-0 nova_compute[243452]: 2026-02-28 10:33:24.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:24 compute-0 sudo[354343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:33:24 compute-0 sudo[354343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:24 compute-0 sudo[354343]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:24 compute-0 nova_compute[243452]: 2026-02-28 10:33:24.607 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:24 compute-0 sudo[354368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:33:24 compute-0 sudo[354368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:24 compute-0 nova_compute[243452]: 2026-02-28 10:33:24.921 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Successfully updated port: 22759577-b4b2-4051-bacd-740bbdfcc4b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.036 243456 DEBUG nova.compute.manager [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.037 243456 DEBUG nova.compute.manager [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.038 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.038 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.039 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.042 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:25 compute-0 sudo[354368]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:33:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:33:25 compute-0 sudo[354424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:33:25 compute-0 sudo[354424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:25 compute-0 sudo[354424]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:25 compute-0 sudo[354449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:33:25 compute-0 sudo[354449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.725972739 +0000 UTC m=+0.068538772 container create 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:33:25 compute-0 systemd[1]: Started libpod-conmon-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope.
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.701379563 +0000 UTC m=+0.043945666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.821108114 +0000 UTC m=+0.163674157 container init 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.82980751 +0000 UTC m=+0.172373543 container start 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.833751862 +0000 UTC m=+0.176317925 container attach 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:33:25 compute-0 frosty_torvalds[354504]: 167 167
Feb 28 10:33:25 compute-0 systemd[1]: libpod-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope: Deactivated successfully.
Feb 28 10:33:25 compute-0 conmon[354504]: conmon 638857c5a83e1c43585a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope/container/memory.events
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.8390022 +0000 UTC m=+0.181568223 container died 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d9f64a7ec5ef7638d93e9e8469255ec11a831a2564f58afee95fb5190d2ce4-merged.mount: Deactivated successfully.
Feb 28 10:33:25 compute-0 nova_compute[243452]: 2026-02-28 10:33:25.866 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:33:25 compute-0 podman[354488]: 2026-02-28 10:33:25.882964816 +0000 UTC m=+0.225530849 container remove 638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:33:25 compute-0 systemd[1]: libpod-conmon-638857c5a83e1c43585a2a740c34c6a86201cf8b9ef3404783895fa175e3d354.scope: Deactivated successfully.
Feb 28 10:33:25 compute-0 ceph-mon[76304]: pgmap v2072: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:33:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.027372796 +0000 UTC m=+0.049869813 container create 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:33:26 compute-0 systemd[1]: Started libpod-conmon-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope.
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.005805895 +0000 UTC m=+0.028302942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.136981671 +0000 UTC m=+0.159478708 container init 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.14929528 +0000 UTC m=+0.171792277 container start 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.153712305 +0000 UTC m=+0.176209392 container attach 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:33:26 compute-0 nova_compute[243452]: 2026-02-28 10:33:26.315 243456 DEBUG nova.network.neutron [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:26 compute-0 nova_compute[243452]: 2026-02-28 10:33:26.334 243456 DEBUG oslo_concurrency.lockutils [req-96d8585c-d2e1-45e2-bf03-eca48d814b80 req-0ac4f346-4b3d-4216-8fca-67d16ebf587b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:26 compute-0 nova_compute[243452]: 2026-02-28 10:33:26.335 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:26 compute-0 nova_compute[243452]: 2026-02-28 10:33:26.335 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:33:26 compute-0 nova_compute[243452]: 2026-02-28 10:33:26.484 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:33:26 compute-0 adoring_lehmann[354543]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:33:26 compute-0 adoring_lehmann[354543]: --> All data devices are unavailable
Feb 28 10:33:26 compute-0 systemd[1]: libpod-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope: Deactivated successfully.
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.713696598 +0000 UTC m=+0.736193635 container died 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-551933fec683181af7988ba1011e629b9d01d76b7fbe38913fd42ddd29d922b6-merged.mount: Deactivated successfully.
Feb 28 10:33:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:26 compute-0 podman[354527]: 2026-02-28 10:33:26.770936289 +0000 UTC m=+0.793433326 container remove 9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:33:26 compute-0 systemd[1]: libpod-conmon-9929b20e0d679449c1722c6bb5985845a682dbe4135c5c7114892a8bd033d584.scope: Deactivated successfully.
Feb 28 10:33:26 compute-0 sudo[354449]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:26 compute-0 sudo[354577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:33:26 compute-0 sudo[354577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:26 compute-0 sudo[354577]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:26 compute-0 sudo[354602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:33:26 compute-0 sudo[354602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.283513539 +0000 UTC m=+0.062957274 container create dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:33:27 compute-0 systemd[1]: Started libpod-conmon-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope.
Feb 28 10:33:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.262328279 +0000 UTC m=+0.041772004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.372665965 +0000 UTC m=+0.152109740 container init dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.379872609 +0000 UTC m=+0.159316314 container start dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:33:27 compute-0 zen_agnesi[354654]: 167 167
Feb 28 10:33:27 compute-0 systemd[1]: libpod-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope: Deactivated successfully.
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.38485987 +0000 UTC m=+0.164303575 container attach dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.385497368 +0000 UTC m=+0.164941093 container died dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e17fb3de6d6b88850aa2ae2d57f070972a5c5454498ed2491b43250982aae8f-merged.mount: Deactivated successfully.
Feb 28 10:33:27 compute-0 podman[354638]: 2026-02-28 10:33:27.428520017 +0000 UTC m=+0.207963742 container remove dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:33:27 compute-0 systemd[1]: libpod-conmon-dfc03d60cc45e9c97dccabd40a75a503840effaab573e6042445fe4ef7f117fb.scope: Deactivated successfully.
Feb 28 10:33:27 compute-0 podman[354681]: 2026-02-28 10:33:27.633107192 +0000 UTC m=+0.063850740 container create 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:33:27 compute-0 systemd[1]: Started libpod-conmon-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope.
Feb 28 10:33:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:27 compute-0 podman[354681]: 2026-02-28 10:33:27.604834211 +0000 UTC m=+0.035577779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:27 compute-0 podman[354681]: 2026-02-28 10:33:27.725635772 +0000 UTC m=+0.156379360 container init 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:33:27 compute-0 podman[354681]: 2026-02-28 10:33:27.735583594 +0000 UTC m=+0.166327152 container start 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:33:27 compute-0 podman[354681]: 2026-02-28 10:33:27.740246156 +0000 UTC m=+0.170989764 container attach 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:33:27 compute-0 ceph-mon[76304]: pgmap v2073: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]: {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     "0": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "devices": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "/dev/loop3"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             ],
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_name": "ceph_lv0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_size": "21470642176",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "name": "ceph_lv0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "tags": {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_name": "ceph",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.crush_device_class": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.encrypted": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.objectstore": "bluestore",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_id": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.vdo": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.with_tpm": "0"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             },
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "vg_name": "ceph_vg0"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         }
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     ],
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     "1": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "devices": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "/dev/loop4"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             ],
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_name": "ceph_lv1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_size": "21470642176",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "name": "ceph_lv1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "tags": {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_name": "ceph",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.crush_device_class": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.encrypted": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.objectstore": "bluestore",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_id": "1",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.vdo": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.with_tpm": "0"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             },
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "vg_name": "ceph_vg1"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         }
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     ],
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     "2": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "devices": [
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "/dev/loop5"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             ],
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_name": "ceph_lv2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_size": "21470642176",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "name": "ceph_lv2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "tags": {
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.cluster_name": "ceph",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.crush_device_class": "",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.encrypted": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.objectstore": "bluestore",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osd_id": "2",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.vdo": "0",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:                 "ceph.with_tpm": "0"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             },
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "type": "block",
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:             "vg_name": "ceph_vg2"
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:         }
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]:     ]
Feb 28 10:33:28 compute-0 quizzical_hellman[354698]: }
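(Editor's note, not part of the log.) The JSON above is the stdout of `ceph-volume ... lvm list --format json`, emitted line-by-line through the `quizzical_hellman` container: a map of OSD id to the logical volume(s) backing it. A minimal sketch of consuming that structure — with the payload reassembled from the log lines and trimmed to the fields used, and the helper name `osd_block_devices` being this note's own invention, not a ceph-volume API — might look like:

```python
import json

# Payload reassembled from the container's log lines above, trimmed to the
# fields this sketch uses (values copied verbatim from the log).
lvm_list = json.loads("""
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
                  "ceph.type": "block"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
                  "ceph.type": "block"}}],
  "2": [{"devices": ["/dev/loop5"], "lv_path": "/dev/ceph_vg2/ceph_lv2",
         "tags": {"ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
                  "ceph.type": "block"}}]
}
""")

def osd_block_devices(listing):
    """Map each OSD id to the 'block' LV path and its physical devices."""
    out = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                out[osd_id] = {"lv_path": lv["lv_path"],
                               "devices": lv["devices"]}
    return out

print(osd_block_devices(lvm_list))
```

This is how cephadm-style tooling can tell, per host, which loop/disk device each OSD actually sits on — consistent with the earlier `adoring_lehmann` output ("passed data devices: 0 physical, 3 LVM"): all three OSDs here live on pre-created LVs over loop devices rather than raw disks.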
Feb 28 10:33:28 compute-0 systemd[1]: libpod-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope: Deactivated successfully.
Feb 28 10:33:28 compute-0 podman[354681]: 2026-02-28 10:33:28.075229705 +0000 UTC m=+0.505973283 container died 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee57a45e46e9f35e9a55da99e5f5bb4bfa659336eee81c11393cf0d2e21c72e-merged.mount: Deactivated successfully.
Feb 28 10:33:28 compute-0 podman[354681]: 2026-02-28 10:33:28.131601022 +0000 UTC m=+0.562344580 container remove 0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hellman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:33:28 compute-0 systemd[1]: libpod-conmon-0a89d0a46ff60acc35a97c50d2097834df3a10774bf87b6878c88b111a9d75d3.scope: Deactivated successfully.
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:28 compute-0 sudo[354602]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:28 compute-0 sudo[354718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:33:28 compute-0 sudo[354718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:28 compute-0 sudo[354718]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:28 compute-0 sudo[354743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:33:28 compute-0 sudo[354743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.539 243456 DEBUG nova.network.neutron [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.579 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.580 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance network_info: |[{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.585 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start _get_guest_xml network_info=[{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.594 243456 WARNING nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.602 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.604 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.610 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.611 243456 DEBUG nova.virt.libvirt.host [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.612 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.612 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.613 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.614 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.615 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.615 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.616 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.616 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.617 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.618 243456 DEBUG nova.virt.hardware [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:33:28 compute-0 nova_compute[243452]: 2026-02-28 10:33:28.623 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.637629487 +0000 UTC m=+0.052883299 container create 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:33:28 compute-0 systemd[1]: Started libpod-conmon-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope.
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.60949547 +0000 UTC m=+0.024749352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.737806554 +0000 UTC m=+0.153060416 container init 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.743367902 +0000 UTC m=+0.158621674 container start 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:33:28 compute-0 gracious_hermann[354798]: 167 167
Feb 28 10:33:28 compute-0 systemd[1]: libpod-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope: Deactivated successfully.
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.748712363 +0000 UTC m=+0.163966195 container attach 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:33:28 compute-0 conmon[354798]: conmon 23295b076556bbd44c56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope/container/memory.events
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.749448854 +0000 UTC m=+0.164702626 container died 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:33:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Feb 28 10:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5c16c639e2a1c79793abec9675afe4698be30853a91e54849e1e2e93fc91b0c-merged.mount: Deactivated successfully.
Feb 28 10:33:28 compute-0 podman[354781]: 2026-02-28 10:33:28.804833623 +0000 UTC m=+0.220087405 container remove 23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:33:28 compute-0 systemd[1]: libpod-conmon-23295b076556bbd44c5642c50a155223b9ce8062dc57b1a4c512953b9208f847.scope: Deactivated successfully.
Feb 28 10:33:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:28 compute-0 podman[354841]: 2026-02-28 10:33:28.973147521 +0000 UTC m=+0.052059826 container create 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:33:29 compute-0 systemd[1]: Started libpod-conmon-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope.
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:28.9480504 +0000 UTC m=+0.026962725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:33:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:29.076135138 +0000 UTC m=+0.155047523 container init 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:29.083423895 +0000 UTC m=+0.162336240 container start 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:29.086916964 +0000 UTC m=+0.165829359 container attach 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:33:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:33:29
Feb 28 10:33:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:33:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:33:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log']
Feb 28 10:33:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:33:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:33:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648422724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.184 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.217 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.224 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.542 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274794.5411866, 995ed68f-0189-47b0-b060-6b738468c986 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.543 243456 INFO nova.compute.manager [-] [instance: 995ed68f-0189-47b0-b060-6b738468c986] VM Stopped (Lifecycle Event)
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.570 243456 DEBUG nova.compute.manager [None req-9eda41a0-f359-4023-aa0c-e74067ce366d - - - - - -] [instance: 995ed68f-0189-47b0-b060-6b738468c986] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:33:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031951798' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.743 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.745 243456 DEBUG nova.virt.libvirt.vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.746 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.747 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.748 243456 DEBUG nova.objects.instance [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:29 compute-0 lvm[354981]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:33:29 compute-0 lvm[354981]: VG ceph_vg1 finished
Feb 28 10:33:29 compute-0 lvm[354980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:33:29 compute-0 lvm[354980]: VG ceph_vg0 finished
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.764 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <uuid>f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</uuid>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <name>instance-0000007e</name>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-994916245</nova:name>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:33:28</nova:creationTime>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <nova:port uuid="22759577-b4b2-4051-bacd-740bbdfcc4b4">
Feb 28 10:33:29 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6e:a394" ipVersion="6"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <system>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="serial">f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="uuid">f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </system>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <os>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </os>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <features>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </features>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk">
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config">
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </source>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:33:29 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:6e:a3:94"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <target dev="tap22759577-b4"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/console.log" append="off"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <video>
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </video>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:33:29 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:33:29 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:33:29 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:33:29 compute-0 nova_compute[243452]: </domain>
Feb 28 10:33:29 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Preparing to wait for external event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.765 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.766 243456 DEBUG nova.virt.libvirt.vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.766 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.767 243456 DEBUG nova.network.os_vif_util [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.767 243456 DEBUG os_vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.768 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.768 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.769 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22759577-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.773 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22759577-b4, col_values=(('external_ids', {'iface-id': '22759577-b4b2-4051-bacd-740bbdfcc4b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:a3:94', 'vm-uuid': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 NetworkManager[49805]: <info>  [1772274809.7768] manager: (tap22759577-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Feb 28 10:33:29 compute-0 lvm[354983]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:33:29 compute-0 lvm[354983]: VG ceph_vg2 finished
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.784 243456 INFO os_vif [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4')
Feb 28 10:33:29 compute-0 lvm[354986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:33:29 compute-0 lvm[354986]: VG ceph_vg0 finished
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.845 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.846 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.847 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:6e:a3:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:33:29 compute-0 nice_dhawan[354858]: {}
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.847 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Using config drive
Feb 28 10:33:29 compute-0 nova_compute[243452]: 2026-02-28 10:33:29.876 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:29 compute-0 systemd[1]: libpod-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Deactivated successfully.
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:29.885903487 +0000 UTC m=+0.964815822 container died 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 28 10:33:29 compute-0 systemd[1]: libpod-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Consumed 1.255s CPU time.
Feb 28 10:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d875306b098d36a95503a0aa27672366e653242c2297f572492deff66e2f5b9-merged.mount: Deactivated successfully.
Feb 28 10:33:29 compute-0 podman[354841]: 2026-02-28 10:33:29.938541548 +0000 UTC m=+1.017453863 container remove 9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_dhawan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:33:29 compute-0 ceph-mon[76304]: pgmap v2074: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Feb 28 10:33:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2648422724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2031951798' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:29 compute-0 systemd[1]: libpod-conmon-9c5d8573d8321b5411f835523bc087478e22b4551456f943c30744b6e88451ae.scope: Deactivated successfully.
Feb 28 10:33:30 compute-0 sudo[354743]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:33:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:33:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:30 compute-0 sudo[355022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:33:30 compute-0 sudo[355022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:33:30 compute-0 sudo[355022]: pam_unix(sudo:session): session closed for user root
Feb 28 10:33:30 compute-0 nova_compute[243452]: 2026-02-28 10:33:30.281 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Creating config drive at /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config
Feb 28 10:33:30 compute-0 nova_compute[243452]: 2026-02-28 10:33:30.291 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf9eqbc1b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:33:30 compute-0 nova_compute[243452]: 2026-02-28 10:33:30.438 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf9eqbc1b" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:30 compute-0 nova_compute[243452]: 2026-02-28 10:33:30.478 243456 DEBUG nova.storage.rbd_utils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:30 compute-0 nova_compute[243452]: 2026-02-28 10:33:30.484 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:33:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:33:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.104 243456 DEBUG oslo_concurrency.processutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.105 243456 INFO nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deleting local config drive /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989/disk.config because it was imported into RBD.
Feb 28 10:33:31 compute-0 kernel: tap22759577-b4: entered promiscuous mode
Feb 28 10:33:31 compute-0 NetworkManager[49805]: <info>  [1772274811.1911] manager: (tap22759577-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Feb 28 10:33:31 compute-0 ovn_controller[146846]: 2026-02-28T10:33:31Z|01265|binding|INFO|Claiming lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 for this chassis.
Feb 28 10:33:31 compute-0 ovn_controller[146846]: 2026-02-28T10:33:31Z|01266|binding|INFO|22759577-b4b2-4051-bacd-740bbdfcc4b4: Claiming fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:31 compute-0 systemd-udevd[354982]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.203 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], port_security=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe6e:a394/64', 'neutron:device_id': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=22759577-b4b2-4051-bacd-740bbdfcc4b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:31 compute-0 ovn_controller[146846]: 2026-02-28T10:33:31Z|01267|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 ovn-installed in OVS
Feb 28 10:33:31 compute-0 ovn_controller[146846]: 2026-02-28T10:33:31Z|01268|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 up in Southbound
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.205 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 22759577-b4b2-4051-bacd-740bbdfcc4b4 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 bound to our chassis
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.207 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:31 compute-0 NetworkManager[49805]: <info>  [1772274811.2179] device (tap22759577-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:33:31 compute-0 NetworkManager[49805]: <info>  [1772274811.2187] device (tap22759577-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c94e7037-fa80-4095-9b3c-da888f13eabc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 systemd-machined[209480]: New machine qemu-159-instance-0000007e.
Feb 28 10:33:31 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.258 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6e339f-d6e9-49fd-904c-fd73d2708c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.262 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b477214d-0c0a-4506-97ed-16a962e909cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.291 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[68bd3a2e-e78b-4edb-94cd-8e9139752c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.310 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61c0ef5a-bbc2-4d88-9bc6-0b6131891f38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355112, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eac6621a-bf68-4d32-af6c-b6f7bce957dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628080, 'tstamp': 628080}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355113, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628083, 'tstamp': 628083}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355113, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.337 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.340 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.341 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:31.341 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.593 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.5929613, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.593 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Started (Lifecycle Event)
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.628 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.633 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.5931377, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.633 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Paused (Lifecycle Event)
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.655 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.659 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.685 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:33:31 compute-0 sshd-session[354974]: Received disconnect from 103.67.78.202 port 55250:11: Bye Bye [preauth]
Feb 28 10:33:31 compute-0 sshd-session[354974]: Disconnected from authenticating user root 103.67.78.202 port 55250 [preauth]
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.936 243456 DEBUG nova.compute.manager [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.937 243456 DEBUG oslo_concurrency.lockutils [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.938 243456 DEBUG nova.compute.manager [req-de3e79e6-058d-4199-8baa-463d8c0cccc3 req-f4f74660-0d0b-47c1-a975-e989bb2273a6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Processing event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.938 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.942 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274811.9419978, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Resumed (Lifecycle Event)
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.946 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.951 243456 INFO nova.virt.libvirt.driver [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance spawned successfully.
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.951 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.965 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.976 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.976 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.977 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.978 243456 DEBUG nova.virt.libvirt.driver [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:31 compute-0 nova_compute[243452]: 2026-02-28 10:33:31.985 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:33:32 compute-0 nova_compute[243452]: 2026-02-28 10:33:32.046 243456 INFO nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 13.02 seconds to spawn the instance on the hypervisor.
Feb 28 10:33:32 compute-0 nova_compute[243452]: 2026-02-28 10:33:32.046 243456 DEBUG nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:32 compute-0 ceph-mon[76304]: pgmap v2075: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 28 10:33:32 compute-0 nova_compute[243452]: 2026-02-28 10:33:32.110 243456 INFO nova.compute.manager [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 14.29 seconds to build instance.
Feb 28 10:33:32 compute-0 nova_compute[243452]: 2026-02-28 10:33:32.126 243456 DEBUG oslo_concurrency.lockutils [None req-afdda6f7-6d78-40c0-9fce-940ed6a55d2a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:32 compute-0 nova_compute[243452]: 2026-02-28 10:33:32.192 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:32.194 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 991 KiB/s wr, 4 op/s
Feb 28 10:33:33 compute-0 sshd-session[355129]: Received disconnect from 103.67.78.202 port 47616:11: Bye Bye [preauth]
Feb 28 10:33:33 compute-0 sshd-session[355129]: Disconnected from authenticating user root 103.67.78.202 port 47616 [preauth]
Feb 28 10:33:33 compute-0 nova_compute[243452]: 2026-02-28 10:33:33.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.034 243456 DEBUG nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG oslo_concurrency.lockutils [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.035 243456 DEBUG nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.036 243456 WARNING nova.compute.manager [req-5c7c8f8e-1297-40cb-a08a-a51a4b008088 req-f71b16ec-aa96-464a-b7a8-2101a7116871 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received unexpected event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with vm_state active and task_state None.
Feb 28 10:33:34 compute-0 ceph-mon[76304]: pgmap v2076: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 991 KiB/s wr, 4 op/s
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.322 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 12 KiB/s wr, 16 op/s
Feb 28 10:33:34 compute-0 nova_compute[243452]: 2026-02-28 10:33:34.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:35 compute-0 nova_compute[243452]: 2026-02-28 10:33:35.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:35 compute-0 podman[355159]: 2026-02-28 10:33:35.152499983 +0000 UTC m=+0.078253577 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 10:33:35 compute-0 podman[355158]: 2026-02-28 10:33:35.190549641 +0000 UTC m=+0.121825402 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:33:36 compute-0 ceph-mon[76304]: pgmap v2077: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 12 KiB/s wr, 16 op/s
Feb 28 10:33:36 compute-0 nova_compute[243452]: 2026-02-28 10:33:36.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.114 243456 DEBUG nova.compute.manager [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.115 243456 DEBUG nova.compute.manager [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.115 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.116 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.116 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:37 compute-0 nova_compute[243452]: 2026-02-28 10:33:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:38 compute-0 ceph-mon[76304]: pgmap v2078: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 28 10:33:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:33:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.172 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updated VIF entry in instance network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.173 243456 DEBUG nova.network.neutron [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.197 243456 DEBUG oslo_concurrency.lockutils [req-2973e6f5-2274-42b0-aae7-86f5645d8213 req-d3adf2b9-c518-483e-a539-5ae09f142fc4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:33:39 compute-0 nova_compute[243452]: 2026-02-28 10:33:39.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:40 compute-0 ceph-mon[76304]: pgmap v2079: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:33:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215138390341052 of space, bias 1.0, pg target 0.3364541517102316 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937829631624833 of space, bias 1.0, pg target 0.748134888948745 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.354693851519385e-07 of space, bias 4.0, pg target 0.0008825632621823263 quantized to 16 (current 16)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:33:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.809 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.809 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.844 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.946 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.947 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.955 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:33:41 compute-0 nova_compute[243452]: 2026-02-28 10:33:41.956 243456 INFO nova.compute.claims [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.115 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:42 compute-0 ceph-mon[76304]: pgmap v2080: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 10:33:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000893372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.698 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.705 243456 DEBUG nova.compute.provider_tree [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.730 243456 DEBUG nova.scheduler.client.report [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.759 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.761 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:33:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 279 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 313 KiB/s wr, 81 op/s
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.843 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.844 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.873 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:33:42 compute-0 nova_compute[243452]: 2026-02-28 10:33:42.910 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.010 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.012 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.013 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating image(s)
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.039 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.066 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.089 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.093 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1000893372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.138 243456 DEBUG nova.policy [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.182 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.187 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.188 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.189 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.220 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.224 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.460 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.540 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.540 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.541 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.542 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.553 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.650 243456 DEBUG nova.objects.instance [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.673 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.673 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Ensure instance console log exists: /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:43 compute-0 nova_compute[243452]: 2026-02-28 10:33:43.674 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:44 compute-0 ceph-mon[76304]: pgmap v2081: 305 pgs: 305 active+clean; 279 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 313 KiB/s wr, 81 op/s
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.149 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully created port: 37a6ff99-c79f-4d1f-8384-b2117545bacf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:44 compute-0 ovn_controller[146846]: 2026-02-28T10:33:44Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:a3:94 10.100.0.14
Feb 28 10:33:44 compute-0 ovn_controller[146846]: 2026-02-28T10:33:44Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:a3:94 10.100.0.14
Feb 28 10:33:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 313 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.864 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully updated port: 37a6ff99-c79f-4d1f-8384-b2117545bacf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.885 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.885 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.886 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.957 243456 DEBUG nova.compute.manager [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.957 243456 DEBUG nova.compute.manager [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:44 compute-0 nova_compute[243452]: 2026-02-28 10:33:44.958 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.052 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.319 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.371 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.372 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.373 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.373 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.374 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:33:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:33:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:33:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:33:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939079276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:45 compute-0 nova_compute[243452]: 2026-02-28 10:33:45.980 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.076 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.077 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:33:46 compute-0 ceph-mon[76304]: pgmap v2082: 305 pgs: 305 active+clean; 313 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 28 10:33:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:33:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2846806603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:33:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/939079276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.251 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.90014861244708GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.252 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.253 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.350 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.455 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.499 243456 DEBUG nova.network.neutron [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.527 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.528 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance network_info: |[{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.529 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.530 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.535 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start _get_guest_xml network_info=[{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.541 243456 WARNING nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.545 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.546 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.550 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.551 243456 DEBUG nova.virt.libvirt.host [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.551 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.552 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.553 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.553 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.554 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.554 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.555 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.556 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.556 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.557 243456 DEBUG nova.virt.hardware [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:33:46 compute-0 nova_compute[243452]: 2026-02-28 10:33:46.563 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Feb 28 10:33:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958435207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.059 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.065 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:33:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100237635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.090 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.095 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.123 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.129 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/958435207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3100237635' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.168 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.169 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:33:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186315616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.634 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.636 243456 DEBUG nova.virt.libvirt.vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:42Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.637 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.638 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.639 243456 DEBUG nova.objects.instance [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.662 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <name>instance-0000007f</name>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:33:46</nova:creationTime>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:33:47 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <system>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="serial">4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="uuid">4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </system>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <os>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </os>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <features>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </features>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk">
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config">
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </source>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:33:47 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e1:c3:a3"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <target dev="tap37a6ff99-c7"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log" append="off"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <video>
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </video>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:33:47 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:33:47 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:33:47 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:33:47 compute-0 nova_compute[243452]: </domain>
Feb 28 10:33:47 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Preparing to wait for external event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.663 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.664 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.664 243456 DEBUG nova.virt.libvirt.vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:33:42Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.665 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.666 243456 DEBUG nova.network.os_vif_util [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.666 243456 DEBUG os_vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.667 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.671 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37a6ff99-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.672 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37a6ff99-c7, col_values=(('external_ids', {'iface-id': '37a6ff99-c79f-4d1f-8384-b2117545bacf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:c3:a3', 'vm-uuid': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:47 compute-0 NetworkManager[49805]: <info>  [1772274827.6742] manager: (tap37a6ff99-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.678 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.679 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.679 243456 INFO os_vif [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7')
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.725 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.726 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.726 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e1:c3:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.727 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Using config drive
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.754 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.848 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.849 243456 DEBUG nova.network.neutron [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:47 compute-0 nova_compute[243452]: 2026-02-28 10:33:47.868 243456 DEBUG oslo_concurrency.lockutils [req-5df34a12-6a56-46f0-a134-9ab5e50869ef req-a173dba7-a4ca-4c13-9c9a-797748d01467 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.063 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Creating config drive at /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.067 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpx6yuhs9s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:48 compute-0 ceph-mon[76304]: pgmap v2083: 305 pgs: 305 active+clean; 340 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 134 op/s
Feb 28 10:33:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1186315616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.211 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpx6yuhs9s" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.243 243456 DEBUG nova.storage.rbd_utils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.249 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.398 243456 DEBUG oslo_concurrency.processutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config 4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.400 243456 INFO nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deleting local config drive /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/disk.config because it was imported into RBD.
Feb 28 10:33:48 compute-0 kernel: tap37a6ff99-c7: entered promiscuous mode
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.4564] manager: (tap37a6ff99-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Feb 28 10:33:48 compute-0 ovn_controller[146846]: 2026-02-28T10:33:48Z|01269|binding|INFO|Claiming lport 37a6ff99-c79f-4d1f-8384-b2117545bacf for this chassis.
Feb 28 10:33:48 compute-0 ovn_controller[146846]: 2026-02-28T10:33:48Z|01270|binding|INFO|37a6ff99-c79f-4d1f-8384-b2117545bacf: Claiming fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.458 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.469 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c3:a3 10.100.0.3'], port_security=['fa:16:3e:e1:c3:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0844b2ff-c3dd-41f7-ab33-952597a3bda8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e841af-c685-47c1-acc4-502d4238e857, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=37a6ff99-c79f-4d1f-8384-b2117545bacf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.471 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 37a6ff99-c79f-4d1f-8384-b2117545bacf in datapath 183ae61b-3b9b-4e1b-a73e-6b7a38731453 bound to our chassis
Feb 28 10:33:48 compute-0 ovn_controller[146846]: 2026-02-28T10:33:48Z|01271|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf ovn-installed in OVS
Feb 28 10:33:48 compute-0 ovn_controller[146846]: 2026-02-28T10:33:48Z|01272|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf up in Southbound
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.473 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 183ae61b-3b9b-4e1b-a73e-6b7a38731453
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5ea435-7f90-4e33-8be8-4f2cf745f98f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.489 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap183ae61b-31 in ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.491 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap183ae61b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[25cf20ad-3a1e-437b-89a2-5997daaa04ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.492 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b77b3164-4223-4c2a-b5a2-e6e97e9d65aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 systemd-machined[209480]: New machine qemu-160-instance-0000007f.
Feb 28 10:33:48 compute-0 systemd-udevd[355570]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.512 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[969550ba-1d61-40ef-aee0-26f3057c9a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.530 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfdd216-f8f4-4935-8b76-9603da146503]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.5329] device (tap37a6ff99-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.5341] device (tap37a6ff99-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.570 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f46e027c-6250-404f-9167-46de4b904704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.578 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24eaa41c-49ea-47d7-afb2-ec8f95f1bcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.5804] manager: (tap183ae61b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.617 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bd381c89-1dda-4ea7-99ca-a917b6cbcb9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.621 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a86abca6-a47b-4c31-becc-61a9ee553ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.6459] device (tap183ae61b-30): carrier: link connected
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.650 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[27701165-8e24-4fa8-b7e3-bfa81106940b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.671 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3df27f88-3f4b-4b09-9990-e03bb9d17f19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap183ae61b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:42:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633593, 'reachable_time': 41175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355601, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.688 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a12384ab-bcf2-47c7-908c-2190bcb7e1db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:4238'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633593, 'tstamp': 633593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355602, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.703 243456 DEBUG nova.compute.manager [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.704 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.704 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.705 243456 DEBUG oslo_concurrency.lockutils [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.705 243456 DEBUG nova.compute.manager [req-b5b20c20-2a73-4f99-a8f2-e6d24fb7b2a0 req-07112d7f-1a97-4762-862f-a3046f7b56e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Processing event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b12f8ce5-326b-4e38-a4bd-aeb0390bdf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap183ae61b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:42:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633593, 'reachable_time': 41175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355603, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.753 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3b710c41-b5a7-4e0f-bb28-f2011a3bfb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee527e50-ff12-46f8-87f6-0bc7aceb65ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap183ae61b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap183ae61b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:48 compute-0 NetworkManager[49805]: <info>  [1772274828.8238] manager: (tap183ae61b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Feb 28 10:33:48 compute-0 kernel: tap183ae61b-30: entered promiscuous mode
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.826 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.829 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap183ae61b-30, col_values=(('external_ids', {'iface-id': 'dbe8062b-c5f4-44f4-b690-d738c9fe51f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:48 compute-0 ovn_controller[146846]: 2026-02-28T10:33:48Z|01273|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.832 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.833 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00ac1c63-b852-4ce1-b47e-ed100fae94f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.834 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-183ae61b-3b9b-4e1b-a73e-6b7a38731453
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/183ae61b-3b9b-4e1b-a73e-6b7a38731453.pid.haproxy
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 183ae61b-3b9b-4e1b-a73e-6b7a38731453
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:33:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:48.835 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'env', 'PROCESS_TAG=haproxy-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/183ae61b-3b9b-4e1b-a73e-6b7a38731453.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:33:48 compute-0 nova_compute[243452]: 2026-02-28 10:33:48.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.151 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.153 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.150857, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.154 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Started (Lifecycle Event)
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.159 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.165 243456 INFO nova.virt.libvirt.driver [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance spawned successfully.
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.165 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.184 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.190 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.195 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.195 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.196 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.196 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.197 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.197 243456 DEBUG nova.virt.libvirt.driver [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.226 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.1525602, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.226 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Paused (Lifecycle Event)
Feb 28 10:33:49 compute-0 podman[355677]: 2026-02-28 10:33:49.253856513 +0000 UTC m=+0.070807395 container create 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.274 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.278 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274829.1585624, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.278 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Resumed (Lifecycle Event)
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.289 243456 INFO nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 6.28 seconds to spawn the instance on the hypervisor.
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.290 243456 DEBUG nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.300 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.303 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:33:49 compute-0 systemd[1]: Started libpod-conmon-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope.
Feb 28 10:33:49 compute-0 podman[355677]: 2026-02-28 10:33:49.221354953 +0000 UTC m=+0.038305885 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.331 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:33:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/519415a6e8605ec671aba53f1da4d6a90fb9d621f73f1043083370c3d20e14cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.349 243456 INFO nova.compute.manager [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 7.43 seconds to build instance.
Feb 28 10:33:49 compute-0 podman[355677]: 2026-02-28 10:33:49.353537737 +0000 UTC m=+0.170488609 container init 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:33:49 compute-0 podman[355677]: 2026-02-28 10:33:49.360500514 +0000 UTC m=+0.177451356 container start 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:33:49 compute-0 nova_compute[243452]: 2026-02-28 10:33:49.364 243456 DEBUG oslo_concurrency.lockutils [None req-5dfbadaf-d58b-494a-bcd5-7f68201993cb ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:49 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : New worker (355698) forked
Feb 28 10:33:49 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : Loading success.
Feb 28 10:33:50 compute-0 ceph-mon[76304]: pgmap v2084: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Feb 28 10:33:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.790 243456 DEBUG nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.792 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.793 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.793 243456 DEBUG oslo_concurrency.lockutils [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.794 243456 DEBUG nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:50 compute-0 nova_compute[243452]: 2026-02-28 10:33:50.795 243456 WARNING nova.compute.manager [req-c849b18c-71d3-487a-ab1a-490e488f1fc8 req-75566613-af99-4de0-ab1a-d4701fbaebc7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with vm_state active and task_state None.
Feb 28 10:33:52 compute-0 ceph-mon[76304]: pgmap v2085: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Feb 28 10:33:52 compute-0 nova_compute[243452]: 2026-02-28 10:33:52.674 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Feb 28 10:33:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:54 compute-0 ceph-mon[76304]: pgmap v2086: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.733 243456 DEBUG nova.compute.manager [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.733 243456 DEBUG nova.compute.manager [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing instance network info cache due to event network-changed-22759577-b4b2-4051-bacd-740bbdfcc4b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.734 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Refreshing network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.805 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.805 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.806 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.807 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.807 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.810 243456 INFO nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Terminating instance
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.811 243456 DEBUG nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:33:54 compute-0 kernel: tap22759577-b4 (unregistering): left promiscuous mode
Feb 28 10:33:54 compute-0 NetworkManager[49805]: <info>  [1772274834.8624] device (tap22759577-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:33:54 compute-0 ovn_controller[146846]: 2026-02-28T10:33:54Z|01274|binding|INFO|Releasing lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 from this chassis (sb_readonly=0)
Feb 28 10:33:54 compute-0 ovn_controller[146846]: 2026-02-28T10:33:54Z|01275|binding|INFO|Setting lport 22759577-b4b2-4051-bacd-740bbdfcc4b4 down in Southbound
Feb 28 10:33:54 compute-0 ovn_controller[146846]: 2026-02-28T10:33:54Z|01276|binding|INFO|Removing iface tap22759577-b4 ovn-installed in OVS
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.888 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], port_security=['fa:16:3e:6e:a3:94 10.100.0.14 2001:db8::f816:3eff:fe6e:a394'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe6e:a394/64', 'neutron:device_id': 'f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=22759577-b4b2-4051-bacd-740bbdfcc4b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.889 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 22759577-b4b2-4051-bacd-740bbdfcc4b4 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 unbound from our chassis
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.890 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c662211d-11bd-4aa5-95d2-794ccdac29d7
Feb 28 10:33:54 compute-0 nova_compute[243452]: 2026-02-28 10:33:54.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.911 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ef57a7d4-1dff-4e1d-89ef-255053ab7aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:54 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Feb 28 10:33:54 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 12.455s CPU time.
Feb 28 10:33:54 compute-0 systemd-machined[209480]: Machine qemu-159-instance-0000007e terminated.
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.944 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[45ffcba4-137e-46e7-81c6-d5f94fa79145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.948 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[764b8bea-66c2-41e4-84b3-8b3d60a8ccf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.976 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[90919e9c-ab63-4796-911b-332861fa117d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:54.996 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c37c585f-6bc4-4dd9-b0a4-3f6b03fbb2e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc662211d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f5:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628072, 'reachable_time': 22047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355719, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90f83873-32e4-4aac-8bb4-67549bef1318]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628080, 'tstamp': 628080}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355720, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc662211d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628083, 'tstamp': 628083}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355720, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.028 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.041 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc662211d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.042 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.043 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc662211d-10, col_values=(('external_ids', {'iface-id': '45460f81-310d-424c-88ee-85e2ae1b7444'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:55.044 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.063 243456 INFO nova.virt.libvirt.driver [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Instance destroyed successfully.
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.063 243456 DEBUG nova.objects.instance [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.076 243456 DEBUG nova.virt.libvirt.vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-994916245',display_name='tempest-TestGettingAddress-server-994916245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-994916245',id=126,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6ygaahaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:32Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.077 243456 DEBUG nova.network.os_vif_util [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.078 243456 DEBUG nova.network.os_vif_util [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.078 243456 DEBUG os_vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.080 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22759577-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.081 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.086 243456 INFO os_vif [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:a3:94,bridge_name='br-int',has_traffic_filtering=True,id=22759577-b4b2-4051-bacd-740bbdfcc4b4,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22759577-b4')
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.353 243456 INFO nova.virt.libvirt.driver [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deleting instance files /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_del
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.354 243456 INFO nova.virt.libvirt.driver [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deletion of /var/lib/nova/instances/f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989_del complete
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.418 243456 INFO nova.compute.manager [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG oslo.service.loopingcall [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:33:55 compute-0 nova_compute[243452]: 2026-02-28 10:33:55.419 243456 DEBUG nova.network.neutron [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:33:56 compute-0 ceph-mon[76304]: pgmap v2087: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.764 243456 DEBUG nova.network.neutron [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.787 243456 INFO nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Took 1.37 seconds to deallocate network for instance.
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.832 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.833 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.836 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.837 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.837 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.842 243456 DEBUG nova.compute.manager [req-186c8f72-2b3f-4af3-a906-b3b99b4bd21d req-714b3a0f-98c9-465f-98cf-bc7892b37284 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-deleted-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:56 compute-0 nova_compute[243452]: 2026-02-28 10:33:56.946 243456 DEBUG oslo_concurrency.processutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.102 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updated VIF entry in instance network info cache for port 22759577-b4b2-4051-bacd-740bbdfcc4b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.103 243456 DEBUG nova.network.neutron [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Updating instance_info_cache with network_info: [{"id": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "address": "fa:16:3e:6e:a3:94", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6e:a394", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22759577-b4", "ovs_interfaceid": "22759577-b4b2-4051-bacd-740bbdfcc4b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.134 243456 DEBUG oslo_concurrency.lockutils [req-5634bd63-c482-4e91-bdde-53b89d73d63a req-1e42aa1a-9652-47da-9b28-0845a7884b45 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:33:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666230549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.525 243456 DEBUG oslo_concurrency.processutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.532 243456 DEBUG nova.compute.provider_tree [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:33:57 compute-0 ceph-mon[76304]: pgmap v2088: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 28 10:33:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3666230549' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.832 243456 DEBUG nova.scheduler.client.report [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.857 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.873 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.886 243456 INFO nova.scheduler.client.report [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989
Feb 28 10:33:57 compute-0 nova_compute[243452]: 2026-02-28 10:33:57.958 243456 DEBUG oslo_concurrency.lockutils [None req-b582a8b3-c6cd-4cc5-8b2e-412d468d043e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.758 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.759 243456 DEBUG nova.network.neutron [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.782 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.782 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.783 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-unplugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 300 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 517 KiB/s wr, 104 op/s
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.784 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 DEBUG oslo_concurrency.lockutils [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 DEBUG nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] No waiting events found dispatching network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:33:58 compute-0 nova_compute[243452]: 2026-02-28 10:33:58.785 243456 WARNING nova.compute.manager [req-aaff7416-baf8-4ceb-8954-6c63384e687c req-aa979186-2895-4bb3-a582-5af6bdd40cda 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Received unexpected event network-vif-plugged-22759577-b4b2-4051-bacd-740bbdfcc4b4 for instance with vm_state active and task_state deleting.
Feb 28 10:33:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.789 243456 DEBUG nova.compute.manager [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.790 243456 DEBUG nova.compute.manager [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing instance network info cache due to event network-changed-b811084c-ad80-4e64-904a-d56bd59c9766. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.790 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.791 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.791 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Refreshing network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:33:59 compute-0 ceph-mon[76304]: pgmap v2089: 305 pgs: 305 active+clean; 300 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 517 KiB/s wr, 104 op/s
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.879 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.880 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.881 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.881 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.882 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.883 243456 INFO nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Terminating instance
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.885 243456 DEBUG nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:33:59 compute-0 kernel: tapb811084c-ad (unregistering): left promiscuous mode
Feb 28 10:33:59 compute-0 NetworkManager[49805]: <info>  [1772274839.9372] device (tapb811084c-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:33:59 compute-0 ovn_controller[146846]: 2026-02-28T10:33:59Z|01277|binding|INFO|Releasing lport b811084c-ad80-4e64-904a-d56bd59c9766 from this chassis (sb_readonly=0)
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.944 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:59 compute-0 ovn_controller[146846]: 2026-02-28T10:33:59Z|01278|binding|INFO|Setting lport b811084c-ad80-4e64-904a-d56bd59c9766 down in Southbound
Feb 28 10:33:59 compute-0 ovn_controller[146846]: 2026-02-28T10:33:59Z|01279|binding|INFO|Removing iface tapb811084c-ad ovn-installed in OVS
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:59 compute-0 nova_compute[243452]: 2026-02-28 10:33:59.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:33:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.957 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], port_security=['fa:16:3e:c3:c4:e4 10.100.0.12 2001:db8::f816:3eff:fec3:c4e4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fec3:c4e4/64', 'neutron:device_id': 'b07b5e80-4820-4ee8-9750-3ee5ddc53519', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e383b51-5295-4e4b-b3c5-448c2d4edbf6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cc36483-813d-493a-bc2a-2e5a365011f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b811084c-ad80-4e64-904a-d56bd59c9766) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:33:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b811084c-ad80-4e64-904a-d56bd59c9766 in datapath c662211d-11bd-4aa5-95d2-794ccdac29d7 unbound from our chassis
Feb 28 10:33:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c662211d-11bd-4aa5-95d2-794ccdac29d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:33:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.966 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eabd2d89-0150-4413-b6e0-6a47c952761d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:33:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:33:59.967 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 namespace which is not needed anymore
Feb 28 10:34:00 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Feb 28 10:34:00 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 16.371s CPU time.
Feb 28 10:34:00 compute-0 systemd-machined[209480]: Machine qemu-158-instance-0000007d terminated.
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.082 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : haproxy version is 2.8.14-c23fe91
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [NOTICE]   (353818) : path to executable is /usr/sbin/haproxy
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : Exiting Master process...
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : Exiting Master process...
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [ALERT]    (353818) : Current worker (353820) exited with code 143 (Terminated)
Feb 28 10:34:00 compute-0 neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7[353814]: [WARNING]  (353818) : All workers exited. Exiting... (0)
Feb 28 10:34:00 compute-0 systemd[1]: libpod-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope: Deactivated successfully.
Feb 28 10:34:00 compute-0 NetworkManager[49805]: <info>  [1772274840.1064] manager: (tapb811084c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/528)
Feb 28 10:34:00 compute-0 podman[355798]: 2026-02-28 10:34:00.10730861 +0000 UTC m=+0.048686800 container died b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.122 243456 INFO nova.virt.libvirt.driver [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Instance destroyed successfully.
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.124 243456 DEBUG nova.objects.instance [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid b07b5e80-4820-4ee8-9750-3ee5ddc53519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541-userdata-shm.mount: Deactivated successfully.
Feb 28 10:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ccde48f8a36bab7126971be92fbdb0c7788e67982ad3a6ca48727f62f636de5-merged.mount: Deactivated successfully.
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.146 243456 DEBUG nova.virt.libvirt.vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-870516469',display_name='tempest-TestGettingAddress-server-870516469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-870516469',id=125,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFieZm47z3EaXX8zbx7LaLT+w65lGbaba1NOcr04FPZCW63/mJeAcaJ3qZq57Nb7LiBZaem/WgewaoumutbM8PMth7FUNu0jTunr9bd13indOeb5XDIeGzJsnEfyg+wcGA==',key_name='tempest-TestGettingAddress-401608194',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:32:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-lcsv3c02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:32:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b07b5e80-4820-4ee8-9750-3ee5ddc53519,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.147 243456 DEBUG nova.network.os_vif_util [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.148 243456 DEBUG nova.network.os_vif_util [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:00 compute-0 podman[355798]: 2026-02-28 10:34:00.14895842 +0000 UTC m=+0.090336610 container cleanup b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.149 243456 DEBUG os_vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.152 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.153 243456 DEBUG oslo_concurrency.lockutils [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.154 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.154 243456 DEBUG nova.compute.manager [req-c49da988-23b4-41db-b3ce-d45da35a6e4d req-e4300efa-389d-478e-9d30-1d759b0bbd7a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-unplugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.155 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb811084c-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.162 243456 INFO os_vif [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:c4:e4,bridge_name='br-int',has_traffic_filtering=True,id=b811084c-ad80-4e64-904a-d56bd59c9766,network=Network(c662211d-11bd-4aa5-95d2-794ccdac29d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb811084c-ad')
Feb 28 10:34:00 compute-0 systemd[1]: libpod-conmon-b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541.scope: Deactivated successfully.
Feb 28 10:34:00 compute-0 podman[355837]: 2026-02-28 10:34:00.22769121 +0000 UTC m=+0.052840998 container remove b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.232 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dca10a51-ba23-4fbb-aadc-37c7c7ace9b5]: (4, ('Sat Feb 28 10:34:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 (b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541)\nb79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541\nSat Feb 28 10:34:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 (b79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541)\nb79a3ef798d1a52ccf11a1459f7a259c6823ff13f0a1eb0e5ff8c2257a5f1541\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.234 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85047c11-33fc-45fe-83a6-903460b2fb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.236 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc662211d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 kernel: tapc662211d-10: left promiscuous mode
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.247 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.251 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e19c06-89dd-4edb-bf91-baca1e753549]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.263 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[943cd5da-b988-4faa-a432-028a5a01c76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.265 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a1f6f6-3a35-47ec-ac7d-c1b87612fc62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f52a52ff-4376-4f7f-bd96-ed81d3dfb62e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628063, 'reachable_time': 16766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355870, 'error': None, 'target': 'ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dc662211d\x2d11bd\x2d4aa5\x2d95d2\x2d794ccdac29d7.mount: Deactivated successfully.
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.282 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c662211d-11bd-4aa5-95d2-794ccdac29d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:34:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:00.283 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3a262ffe-7485-4084-94b6-e566a6c95337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.459 243456 INFO nova.virt.libvirt.driver [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deleting instance files /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519_del
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.461 243456 INFO nova.virt.libvirt.driver [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deletion of /var/lib/nova/instances/b07b5e80-4820-4ee8-9750-3ee5ddc53519_del complete
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.506 243456 INFO nova.compute.manager [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 0.62 seconds to destroy the instance on the hypervisor.
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.506 243456 DEBUG oslo.service.loopingcall [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.507 243456 DEBUG nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:34:00 compute-0 nova_compute[243452]: 2026-02-28 10:34:00.507 243456 DEBUG nova.network.neutron [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:34:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 267 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 37 KiB/s wr, 108 op/s
Feb 28 10:34:01 compute-0 ovn_controller[146846]: 2026-02-28T10:34:01Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 10:34:01 compute-0 ovn_controller[146846]: 2026-02-28T10:34:01Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:c3:a3 10.100.0.3
Feb 28 10:34:01 compute-0 ceph-mon[76304]: pgmap v2090: 305 pgs: 305 active+clean; 267 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 37 KiB/s wr, 108 op/s
Feb 28 10:34:01 compute-0 sshd-session[355872]: Invalid user sol from 45.148.10.240 port 33230
Feb 28 10:34:02 compute-0 sshd-session[355872]: Connection closed by invalid user sol 45.148.10.240 port 33230 [preauth]
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.410 243456 DEBUG nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.411 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.412 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.412 243456 DEBUG oslo_concurrency.lockutils [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.413 243456 DEBUG nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] No waiting events found dispatching network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.413 243456 WARNING nova.compute.manager [req-fb7a1aa4-937c-4a9c-b1b2-19c543f5217e req-f466be58-1fe4-46b7-87c7-a5c66e9cd9e7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received unexpected event network-vif-plugged-b811084c-ad80-4e64-904a-d56bd59c9766 for instance with vm_state active and task_state deleting.
Feb 28 10:34:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 242 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 367 KiB/s wr, 102 op/s
Feb 28 10:34:02 compute-0 nova_compute[243452]: 2026-02-28 10:34:02.995 243456 DEBUG nova.network.neutron [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.021 243456 DEBUG nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Received event network-vif-deleted-b811084c-ad80-4e64-904a-d56bd59c9766 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.022 243456 INFO nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Neutron deleted interface b811084c-ad80-4e64-904a-d56bd59c9766; detaching it from the instance and deleting it from the info cache
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.022 243456 DEBUG nova.network.neutron [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.041 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updated VIF entry in instance network info cache for port b811084c-ad80-4e64-904a-d56bd59c9766. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.042 243456 DEBUG nova.network.neutron [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Updating instance_info_cache with network_info: [{"id": "b811084c-ad80-4e64-904a-d56bd59c9766", "address": "fa:16:3e:c3:c4:e4", "network": {"id": "c662211d-11bd-4aa5-95d2-794ccdac29d7", "bridge": "br-int", "label": "tempest-network-smoke--868202325", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec3:c4e4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb811084c-ad", "ovs_interfaceid": "b811084c-ad80-4e64-904a-d56bd59c9766", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.052 243456 INFO nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Took 2.55 seconds to deallocate network for instance.
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.059 243456 DEBUG nova.compute.manager [req-6177312f-ad6b-489b-8ae8-5d30eb0638b4 req-195be376-0b9e-4943-8400-eaf4dbd96598 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Detach interface failed, port_id=b811084c-ad80-4e64-904a-d56bd59c9766, reason: Instance b07b5e80-4820-4ee8-9750-3ee5ddc53519 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.077 243456 DEBUG oslo_concurrency.lockutils [req-da05aea6-f215-4742-9a30-3b290e855254 req-961e6cbe-448a-4a3e-ba77-f022f70ef755 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b07b5e80-4820-4ee8-9750-3ee5ddc53519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.112 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.112 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.192 243456 DEBUG oslo_concurrency.processutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831465108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.795 243456 DEBUG oslo_concurrency.processutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.803 243456 DEBUG nova.compute.provider_tree [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.824 243456 DEBUG nova.scheduler.client.report [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.851 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:03 compute-0 ceph-mon[76304]: pgmap v2091: 305 pgs: 305 active+clean; 242 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 367 KiB/s wr, 102 op/s
Feb 28 10:34:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2831465108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.887 243456 INFO nova.scheduler.client.report [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance b07b5e80-4820-4ee8-9750-3ee5ddc53519
Feb 28 10:34:03 compute-0 nova_compute[243452]: 2026-02-28 10:34:03.986 243456 DEBUG oslo_concurrency.lockutils [None req-0be10393-16f0-4d51-aae3-d0c61db2614f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b07b5e80-4820-4ee8-9750-3ee5ddc53519" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:04 compute-0 nova_compute[243452]: 2026-02-28 10:34:04.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 216 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 128 op/s
Feb 28 10:34:05 compute-0 nova_compute[243452]: 2026-02-28 10:34:05.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:05 compute-0 ceph-mon[76304]: pgmap v2092: 305 pgs: 305 active+clean; 216 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 128 op/s
Feb 28 10:34:06 compute-0 podman[355897]: 2026-02-28 10:34:06.159255004 +0000 UTC m=+0.084247747 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 28 10:34:06 compute-0 podman[355896]: 2026-02-28 10:34:06.204163347 +0000 UTC m=+0.129605223 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, 
io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 10:34:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 28 10:34:07 compute-0 nova_compute[243452]: 2026-02-28 10:34:07.795 243456 INFO nova.compute.manager [None req-7fb326aa-9c02-45cd-ba7c-3655b56a971b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Get console output
Feb 28 10:34:07 compute-0 nova_compute[243452]: 2026-02-28 10:34:07.807 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:34:07 compute-0 ceph-mon[76304]: pgmap v2093: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 28 10:34:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Feb 28 10:34:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:09 compute-0 nova_compute[243452]: 2026-02-28 10:34:09.339 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:09 compute-0 ceph-mon[76304]: pgmap v2094: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Feb 28 10:34:10 compute-0 nova_compute[243452]: 2026-02-28 10:34:10.056 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274835.0559974, f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:10 compute-0 nova_compute[243452]: 2026-02-28 10:34:10.057 243456 INFO nova.compute.manager [-] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] VM Stopped (Lifecycle Event)
Feb 28 10:34:10 compute-0 nova_compute[243452]: 2026-02-28 10:34:10.082 243456 DEBUG nova.compute.manager [None req-d5cefe88-35a1-4b10-806d-3e505ceb9a46 - - - - - -] [instance: f8fc32cd-d7f2-4db4-9ccf-1c7bb5bdb989] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:10 compute-0 nova_compute[243452]: 2026-02-28 10:34:10.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:10 compute-0 ovn_controller[146846]: 2026-02-28T10:34:10Z|01280|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 10:34:10 compute-0 nova_compute[243452]: 2026-02-28 10:34:10.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 10:34:11 compute-0 nova_compute[243452]: 2026-02-28 10:34:11.304 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:11 compute-0 nova_compute[243452]: 2026-02-28 10:34:11.305 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:11 compute-0 nova_compute[243452]: 2026-02-28 10:34:11.306 243456 DEBUG nova.objects.instance [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:11 compute-0 ceph-mon[76304]: pgmap v2095: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Feb 28 10:34:11 compute-0 nova_compute[243452]: 2026-02-28 10:34:11.931 243456 DEBUG nova.objects.instance [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:11 compute-0 nova_compute[243452]: 2026-02-28 10:34:11.945 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:34:12 compute-0 nova_compute[243452]: 2026-02-28 10:34:12.110 243456 DEBUG nova.policy [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:34:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 28 10:34:12 compute-0 nova_compute[243452]: 2026-02-28 10:34:12.988 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully created port: edcb3b03-b894-4abf-96a5-f832e8ee3371 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:34:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:13 compute-0 ceph-mon[76304]: pgmap v2096: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.247 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Successfully updated port: edcb3b03-b894-4abf-96a5-f832e8ee3371 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.265 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.266 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.266 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.349 243456 DEBUG nova.compute.manager [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.350 243456 DEBUG nova.compute.manager [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-edcb3b03-b894-4abf-96a5-f832e8ee3371. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:14 compute-0 nova_compute[243452]: 2026-02-28 10:34:14.350 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:14 compute-0 sshd-session[355941]: Received disconnect from 103.67.78.132 port 50944:11: Bye Bye [preauth]
Feb 28 10:34:14 compute-0 sshd-session[355941]: Disconnected from authenticating user root 103.67.78.132 port 50944 [preauth]
Feb 28 10:34:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 10:34:15 compute-0 nova_compute[243452]: 2026-02-28 10:34:15.120 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274840.1189055, b07b5e80-4820-4ee8-9750-3ee5ddc53519 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:15 compute-0 nova_compute[243452]: 2026-02-28 10:34:15.121 243456 INFO nova.compute.manager [-] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] VM Stopped (Lifecycle Event)
Feb 28 10:34:15 compute-0 nova_compute[243452]: 2026-02-28 10:34:15.143 243456 DEBUG nova.compute.manager [None req-ffdbded0-5656-47f9-9d93-cde79cbc653c - - - - - -] [instance: b07b5e80-4820-4ee8-9750-3ee5ddc53519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:15 compute-0 nova_compute[243452]: 2026-02-28 10:34:15.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:15 compute-0 ceph-mon[76304]: pgmap v2097: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.396 243456 DEBUG nova.network.neutron [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.417 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.418 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.419 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port edcb3b03-b894-4abf-96a5-f832e8ee3371 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.423 243456 DEBUG nova.virt.libvirt.vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.423 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.424 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.424 243456 DEBUG os_vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.429 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedcb3b03-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.430 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedcb3b03-b8, col_values=(('external_ids', {'iface-id': 'edcb3b03-b894-4abf-96a5-f832e8ee3371', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:6b:1e', 'vm-uuid': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.4328] manager: (tapedcb3b03-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.439 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.439 243456 INFO os_vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.440 243456 DEBUG nova.virt.libvirt.vif [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.441 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.441 243456 DEBUG nova.network.os_vif_util [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.444 243456 DEBUG nova.virt.libvirt.guest [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <target dev="tapedcb3b03-b8"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]: </interface>
Feb 28 10:34:16 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:34:16 compute-0 kernel: tapedcb3b03-b8: entered promiscuous mode
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.4644] manager: (tapedcb3b03-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Feb 28 10:34:16 compute-0 ovn_controller[146846]: 2026-02-28T10:34:16Z|01281|binding|INFO|Claiming lport edcb3b03-b894-4abf-96a5-f832e8ee3371 for this chassis.
Feb 28 10:34:16 compute-0 ovn_controller[146846]: 2026-02-28T10:34:16Z|01282|binding|INFO|edcb3b03-b894-4abf-96a5-f832e8ee3371: Claiming fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.474 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6b:1e 10.100.0.24'], port_security=['fa:16:3e:64:6b:1e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d699467-190a-4754-be38-8dcbc56ed7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d3e50d-7a54-4c37-996e-1d4928f66955, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=edcb3b03-b894-4abf-96a5-f832e8ee3371) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.476 156681 INFO neutron.agent.ovn.metadata.agent [-] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 in datapath 0d699467-190a-4754-be38-8dcbc56ed7da bound to our chassis
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d699467-190a-4754-be38-8dcbc56ed7da
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.484 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 ovn_controller[146846]: 2026-02-28T10:34:16Z|01283|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 ovn-installed in OVS
Feb 28 10:34:16 compute-0 ovn_controller[146846]: 2026-02-28T10:34:16Z|01284|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 up in Southbound
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.494 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0d715f68-344d-4278-8dd2-9dc69c26624f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.495 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d699467-11 in ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.498 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d699467-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.498 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37cbeffd-929e-4e32-8a56-b30f894ced83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5830b0-49fe-4164-985d-ae31c9a2945e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 systemd-udevd[355950]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.514 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ad27918f-b05f-4120-a4dd-d2cc4a9dab33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.5215] device (tapedcb3b03-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.5229] device (tapedcb3b03-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.533 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51776354-9087-4855-b986-9f90fec795aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e1:c3:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.554 243456 DEBUG nova.virt.libvirt.driver [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:64:6b:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[06d418b8-7f2a-43dc-95c1-ba3108395e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.574 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14d4fc2d-36e5-4444-be84-dfcc0a82d1d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.5759] manager: (tap0d699467-10): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.581 243456 DEBUG nova.virt.libvirt.guest [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:16 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 10:34:16 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 10:34:16 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:16 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:16 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:16 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.608 243456 DEBUG oslo_concurrency.lockutils [None req-e48efd02-de0a-4a40-ab81-dd07e66b6778 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.609 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8173ed4d-d566-4b88-9682-91b089579e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.613 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c437fa-2ce5-4aeb-905b-66713caf334e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.6355] device (tap0d699467-10): carrier: link connected
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.640 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f89fc5-5507-443a-bfda-75f71a1fe83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.659 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[46d13ebd-71d6-4968-bf06-a985db1d2c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d699467-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:d8:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636392, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355976, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:34:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.74 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4164 writes, 16K keys, 4164 commit groups, 1.0 writes per commit group, ingest: 17.42 MB, 0.03 MB/s
                                           Interval WAL: 4164 writes, 1650 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.678 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[11d26ec8-2085-42b6-a294-4ce723fe618e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:d8a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636392, 'tstamp': 636392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355977, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.700 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef6291d-82f7-4555-85d4-5ddea69b4799]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d699467-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:d8:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636392, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355978, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.737 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[247642cd-0379-457c-9af9-338199949489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 991 KiB/s wr, 22 op/s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.812 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e338fb6-d24f-4eea-aa81-b11fcb904083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.814 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d699467-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.814 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d699467-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 kernel: tap0d699467-10: entered promiscuous mode
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 NetworkManager[49805]: <info>  [1772274856.8203] manager: (tap0d699467-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.822 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d699467-10, col_values=(('external_ids', {'iface-id': '70e83520-4cec-4f09-8d59-db572fd53673'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 ovn_controller[146846]: 2026-02-28T10:34:16Z|01285|binding|INFO|Releasing lport 70e83520-4cec-4f09-8d59-db572fd53673 from this chassis (sb_readonly=0)
Feb 28 10:34:16 compute-0 nova_compute[243452]: 2026-02-28 10:34:16.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.838 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f98a1df2-d123-4803-b383-ddabeb4892fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.839 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-0d699467-190a-4754-be38-8dcbc56ed7da
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/0d699467-190a-4754-be38-8dcbc56ed7da.pid.haproxy
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 0d699467-190a-4754-be38-8dcbc56ed7da
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:34:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:16.840 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'env', 'PROCESS_TAG=haproxy-0d699467-190a-4754-be38-8dcbc56ed7da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d699467-190a-4754-be38-8dcbc56ed7da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.119 243456 DEBUG nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.119 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG oslo_concurrency.lockutils [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 DEBUG nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:17 compute-0 nova_compute[243452]: 2026-02-28 10:34:17.120 243456 WARNING nova.compute.manager [req-01f80b31-7fc1-40a8-b441-7817a86e28b9 req-27c24061-24ab-4339-9a0c-8791cde6c344 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.
Feb 28 10:34:17 compute-0 podman[356010]: 2026-02-28 10:34:17.178203186 +0000 UTC m=+0.049174054 container create b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:34:17 compute-0 systemd[1]: Started libpod-conmon-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope.
Feb 28 10:34:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:17 compute-0 podman[356010]: 2026-02-28 10:34:17.1508065 +0000 UTC m=+0.021777368 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:34:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4b2e07cdeaa3d845002b7d5c69c4a6ea946ebe8ae1371af79f5f05018ddb64c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:17 compute-0 podman[356010]: 2026-02-28 10:34:17.261680141 +0000 UTC m=+0.132651029 container init b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:34:17 compute-0 podman[356010]: 2026-02-28 10:34:17.267137875 +0000 UTC m=+0.138108743 container start b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:34:17 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : New worker (356031) forked
Feb 28 10:34:17 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : Loading success.
Feb 28 10:34:17 compute-0 ceph-mon[76304]: pgmap v2098: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 991 KiB/s wr, 22 op/s
Feb 28 10:34:18 compute-0 ovn_controller[146846]: 2026-02-28T10:34:18Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 10:34:18 compute-0 ovn_controller[146846]: 2026-02-28T10:34:18Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:6b:1e 10.100.0.24
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.355 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.356 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.372 243456 DEBUG nova.objects.instance [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.394 243456 DEBUG nova.virt.libvirt.vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.394 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.395 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.400 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.404 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.408 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Attempting to detach device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.409 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <target dev="tapedcb3b03-b8"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </interface>
Feb 28 10:34:18 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.419 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.423 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <name>instance-0000007f</name>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='tap37a6ff99-c7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:64:6b:1e'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='tapedcb3b03-b8'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </target>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </console>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:18 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.425 243456 INFO nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the persistent domain config.
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.425 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] (1/8): Attempting to detach device tapedcb3b03-b8 with device alias net1 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.426 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:64:6b:1e"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <target dev="tapedcb3b03-b8"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </interface>
Feb 28 10:34:18 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.463 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port edcb3b03-b894-4abf-96a5-f832e8ee3371. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.464 243456 DEBUG nova.network.neutron [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.478 243456 DEBUG oslo_concurrency.lockutils [req-03191dcb-ed8c-4371-8e27-4aece7893743 req-37e522e2-3642-4328-beb7-f2901dfb628b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:18 compute-0 kernel: tapedcb3b03-b8 (unregistering): left promiscuous mode
Feb 28 10:34:18 compute-0 NetworkManager[49805]: <info>  [1772274858.5316] device (tapedcb3b03-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:34:18 compute-0 ovn_controller[146846]: 2026-02-28T10:34:18Z|01286|binding|INFO|Releasing lport edcb3b03-b894-4abf-96a5-f832e8ee3371 from this chassis (sb_readonly=0)
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 ovn_controller[146846]: 2026-02-28T10:34:18Z|01287|binding|INFO|Setting lport edcb3b03-b894-4abf-96a5-f832e8ee3371 down in Southbound
Feb 28 10:34:18 compute-0 ovn_controller[146846]: 2026-02-28T10:34:18Z|01288|binding|INFO|Removing iface tapedcb3b03-b8 ovn-installed in OVS
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.541 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772274858.5413241, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.542 243456 DEBUG nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Start waiting for the detach event from libvirt for device tapedcb3b03-b8 with device alias net1 for instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.543 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.546 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.548 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <name>instance-0000007f</name>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:16</nova:creationTime>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:port uuid="edcb3b03-b894-4abf-96a5-f832e8ee3371">
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target dev='tap37a6ff99-c7'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       </target>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </console>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:18 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.548 243456 INFO nova.virt.libvirt.driver [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tapedcb3b03-b8 from instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db from the live domain config.
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.549 243456 DEBUG nova.virt.libvirt.vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.550 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.550 243456 DEBUG nova.network.os_vif_util [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.551 243456 DEBUG os_vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.550 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:6b:1e 10.100.0.24'], port_security=['fa:16:3e:64:6b:1e 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d699467-190a-4754-be38-8dcbc56ed7da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8d3e50d-7a54-4c37-996e-1d4928f66955, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=edcb3b03-b894-4abf-96a5-f832e8ee3371) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.553 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.553 156681 INFO neutron.agent.ovn.metadata.agent [-] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 in datapath 0d699467-190a-4754-be38-8dcbc56ed7da unbound from our chassis
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.553 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedcb3b03-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.555 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.555 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d699467-190a-4754-be38-8dcbc56ed7da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf42ee51-d395-4338-b5cc-7661ebe6b8a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.558 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da namespace which is not needed anymore
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.559 243456 INFO os_vif [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.559 243456 DEBUG nova.virt.libvirt.guest [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:18 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:18 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:18 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:18 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:18 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:34:18 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : haproxy version is 2.8.14-c23fe91
Feb 28 10:34:18 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [NOTICE]   (356029) : path to executable is /usr/sbin/haproxy
Feb 28 10:34:18 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [WARNING]  (356029) : Exiting Master process...
Feb 28 10:34:18 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [ALERT]    (356029) : Current worker (356031) exited with code 143 (Terminated)
Feb 28 10:34:18 compute-0 neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da[356025]: [WARNING]  (356029) : All workers exited. Exiting... (0)
Feb 28 10:34:18 compute-0 systemd[1]: libpod-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope: Deactivated successfully.
Feb 28 10:34:18 compute-0 podman[356061]: 2026-02-28 10:34:18.687487449 +0000 UTC m=+0.055043330 container died b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66-userdata-shm.mount: Deactivated successfully.
Feb 28 10:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4b2e07cdeaa3d845002b7d5c69c4a6ea946ebe8ae1371af79f5f05018ddb64c-merged.mount: Deactivated successfully.
Feb 28 10:34:18 compute-0 podman[356061]: 2026-02-28 10:34:18.735607332 +0000 UTC m=+0.103163203 container cleanup b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:34:18 compute-0 systemd[1]: libpod-conmon-b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66.scope: Deactivated successfully.
Feb 28 10:34:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 10:34:18 compute-0 podman[356090]: 2026-02-28 10:34:18.802551829 +0000 UTC m=+0.044220054 container remove b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.807 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8100af35-d367-41c1-9960-7977ef53a55a]: (4, ('Sat Feb 28 10:34:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da (b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66)\nb278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66\nSat Feb 28 10:34:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da (b278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66)\nb278c79df0639960795a3becc8bf3eec1128d4b3dbaba5b72b1d45092ccbcb66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9255c953-0639-4ac1-9a02-ce0543aca0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d699467-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 kernel: tap0d699467-10: left promiscuous mode
Feb 28 10:34:18 compute-0 nova_compute[243452]: 2026-02-28 10:34:18.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.823 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[376edc94-9687-4ddb-8e42-6afc2b291ac2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[95cb9833-c684-4af5-ab53-c1f404d53e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af24fd3c-0330-4615-944b-10a0e2394c31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.857 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2e49c154-1746-42d6-a0ae-e1adb3be1bf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636384, 'reachable_time': 21105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356108, 'error': None, 'target': 'ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d0d699467\x2d190a\x2d4754\x2dbe38\x2d8dcbc56ed7da.mount: Deactivated successfully.
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.861 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d699467-190a-4754-be38-8dcbc56ed7da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:34:18 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:18.861 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a12669-8d4d-4044-88e3-3e7f39000f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.212 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.213 243456 WARNING nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG oslo_concurrency.lockutils [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.214 243456 DEBUG nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.215 243456 WARNING nova.compute.manager [req-183ed6aa-638e-4849-bc5a-b525602969f5 req-cac71693-bb00-4a08-a0da-b81cbf670166 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-unplugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.343 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.702 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.702 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:19 compute-0 nova_compute[243452]: 2026-02-28 10:34:19.703 243456 DEBUG nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:34:19 compute-0 ceph-mon[76304]: pgmap v2099: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Feb 28 10:34:20 compute-0 nova_compute[243452]: 2026-02-28 10:34:20.482 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.359 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.359 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG oslo_concurrency.lockutils [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.360 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 WARNING nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-edcb3b03-b894-4abf-96a5-f832e8ee3371 for instance with vm_state active and task_state None.
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 DEBUG nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-deleted-edcb3b03-b894-4abf-96a5-f832e8ee3371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 INFO nova.compute.manager [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Neutron deleted interface edcb3b03-b894-4abf-96a5-f832e8ee3371; detaching it from the instance and deleting it from the info cache
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.361 243456 DEBUG nova.network.neutron [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.396 243456 DEBUG nova.objects.instance [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.420 243456 DEBUG nova.objects.instance [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.444 243456 DEBUG nova.virt.libvirt.vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.445 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.446 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.450 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.456 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <name>instance-0000007f</name>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='tap37a6ff99-c7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </target>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </console>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:21 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.458 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.465 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:64:6b:1e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapedcb3b03-b8"/></interface>not found in domain: <domain type='kvm' id='160'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <name>instance-0000007f</name>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <uuid>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</uuid>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:18</nova:creationTime>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='serial'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='uuid'>4cf0bd1e-baff-4e42-ab90-cb45145ea4db</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk' index='2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_disk.config' index='1'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e1:c3:a3'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target dev='tap37a6ff99-c7'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       </target>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/1'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <source path='/dev/pts/1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db/console.log' append='off'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </console>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </input>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c811,c1010</label>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c811,c1010</imagelabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:34:21 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:21 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.466 243456 WARNING nova.virt.libvirt.driver [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Detaching interface fa:16:3e:64:6b:1e failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapedcb3b03-b8' not found.
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.468 243456 DEBUG nova.virt.libvirt.vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.468 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "address": "fa:16:3e:64:6b:1e", "network": {"id": "0d699467-190a-4754-be38-8dcbc56ed7da", "bridge": "br-int", "label": "tempest-network-smoke--1936914647", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedcb3b03-b8", "ovs_interfaceid": "edcb3b03-b894-4abf-96a5-f832e8ee3371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.469 243456 DEBUG nova.network.os_vif_util [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.470 243456 DEBUG os_vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.473 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedcb3b03-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.474 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.476 243456 INFO os_vif [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:6b:1e,bridge_name='br-int',has_traffic_filtering=True,id=edcb3b03-b894-4abf-96a5-f832e8ee3371,network=Network(0d699467-190a-4754-be38-8dcbc56ed7da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedcb3b03-b8')
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.477 243456 DEBUG nova.virt.libvirt.guest [req-6c594280-2f68-46bd-a413-fa9d44b9676a req-6b6deca9-048a-4d4e-a5e9-d1d59dcb15c2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-770937853</nova:name>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:34:21</nova:creationTime>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     <nova:port uuid="37a6ff99-c79f-4d1f-8384-b2117545bacf">
Feb 28 10:34:21 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:21 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:34:21 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:34:21 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:34:21 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:34:21 compute-0 ovn_controller[146846]: 2026-02-28T10:34:21Z|01289|binding|INFO|Releasing lport dbe8062b-c5f4-44f4-b690-d738c9fe51f1 from this chassis (sb_readonly=0)
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.541 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.983 243456 INFO nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Port edcb3b03-b894-4abf-96a5-f832e8ee3371 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:34:21 compute-0 nova_compute[243452]: 2026-02-28 10:34:21.983 243456 DEBUG nova.network.neutron [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:21 compute-0 ceph-mon[76304]: pgmap v2100: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 3.7 KiB/s wr, 1 op/s
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.001 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.026 243456 DEBUG oslo_concurrency.lockutils [None req-153142b4-74f9-41fb-b54a-850658e9f1af ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-4cf0bd1e-baff-4e42-ab90-cb45145ea4db-edcb3b03-b894-4abf-96a5-f832e8ee3371" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.374 243456 DEBUG nova.compute.manager [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.374 243456 DEBUG nova.compute.manager [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing instance network info cache due to event network-changed-37a6ff99-c79f-4d1f-8384-b2117545bacf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.375 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.375 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.376 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Refreshing network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.432 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.433 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.434 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.435 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.436 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.438 243456 INFO nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Terminating instance
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.441 243456 DEBUG nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:34:22 compute-0 kernel: tap37a6ff99-c7 (unregistering): left promiscuous mode
Feb 28 10:34:22 compute-0 NetworkManager[49805]: <info>  [1772274862.5061] device (tap37a6ff99-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.505 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 ovn_controller[146846]: 2026-02-28T10:34:22Z|01290|binding|INFO|Releasing lport 37a6ff99-c79f-4d1f-8384-b2117545bacf from this chassis (sb_readonly=0)
Feb 28 10:34:22 compute-0 ovn_controller[146846]: 2026-02-28T10:34:22Z|01291|binding|INFO|Setting lport 37a6ff99-c79f-4d1f-8384-b2117545bacf down in Southbound
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 ovn_controller[146846]: 2026-02-28T10:34:22Z|01292|binding|INFO|Removing iface tap37a6ff99-c7 ovn-installed in OVS
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.528 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:c3:a3 10.100.0.3'], port_security=['fa:16:3e:e1:c3:a3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4cf0bd1e-baff-4e42-ab90-cb45145ea4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0844b2ff-c3dd-41f7-ab33-952597a3bda8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9e841af-c685-47c1-acc4-502d4238e857, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=37a6ff99-c79f-4d1f-8384-b2117545bacf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.531 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 37a6ff99-c79f-4d1f-8384-b2117545bacf in datapath 183ae61b-3b9b-4e1b-a73e-6b7a38731453 unbound from our chassis
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.533 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 183ae61b-3b9b-4e1b-a73e-6b7a38731453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[928abaeb-ef00-4739-85d9-961262a808fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.535 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 namespace which is not needed anymore
Feb 28 10:34:22 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Feb 28 10:34:22 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 14.932s CPU time.
Feb 28 10:34:22 compute-0 systemd-machined[209480]: Machine qemu-160-instance-0000007f terminated.
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.690 243456 INFO nova.virt.libvirt.driver [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Instance destroyed successfully.
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.691 243456 DEBUG nova.objects.instance [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 4cf0bd1e-baff-4e42-ab90-cb45145ea4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.715 243456 DEBUG nova.virt.libvirt.vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-770937853',display_name='tempest-TestNetworkBasicOps-server-770937853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-770937853',id=127,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFAY2dHmsUOb7P6bLjV8d94Nwp5/6Rxqb3nkXfFeiEJx36+H0xArf++Ad9VgIE8+4m1lCjIO06O/qKmlMqDQresg45xrebHyRTaZLsdPDBvsp3hmYqaTlPvbO1fPCnhR8w==',key_name='tempest-TestNetworkBasicOps-431108198',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:33:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-bri91k0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:33:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=4cf0bd1e-baff-4e42-ab90-cb45145ea4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.717 243456 DEBUG nova.network.os_vif_util [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : haproxy version is 2.8.14-c23fe91
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [NOTICE]   (355696) : path to executable is /usr/sbin/haproxy
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : Exiting Master process...
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : Exiting Master process...
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.719 243456 DEBUG nova.network.os_vif_util [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.720 243456 DEBUG os_vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [ALERT]    (355696) : Current worker (355698) exited with code 143 (Terminated)
Feb 28 10:34:22 compute-0 neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453[355692]: [WARNING]  (355696) : All workers exited. Exiting... (0)
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.723 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37a6ff99-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:22 compute-0 systemd[1]: libpod-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope: Deactivated successfully.
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 podman[356133]: 2026-02-28 10:34:22.730499216 +0000 UTC m=+0.073156294 container died 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.732 243456 INFO os_vif [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:c3:a3,bridge_name='br-int',has_traffic_filtering=True,id=37a6ff99-c79f-4d1f-8384-b2117545bacf,network=Network(183ae61b-3b9b-4e1b-a73e-6b7a38731453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37a6ff99-c7')
Feb 28 10:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d-userdata-shm.mount: Deactivated successfully.
Feb 28 10:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-519415a6e8605ec671aba53f1da4d6a90fb9d621f73f1043083370c3d20e14cc-merged.mount: Deactivated successfully.
Feb 28 10:34:22 compute-0 podman[356133]: 2026-02-28 10:34:22.773338589 +0000 UTC m=+0.115995597 container cleanup 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:34:22 compute-0 systemd[1]: libpod-conmon-8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d.scope: Deactivated successfully.
Feb 28 10:34:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Feb 28 10:34:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:34:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.6 total, 600.0 interval
                                           Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.72 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4949 writes, 18K keys, 4949 commit groups, 1.0 writes per commit group, ingest: 19.96 MB, 0.03 MB/s
                                           Interval WAL: 4949 writes, 2024 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:34:22 compute-0 podman[356183]: 2026-02-28 10:34:22.843421805 +0000 UTC m=+0.042903467 container remove 8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c94fe7bf-ac81-45b1-b3d5-ba78ae31bbd0]: (4, ('Sat Feb 28 10:34:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 (8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d)\n8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d\nSat Feb 28 10:34:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 (8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d)\n8c294bdcd66464187b2ec71c1b9da61ffed96569aed929f0ff2874a18f8ade2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d0ae1d-d968-4c64-abe5-7cd2a0513e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.853 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap183ae61b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 kernel: tap183ae61b-30: left promiscuous mode
Feb 28 10:34:22 compute-0 nova_compute[243452]: 2026-02-28 10:34:22.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.865 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20ad6bb5-3f2f-40c4-8dda-7a908e2b6a6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74cde80e-59ca-4bc8-aadb-dbd14df5bcfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.883 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfaa574-0d36-4a18-b678-3ff229a001bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.900 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[879189f0-f6be-457a-b9d8-5a331cc703f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633584, 'reachable_time': 27621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356202, 'error': None, 'target': 'ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d183ae61b\x2d3b9b\x2d4e1b\x2da73e\x2d6b7a38731453.mount: Deactivated successfully.
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.904 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-183ae61b-3b9b-4e1b-a73e-6b7a38731453 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:34:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:22.904 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f3729f3f-8484-4543-b21d-87cd1d897c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.030 243456 INFO nova.virt.libvirt.driver [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deleting instance files /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_del
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.033 243456 INFO nova.virt.libvirt.driver [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deletion of /var/lib/nova/instances/4cf0bd1e-baff-4e42-ab90-cb45145ea4db_del complete
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.102 243456 INFO nova.compute.manager [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG oslo.service.loopingcall [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.103 243456 DEBUG nova.network.neutron [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.461 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.461 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.462 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG oslo_concurrency.lockutils [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:23 compute-0 nova_compute[243452]: 2026-02-28 10:34:23.463 243456 DEBUG nova.compute.manager [req-309fcedc-b6dc-4348-b76b-cd1355494574 req-e1b04d03-2ba5-4225-9833-d1d45f3cdf31 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-unplugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:34:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:23 compute-0 ceph-mon[76304]: pgmap v2101: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Feb 28 10:34:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:24.287 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:24.289 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.345 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.374 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updated VIF entry in instance network info cache for port 37a6ff99-c79f-4d1f-8384-b2117545bacf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.375 243456 DEBUG nova.network.neutron [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [{"id": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "address": "fa:16:3e:e1:c3:a3", "network": {"id": "183ae61b-3b9b-4e1b-a73e-6b7a38731453", "bridge": "br-int", "label": "tempest-network-smoke--1349478149", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37a6ff99-c7", "ovs_interfaceid": "37a6ff99-c79f-4d1f-8384-b2117545bacf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.413 243456 DEBUG nova.network.neutron [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.431 243456 DEBUG oslo_concurrency.lockutils [req-1da8939a-f180-4035-8468-48e93ecc8e38 req-1bf9465a-f8af-4bba-81c3-3086edfcf87a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4cf0bd1e-baff-4e42-ab90-cb45145ea4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.454 243456 INFO nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Took 1.35 seconds to deallocate network for instance.
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.500 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.501 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.567 243456 DEBUG oslo_concurrency.processutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:24 compute-0 nova_compute[243452]: 2026-02-28 10:34:24.620 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 203 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 7.8 KiB/s wr, 11 op/s
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.007589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865007680, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1241, "num_deletes": 251, "total_data_size": 1868878, "memory_usage": 1896928, "flush_reason": "Manual Compaction"}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865023003, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1839316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43833, "largest_seqno": 45073, "table_properties": {"data_size": 1833430, "index_size": 3217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12599, "raw_average_key_size": 19, "raw_value_size": 1821626, "raw_average_value_size": 2873, "num_data_blocks": 144, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274744, "oldest_key_time": 1772274744, "file_creation_time": 1772274865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15601 microseconds, and 4834 cpu microseconds.
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.023202) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1839316 bytes OK
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.023248) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024903) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024920) EVENT_LOG_v1 {"time_micros": 1772274865024915, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.024944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1863252, prev total WAL file size 1863252, number of live WAL files 2.
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.025703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1796KB)], [101(8295KB)]
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865025767, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10333466, "oldest_snapshot_seqno": -1}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6640 keys, 8687052 bytes, temperature: kUnknown
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865081105, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8687052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8643385, "index_size": 25955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172449, "raw_average_key_size": 25, "raw_value_size": 8525256, "raw_average_value_size": 1283, "num_data_blocks": 1013, "num_entries": 6640, "num_filter_entries": 6640, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772274865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.081430) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8687052 bytes
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.082976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.4 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(10.3) write-amplify(4.7) OK, records in: 7154, records dropped: 514 output_compression: NoCompression
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.082999) EVENT_LOG_v1 {"time_micros": 1772274865082987, "job": 60, "event": "compaction_finished", "compaction_time_micros": 55425, "compaction_time_cpu_micros": 30484, "output_level": 6, "num_output_files": 1, "total_output_size": 8687052, "num_input_records": 7154, "num_output_records": 6640, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865083321, "job": 60, "event": "table_file_deletion", "file_number": 103}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772274865084622, "job": 60, "event": "table_file_deletion", "file_number": 101}
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.025595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:34:25.084787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:34:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392256275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.271 243456 DEBUG oslo_concurrency.processutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.279 243456 DEBUG nova.compute.provider_tree [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.300 243456 DEBUG nova.scheduler.client.report [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.333 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.365 243456 INFO nova.scheduler.client.report [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 4cf0bd1e-baff-4e42-ab90-cb45145ea4db
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.431 243456 DEBUG oslo_concurrency.lockutils [None req-1ce16eb8-b73d-4fac-b1f1-7cdb927c8b7b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.586 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.587 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.588 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.588 243456 DEBUG oslo_concurrency.lockutils [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4cf0bd1e-baff-4e42-ab90-cb45145ea4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.589 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] No waiting events found dispatching network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.589 243456 WARNING nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received unexpected event network-vif-plugged-37a6ff99-c79f-4d1f-8384-b2117545bacf for instance with vm_state deleted and task_state None.
Feb 28 10:34:25 compute-0 nova_compute[243452]: 2026-02-28 10:34:25.590 243456 DEBUG nova.compute.manager [req-3b5b45a8-82a6-4595-804b-76caee1247ff req-f4d5c920-ab1a-4be8-bcf8-96deec008fa1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Received event network-vif-deleted-37a6ff99-c79f-4d1f-8384-b2117545bacf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:26 compute-0 ceph-mon[76304]: pgmap v2102: 305 pgs: 305 active+clean; 203 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 7.8 KiB/s wr, 11 op/s
Feb 28 10:34:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/392256275' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 178 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 23 op/s
Feb 28 10:34:27 compute-0 nova_compute[243452]: 2026-02-28 10:34:27.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:28 compute-0 ceph-mon[76304]: pgmap v2103: 305 pgs: 305 active+clean; 178 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 23 op/s
Feb 28 10:34:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.2 KiB/s wr, 28 op/s
Feb 28 10:34:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:34:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 33K writes, 127K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 33K writes, 12K syncs, 2.73 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4166 writes, 15K keys, 4166 commit groups, 1.0 writes per commit group, ingest: 17.10 MB, 0.03 MB/s
                                           Interval WAL: 4166 writes, 1712 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:34:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:34:29
Feb 28 10:34:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:34:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:34:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'default.rgw.control']
Feb 28 10:34:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:34:29 compute-0 nova_compute[243452]: 2026-02-28 10:34:29.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:29.291 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:29 compute-0 nova_compute[243452]: 2026-02-28 10:34:29.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:29 compute-0 nova_compute[243452]: 2026-02-28 10:34:29.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:29 compute-0 nova_compute[243452]: 2026-02-28 10:34:29.884 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:30 compute-0 ceph-mon[76304]: pgmap v2104: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.2 KiB/s wr, 28 op/s
Feb 28 10:34:30 compute-0 sudo[356228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:34:30 compute-0 sudo[356228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:30 compute-0 sudo[356228]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:30 compute-0 sudo[356253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:34:30 compute-0 sudo[356253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:34:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Feb 28 10:34:30 compute-0 sudo[356253]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:34:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:34:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:34:30 compute-0 sudo[356310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:34:30 compute-0 sudo[356310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:30 compute-0 sudo[356310]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:31 compute-0 sudo[356335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:34:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:34:31 compute-0 sudo[356335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.353937683 +0000 UTC m=+0.070973812 container create 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:34:31 compute-0 systemd[1]: Started libpod-conmon-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope.
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.326157646 +0000 UTC m=+0.043193825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.448731048 +0000 UTC m=+0.165767257 container init 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.458554346 +0000 UTC m=+0.175590515 container start 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.464463734 +0000 UTC m=+0.181499893 container attach 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:34:31 compute-0 unruffled_hopper[356388]: 167 167
Feb 28 10:34:31 compute-0 systemd[1]: libpod-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope: Deactivated successfully.
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.468253911 +0000 UTC m=+0.185290080 container died 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e3c0693479d1dd5409223268e044fc11d30379c36f83bd69dc2718733b7ca44-merged.mount: Deactivated successfully.
Feb 28 10:34:31 compute-0 podman[356372]: 2026-02-28 10:34:31.513347599 +0000 UTC m=+0.230383718 container remove 75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:34:31 compute-0 systemd[1]: libpod-conmon-75c88b29786c27df72233c281d0ff1514a4f92c0b37697088a912f6813b4c52c.scope: Deactivated successfully.
Feb 28 10:34:31 compute-0 podman[356412]: 2026-02-28 10:34:31.701730985 +0000 UTC m=+0.054025861 container create b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:34:31 compute-0 systemd[1]: Started libpod-conmon-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope.
Feb 28 10:34:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:31 compute-0 podman[356412]: 2026-02-28 10:34:31.683692604 +0000 UTC m=+0.035987510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:31 compute-0 podman[356412]: 2026-02-28 10:34:31.792161197 +0000 UTC m=+0.144456113 container init b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:34:31 compute-0 podman[356412]: 2026-02-28 10:34:31.803456107 +0000 UTC m=+0.155751003 container start b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:34:31 compute-0 podman[356412]: 2026-02-28 10:34:31.807701977 +0000 UTC m=+0.159996863 container attach b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:34:32 compute-0 ceph-mon[76304]: pgmap v2105: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Feb 28 10:34:32 compute-0 pedantic_jang[356428]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:34:32 compute-0 pedantic_jang[356428]: --> All data devices are unavailable
Feb 28 10:34:32 compute-0 systemd[1]: libpod-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope: Deactivated successfully.
Feb 28 10:34:32 compute-0 podman[356412]: 2026-02-28 10:34:32.284168753 +0000 UTC m=+0.636463649 container died b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5db55bc98f0b0d73846869bf496d3bf745bbfbe442d431acf8ed731533f395cd-merged.mount: Deactivated successfully.
Feb 28 10:34:32 compute-0 podman[356412]: 2026-02-28 10:34:32.347688242 +0000 UTC m=+0.699983118 container remove b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:34:32 compute-0 systemd[1]: libpod-conmon-b94cd2e572b571432504219fc7261faf5815ba15a4f5593c9e3b8bd6012e9b32.scope: Deactivated successfully.
Feb 28 10:34:32 compute-0 sudo[356335]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:32 compute-0 sudo[356458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:34:32 compute-0 sudo[356458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:32 compute-0 sudo[356458]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:32 compute-0 sudo[356483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:34:32 compute-0 sudo[356483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:32 compute-0 nova_compute[243452]: 2026-02-28 10:34:32.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.897019293 +0000 UTC m=+0.038922283 container create ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:34:32 compute-0 systemd[1]: Started libpod-conmon-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope.
Feb 28 10:34:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.97315948 +0000 UTC m=+0.115062480 container init ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.878050816 +0000 UTC m=+0.019953866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.978561743 +0000 UTC m=+0.120464753 container start ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.981893688 +0000 UTC m=+0.123796718 container attach ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:34:32 compute-0 priceless_nightingale[356537]: 167 167
Feb 28 10:34:32 compute-0 systemd[1]: libpod-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope: Deactivated successfully.
Feb 28 10:34:32 compute-0 conmon[356537]: conmon ef6b598d272863769fd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope/container/memory.events
Feb 28 10:34:32 compute-0 podman[356521]: 2026-02-28 10:34:32.984275785 +0000 UTC m=+0.126178785 container died ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-806b9e05dca1acdff5ca5558ada994500022ebfe7bf2651fff195cc146fbe00f-merged.mount: Deactivated successfully.
Feb 28 10:34:33 compute-0 podman[356521]: 2026-02-28 10:34:33.018164615 +0000 UTC m=+0.160067625 container remove ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_nightingale, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:34:33 compute-0 systemd[1]: libpod-conmon-ef6b598d272863769fd44247c016b41e896a5bcfea7b6423d1629277c5878ab3.scope: Deactivated successfully.
Feb 28 10:34:33 compute-0 nova_compute[243452]: 2026-02-28 10:34:33.143 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.17754275 +0000 UTC m=+0.052960421 container create 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 28 10:34:33 compute-0 systemd[1]: Started libpod-conmon-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope.
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.152154571 +0000 UTC m=+0.027572322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.290293664 +0000 UTC m=+0.165711425 container init 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.30251298 +0000 UTC m=+0.177930681 container start 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.306822282 +0000 UTC m=+0.182239983 container attach 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:34:33 compute-0 peaceful_kare[356577]: {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     "0": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "devices": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "/dev/loop3"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             ],
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_name": "ceph_lv0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_size": "21470642176",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "name": "ceph_lv0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "tags": {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_name": "ceph",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.crush_device_class": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.encrypted": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.objectstore": "bluestore",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_id": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.vdo": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.with_tpm": "0"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             },
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "vg_name": "ceph_vg0"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         }
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     ],
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     "1": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "devices": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "/dev/loop4"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             ],
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_name": "ceph_lv1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_size": "21470642176",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "name": "ceph_lv1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "tags": {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_name": "ceph",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.crush_device_class": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.encrypted": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.objectstore": "bluestore",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_id": "1",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.vdo": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.with_tpm": "0"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             },
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "vg_name": "ceph_vg1"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         }
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     ],
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     "2": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "devices": [
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "/dev/loop5"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             ],
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_name": "ceph_lv2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_size": "21470642176",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "name": "ceph_lv2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "tags": {
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.cluster_name": "ceph",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.crush_device_class": "",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.encrypted": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.objectstore": "bluestore",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osd_id": "2",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.vdo": "0",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:                 "ceph.with_tpm": "0"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             },
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "type": "block",
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:             "vg_name": "ceph_vg2"
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:         }
Feb 28 10:34:33 compute-0 peaceful_kare[356577]:     ]
Feb 28 10:34:33 compute-0 peaceful_kare[356577]: }
Feb 28 10:34:33 compute-0 systemd[1]: libpod-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope: Deactivated successfully.
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.626757505 +0000 UTC m=+0.502175216 container died 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-928e1e36594004ff8016b6185a7e95de71566a5965f4057d0618371ba45d6525-merged.mount: Deactivated successfully.
Feb 28 10:34:33 compute-0 podman[356561]: 2026-02-28 10:34:33.682945257 +0000 UTC m=+0.558362928 container remove 8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:34:33 compute-0 systemd[1]: libpod-conmon-8c3d6846cae8f8c182c916301233458884de58f36909afd3d2aba8a3382b27dd.scope: Deactivated successfully.
Feb 28 10:34:33 compute-0 sudo[356483]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:33 compute-0 sudo[356598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:34:33 compute-0 sudo[356598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:33 compute-0 sudo[356598]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:33 compute-0 sudo[356623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:34:33 compute-0 sudo[356623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:34 compute-0 ceph-mon[76304]: pgmap v2106: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.124866105 +0000 UTC m=+0.045914362 container create 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:34:34 compute-0 systemd[1]: Started libpod-conmon-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope.
Feb 28 10:34:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.197364649 +0000 UTC m=+0.118412906 container init 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.106703071 +0000 UTC m=+0.027751368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.20482136 +0000 UTC m=+0.125869607 container start 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.208214436 +0000 UTC m=+0.129262743 container attach 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:34:34 compute-0 modest_beaver[356678]: 167 167
Feb 28 10:34:34 compute-0 systemd[1]: libpod-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope: Deactivated successfully.
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.209796721 +0000 UTC m=+0.130844958 container died 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0fecf89f690cec9420bbbe6cd3622820c8754910623dcb3d411729be17a62d5-merged.mount: Deactivated successfully.
Feb 28 10:34:34 compute-0 podman[356660]: 2026-02-28 10:34:34.259218651 +0000 UTC m=+0.180266938 container remove 373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_beaver, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:34:34 compute-0 systemd[1]: libpod-conmon-373542bd2d054ae681d9439acea153b3b86561a6e7b89faa929c7cf0d7fcc45f.scope: Deactivated successfully.
Feb 28 10:34:34 compute-0 nova_compute[243452]: 2026-02-28 10:34:34.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:34 compute-0 nova_compute[243452]: 2026-02-28 10:34:34.348 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:34 compute-0 podman[356702]: 2026-02-28 10:34:34.445819117 +0000 UTC m=+0.053855147 container create 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:34:34 compute-0 systemd[1]: Started libpod-conmon-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope.
Feb 28 10:34:34 compute-0 podman[356702]: 2026-02-28 10:34:34.416117235 +0000 UTC m=+0.024153245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:34:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:34 compute-0 podman[356702]: 2026-02-28 10:34:34.546024905 +0000 UTC m=+0.154060985 container init 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:34:34 compute-0 podman[356702]: 2026-02-28 10:34:34.551350386 +0000 UTC m=+0.159386396 container start 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:34:34 compute-0 podman[356702]: 2026-02-28 10:34:34.554524096 +0000 UTC m=+0.162560196 container attach 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:34:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Feb 28 10:34:35 compute-0 lvm[356796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:34:35 compute-0 lvm[356796]: VG ceph_vg0 finished
Feb 28 10:34:35 compute-0 lvm[356797]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:34:35 compute-0 lvm[356797]: VG ceph_vg1 finished
Feb 28 10:34:35 compute-0 lvm[356799]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:34:35 compute-0 lvm[356799]: VG ceph_vg2 finished
Feb 28 10:34:35 compute-0 festive_panini[356718]: {}
Feb 28 10:34:35 compute-0 systemd[1]: libpod-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Deactivated successfully.
Feb 28 10:34:35 compute-0 systemd[1]: libpod-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Consumed 1.128s CPU time.
Feb 28 10:34:35 compute-0 podman[356702]: 2026-02-28 10:34:35.330039184 +0000 UTC m=+0.938075184 container died 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d113fb27a73192ec1172163ce6494febe2ecd60094d6d6930d5e35a2d39f3f2-merged.mount: Deactivated successfully.
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.367 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.369 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:35 compute-0 podman[356702]: 2026-02-28 10:34:35.37578923 +0000 UTC m=+0.983825230 container remove 8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_panini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:34:35 compute-0 systemd[1]: libpod-conmon-8bc4cbd0ee616afe12c95baed2011d8dde18867fc8fa10e1c732ee4fb7874165.scope: Deactivated successfully.
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.398 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:34:35 compute-0 sudo[356623]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:34:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:34:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.481 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.482 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.491 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.491 243456 INFO nova.compute.claims [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:34:35 compute-0 sudo[356814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:34:35 compute-0 sudo[356814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:34:35 compute-0 sudo[356814]: pam_unix(sudo:session): session closed for user root
Feb 28 10:34:35 compute-0 nova_compute[243452]: 2026-02-28 10:34:35.608 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:36 compute-0 sshd-session[356278]: Connection reset by 198.235.24.176 port 60418 [preauth]
Feb 28 10:34:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285151819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.146 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.153 243456 DEBUG nova.compute.provider_tree [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.168 243456 DEBUG nova.scheduler.client.report [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.203 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.204 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.254 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.255 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.285 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.307 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.419 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.420 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.421 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating image(s)
Feb 28 10:34:36 compute-0 ceph-mon[76304]: pgmap v2107: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Feb 28 10:34:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:34:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/285151819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.451 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.481 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.510 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.515 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.547 243456 DEBUG nova.policy [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.586 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.587 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.588 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.589 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.620 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.624 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 29ebb761-c674-4ed1-aae0-554adf945402_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.893 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 29ebb761-c674-4ed1-aae0-554adf945402_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:36 compute-0 nova_compute[243452]: 2026-02-28 10:34:36.949 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.020 243456 DEBUG nova.objects.instance [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.046 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.046 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Ensure instance console log exists: /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.047 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:37 compute-0 podman[357028]: 2026-02-28 10:34:37.113650318 +0000 UTC m=+0.051109328 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:34:37 compute-0 podman[357027]: 2026-02-28 10:34:37.143677129 +0000 UTC m=+0.080951144 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.488 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully created port: 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.688 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274862.6872473, 4cf0bd1e-baff-4e42-ab90-cb45145ea4db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.689 243456 INFO nova.compute.manager [-] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] VM Stopped (Lifecycle Event)
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.709 243456 DEBUG nova.compute.manager [None req-3cfaae37-2fbb-4c5f-8de7-2f949caf50ae - - - - - -] [instance: 4cf0bd1e-baff-4e42-ab90-cb45145ea4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:37 compute-0 nova_compute[243452]: 2026-02-28 10:34:37.928 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully created port: 8f25c48f-b281-4784-a6b0-a2662d928d28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:38 compute-0 ceph-mon[76304]: pgmap v2108: 305 pgs: 305 active+clean; 153 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 KiB/s wr, 17 op/s
Feb 28 10:34:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 157 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 256 KiB/s wr, 16 op/s
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.843 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully updated port: 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:34:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.969 243456 DEBUG nova.compute.manager [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG nova.compute.manager [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.970 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:38 compute-0 nova_compute[243452]: 2026-02-28 10:34:38.971 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.245 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.350 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.899 243456 DEBUG nova.network.neutron [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.916 243456 DEBUG oslo_concurrency.lockutils [req-8cea70a9-ac12-481d-b44c-cfb1fd4d1f45 req-584dce0c-c27e-42a4-9caf-e489262101bf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:39 compute-0 nova_compute[243452]: 2026-02-28 10:34:39.987 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Successfully updated port: 8f25c48f-b281-4784-a6b0-a2662d928d28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:34:40 compute-0 nova_compute[243452]: 2026-02-28 10:34:40.007 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:40 compute-0 nova_compute[243452]: 2026-02-28 10:34:40.007 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:40 compute-0 nova_compute[243452]: 2026-02-28 10:34:40.008 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:34:40 compute-0 nova_compute[243452]: 2026-02-28 10:34:40.178 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:34:40 compute-0 ceph-mon[76304]: pgmap v2109: 305 pgs: 305 active+clean; 157 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 256 KiB/s wr, 16 op/s
Feb 28 10:34:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 200 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:34:41 compute-0 nova_compute[243452]: 2026-02-28 10:34:41.055 243456 DEBUG nova.compute.manager [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:41 compute-0 nova_compute[243452]: 2026-02-28 10:34:41.056 243456 DEBUG nova.compute.manager [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8f25c48f-b281-4784-a6b0-a2662d928d28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:41 compute-0 nova_compute[243452]: 2026-02-28 10:34:41.056 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003597216889628009 of space, bias 1.0, pg target 0.10791650668884027 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938484013542466 of space, bias 1.0, pg target 0.748154520406274 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.349881078222421e-07 of space, bias 4.0, pg target 0.0008819857293866905 quantized to 16 (current 16)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:34:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:34:41 compute-0 nova_compute[243452]: 2026-02-28 10:34:41.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.277 243456 DEBUG nova.network.neutron [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.316 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.317 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance network_info: |[{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.318 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.318 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8f25c48f-b281-4784-a6b0-a2662d928d28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.323 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start _get_guest_xml network_info=[{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.328 243456 WARNING nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.334 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.335 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.338 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.338 243456 DEBUG nova.virt.libvirt.host [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.339 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.339 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.340 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.340 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.341 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.342 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.343 243456 DEBUG nova.virt.hardware [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.347 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:42 compute-0 ceph-mon[76304]: pgmap v2110: 305 pgs: 305 active+clean; 200 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:34:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:34:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186612537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.961 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:42 compute-0 nova_compute[243452]: 2026-02-28 10:34:42.994 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.001 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1186612537' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:34:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627211567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.563 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.566 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.567 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.568 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.570 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.570 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.571 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.573 243456 DEBUG nova.objects.instance [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.593 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <uuid>29ebb761-c674-4ed1-aae0-554adf945402</uuid>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <name>instance-00000080</name>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1840268040</nova:name>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:34:42</nova:creationTime>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:port uuid="8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b">
Feb 28 10:34:43 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <nova:port uuid="8f25c48f-b281-4784-a6b0-a2662d928d28">
Feb 28 10:34:43 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9f:88f4" ipVersion="6"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="serial">29ebb761-c674-4ed1-aae0-554adf945402</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="uuid">29ebb761-c674-4ed1-aae0-554adf945402</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/29ebb761-c674-4ed1-aae0-554adf945402_disk">
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/29ebb761-c674-4ed1-aae0-554adf945402_disk.config">
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:34:43 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:12:72:96"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <target dev="tap8b2cb81f-77"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:9f:88:f4"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <target dev="tap8f25c48f-b2"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/console.log" append="off"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:34:43 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:34:43 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:43 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:43 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:43 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.595 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Preparing to wait for external event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.595 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.596 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.596 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.597 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Preparing to wait for external event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.597 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.598 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.598 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.599 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.600 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.601 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.602 243456 DEBUG os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.604 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.609 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b2cb81f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.610 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b2cb81f-77, col_values=(('external_ids', {'iface-id': '8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:72:96', 'vm-uuid': '29ebb761-c674-4ed1-aae0-554adf945402'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 NetworkManager[49805]: <info>  [1772274883.6137] manager: (tap8b2cb81f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.624 243456 INFO os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77')
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.625 243456 DEBUG nova.virt.libvirt.vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.625 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.626 243456 DEBUG nova.network.os_vif_util [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.626 243456 DEBUG os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.627 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f25c48f-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.630 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f25c48f-b2, col_values=(('external_ids', {'iface-id': '8f25c48f-b281-4784-a6b0-a2662d928d28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:88:f4', 'vm-uuid': '29ebb761-c674-4ed1-aae0-554adf945402'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 NetworkManager[49805]: <info>  [1772274883.6337] manager: (tap8f25c48f-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.640 243456 INFO os_vif [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2')
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.710 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:12:72:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.711 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:9f:88:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.712 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Using config drive
Feb 28 10:34:43 compute-0 nova_compute[243452]: 2026-02-28 10:34:43.736 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.100 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Creating config drive at /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.104 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpebrxls1q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.145 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8f25c48f-b281-4784-a6b0-a2662d928d28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.146 243456 DEBUG nova.network.neutron [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.167 243456 DEBUG oslo_concurrency.lockutils [req-e048e61d-f681-4e88-921f-43c5ba917fd5 req-4739973b-13d3-4082-8b3a-1d69c15964bc 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.249 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpebrxls1q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.278 243456 DEBUG nova.storage.rbd_utils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 29ebb761-c674-4ed1-aae0-554adf945402_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.282 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config 29ebb761-c674-4ed1-aae0-554adf945402_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.461 243456 DEBUG oslo_concurrency.processutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config 29ebb761-c674-4ed1-aae0-554adf945402_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.461 243456 INFO nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deleting local config drive /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402/disk.config because it was imported into RBD.
Feb 28 10:34:44 compute-0 ceph-mon[76304]: pgmap v2111: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:34:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1627211567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:44 compute-0 kernel: tap8b2cb81f-77: entered promiscuous mode
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5209] manager: (tap8b2cb81f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/535)
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01293|binding|INFO|Claiming lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for this chassis.
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01294|binding|INFO|8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b: Claiming fa:16:3e:12:72:96 10.100.0.3
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5336] manager: (tap8f25c48f-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.532 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 kernel: tap8f25c48f-b2: entered promiscuous mode
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01295|if_status|INFO|Not updating pb chassis for 8f25c48f-b281-4784-a6b0-a2662d928d28 now as sb is readonly
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01296|binding|INFO|Claiming lport 8f25c48f-b281-4784-a6b0-a2662d928d28 for this chassis.
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01297|binding|INFO|8f25c48f-b281-4784-a6b0-a2662d928d28: Claiming fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.551 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:72:96 10.100.0.3'], port_security=['fa:16:3e:12:72:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.553 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b bound to our chassis
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.555 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.562 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], port_security=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9f:88f4/64', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f25c48f-b281-4784-a6b0-a2662d928d28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:44 compute-0 systemd-udevd[357215]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:34:44 compute-0 systemd-udevd[357214]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.568 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec1964a-7483-4c01-8ef8-55b9b17e2bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.569 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d7aad4f-11 in ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:34:44 compute-0 systemd-machined[209480]: New machine qemu-161-instance-00000080.
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.571 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d7aad4f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.571 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e178e995-99cc-4773-a496-4afdb13e7ff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.573 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0850e2-e806-4ee9-b22a-9c556f214c25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01298|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b ovn-installed in OVS
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01299|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b up in Southbound
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.578 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01300|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 ovn-installed in OVS
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01301|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 up in Southbound
Feb 28 10:34:44 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.584 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[15de9745-728e-4131-9db6-b22c8bd1401a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5889] device (tap8f25c48f-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5897] device (tap8f25c48f-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5924] device (tap8b2cb81f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.5947] device (tap8b2cb81f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c91d5d-bd7b-40f0-b27c-ba52119d369c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.627 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cab5b0e4-c18c-4a26-a26f-de57ae46cf02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.6344] manager: (tap6d7aad4f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/537)
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.633 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[23426419-043d-4e39-808b-ea266af7cf4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.664 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[60956e49-9cb1-47a4-b98d-09ec0d994188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6184eb-a36c-406d-a5c6-acd082e167a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.6898] device (tap6d7aad4f-10): carrier: link connected
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.694 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2cff55-378d-41df-83fa-f3fca8a1b596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.708 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[13f19c25-243d-43fd-a602-53b93ff4b6e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357247, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.723 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9a2912-4689-42c1-8c8f-9501def70088]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:af4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639197, 'tstamp': 639197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357248, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.741 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b109234c-ce63-48b4-9577-e355723a7e69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357249, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a77dc7ca-99f3-4139-b12b-c052cebad6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7131a0-e327-4a28-8e56-e45ce0e47583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 NetworkManager[49805]: <info>  [1772274884.8214] manager: (tap6d7aad4f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Feb 28 10:34:44 compute-0 kernel: tap6d7aad4f-10: entered promiscuous mode
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.824 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:44 compute-0 ovn_controller[146846]: 2026-02-28T10:34:44Z|01302|binding|INFO|Releasing lport 99dd359f-3ab9-477c-a58c-1c56298be9c7 from this chassis (sb_readonly=0)
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.826 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.827 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[62d28e61-4b4c-4960-8b18-9c6a49f458a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.828 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/6d7aad4f-1a53-4b74-a216-4cac4be4283b.pid.haproxy
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:34:44 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:44.829 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'env', 'PROCESS_TAG=haproxy-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d7aad4f-1a53-4b74-a216-4cac4be4283b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:34:44 compute-0 nova_compute[243452]: 2026-02-28 10:34:44.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.071 243456 DEBUG nova.compute.manager [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.071 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG oslo_concurrency.lockutils [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.072 243456 DEBUG nova.compute.manager [req-ac9756ea-b330-45d1-8cdc-ffb8cbe3ecf8 req-4665e637-ac40-4708-b699-7d369b3a2d2c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Processing event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.199 243456 DEBUG nova.compute.manager [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.200 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.200 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.201 243456 DEBUG oslo_concurrency.lockutils [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.201 243456 DEBUG nova.compute.manager [req-d01de718-edfe-43b3-b09b-c07e39e58b2c req-069a95f4-f62a-4225-a2a1-eebdb03ea393 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Processing event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:34:45 compute-0 podman[357317]: 2026-02-28 10:34:45.213002758 +0000 UTC m=+0.068885782 container create 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:34:45 compute-0 systemd[1]: Started libpod-conmon-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope.
Feb 28 10:34:45 compute-0 podman[357317]: 2026-02-28 10:34:45.167679285 +0000 UTC m=+0.023562349 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:34:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.269 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.2683806, 29ebb761-c674-4ed1-aae0-554adf945402 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.270 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Started (Lifecycle Event)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.274 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56367b1f032c1af58ec46ce8c0be39bfb51d1c49bf83d9e0c00051135fcf4571/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.287 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.290 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:45 compute-0 podman[357317]: 2026-02-28 10:34:45.29319829 +0000 UTC m=+0.149081324 container init 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.295 243456 INFO nova.virt.libvirt.driver [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance spawned successfully.
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.296 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.299 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:34:45 compute-0 podman[357317]: 2026-02-28 10:34:45.301774863 +0000 UTC m=+0.157657847 container start 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.320 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.321 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.326 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.327 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.268748, 29ebb761-c674-4ed1-aae0-554adf945402 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.327 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Paused (Lifecycle Event)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.336 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.337 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.338 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.338 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : New worker (357345) forked
Feb 28 10:34:45 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : Loading success.
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.339 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.340 243456 DEBUG nova.virt.libvirt.driver [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.348 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.348 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.353 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274885.285433, 29ebb761-c674-4ed1-aae0-554adf945402 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.354 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Resumed (Lifecycle Event)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.375 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.398 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f25c48f-b281-4784-a6b0-a2662d928d28 in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.399 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.402 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.411 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5c4824-763b-45a1-b9ec-fb17845bfd00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.412 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49ec66b0-81 in ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.414 243456 INFO nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 8.99 seconds to spawn the instance on the hypervisor.
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.414 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49ec66b0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.415 243456 DEBUG nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.415 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4270933e-7004-4db5-8850-0c7540e0d4c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.416 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[484be7ba-b2cd-402d-ba31-88fd3c51c3d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.431 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ec21a504-b3b9-4db8-b922-82323f1c8969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.444 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e892f67e-6678-4ee3-bc9e-d0e4b9486e60]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.479 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eca19d-a2ca-432a-95b0-46428add556d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.487 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bab0812b-533b-4d20-a0ed-c1d4996aec40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.487 243456 INFO nova.compute.manager [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 10.04 seconds to build instance.
Feb 28 10:34:45 compute-0 systemd-udevd[357238]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:34:45 compute-0 NetworkManager[49805]: <info>  [1772274885.4901] manager: (tap49ec66b0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.520 243456 DEBUG oslo_concurrency.lockutils [None req-4e0f905b-8ad5-4d09-bc20-bbb5752813f2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.541 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[269b3f34-6041-49ea-a295-b9ae01662858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.546 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1671cbb4-32df-4f50-9c59-24af5444899d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:34:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:34:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:34:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:34:45 compute-0 NetworkManager[49805]: <info>  [1772274885.5734] device (tap49ec66b0-80): carrier: link connected
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.578 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb39be6-b7af-425c-8951-c3797affa60b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.599 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc6280-617e-43ab-b04c-c60403273631]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357364, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.614 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bdcd01ca-620c-4b26-aaeb-dcfb03e3379b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:61be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639285, 'tstamp': 639285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357365, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf18d8a2-36ba-45c0-91f0-2f42b3a871ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357366, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.680 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b75b5dbd-ac73-4180-865e-15156e293801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.716 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a52cc83-a23a-4d73-bd0f-f5b49e3b9481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.719 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.719 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.720 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:45 compute-0 NetworkManager[49805]: <info>  [1772274885.7249] manager: (tap49ec66b0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Feb 28 10:34:45 compute-0 kernel: tap49ec66b0-80: entered promiscuous mode
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.731 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:45 compute-0 ovn_controller[146846]: 2026-02-28T10:34:45Z|01303|binding|INFO|Releasing lport 0d93ffc1-1158-4b54-b2c1-6b7d48d62d16 from this chassis (sb_readonly=0)
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.738 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:34:45 compute-0 nova_compute[243452]: 2026-02-28 10:34:45.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.740 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b00722e5-16d8-4d52-af71-dbad671f6fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.742 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.pid.haproxy
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:34:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:45.746 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'env', 'PROCESS_TAG=haproxy-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49ec66b0-8f5d-445b-a7e6-7fd41e785d9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.054 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.055 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.091 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:34:46 compute-0 podman[357396]: 2026-02-28 10:34:46.116316697 +0000 UTC m=+0.049362739 container create 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:34:46 compute-0 systemd[1]: Started libpod-conmon-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope.
Feb 28 10:34:46 compute-0 podman[357396]: 2026-02-28 10:34:46.090831275 +0000 UTC m=+0.023877297 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:34:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17053d1edc9fada6f68225e884d047e5575813d4682169f8b77e31a5f244466/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.210 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.210 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.220 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.220 243456 INFO nova.compute.claims [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:34:46 compute-0 podman[357396]: 2026-02-28 10:34:46.234386922 +0000 UTC m=+0.167433034 container init 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:34:46 compute-0 podman[357396]: 2026-02-28 10:34:46.242712877 +0000 UTC m=+0.175758919 container start 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 28 10:34:46 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : New worker (357418) forked
Feb 28 10:34:46 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : Loading success.
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.294 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.310 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.311 243456 DEBUG nova.compute.provider_tree [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.331 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.335 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.360 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:34:46 compute-0 nova_compute[243452]: 2026-02-28 10:34:46.429 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:46 compute-0 ceph-mon[76304]: pgmap v2112: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:34:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:34:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2096092848' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:34:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:34:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956785917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.006 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.015 243456 DEBUG nova.compute.provider_tree [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.034 243456 DEBUG nova.scheduler.client.report [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.060 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.061 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.063 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.063 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.064 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.064 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.153 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.153 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.157 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 DEBUG oslo_concurrency.lockutils [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 DEBUG nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.158 243456 WARNING nova.compute.manager [req-6f84bf83-b2a9-4a94-a4b3-2fdf1dd57688 req-8c46db3f-194b-4318-8ea7-68f485dedce6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with vm_state active and task_state None.
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.170 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.188 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.269 243456 DEBUG nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.269 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG oslo_concurrency.lockutils [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.270 243456 DEBUG nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.271 243456 WARNING nova.compute.manager [req-c5a3041e-a045-467c-9ffd-48c6adf38368 req-0b76ab3d-486c-43ed-9dd3-a5a4a9fcf74c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with vm_state active and task_state None.
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.276 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.277 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.278 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating image(s)
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.302 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.328 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.358 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.363 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.439 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.441 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.442 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.442 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.478 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.483 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 367042aa-0043-4283-a399-ea4a6a1545f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2956785917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.708 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 367042aa-0043-4283-a399-ea4a6a1545f7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710081428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.793 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.828 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.764s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.886 243456 DEBUG nova.objects.instance [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.907 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Ensure instance console log exists: /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.908 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.909 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.918 243456 DEBUG nova.policy [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.926 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:34:47 compute-0 nova_compute[243452]: 2026-02-28 10:34:47.926 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.078 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.079 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.966557927429676GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.079 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.080 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.141 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 29ebb761-c674-4ed1-aae0-554adf945402 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 367042aa-0043-4283-a399-ea4a6a1545f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.190 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:48 compute-0 ceph-mon[76304]: pgmap v2113: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:34:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1710081428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:34:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841280857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.785 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.793 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.811 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:34:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.839 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:34:48 compute-0 nova_compute[243452]: 2026-02-28 10:34:48.840 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:49 compute-0 nova_compute[243452]: 2026-02-28 10:34:49.134 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Successfully created port: 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:34:49 compute-0 nova_compute[243452]: 2026-02-28 10:34:49.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/841280857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:34:49 compute-0 nova_compute[243452]: 2026-02-28 10:34:49.606 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:49 compute-0 NetworkManager[49805]: <info>  [1772274889.6076] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Feb 28 10:34:49 compute-0 NetworkManager[49805]: <info>  [1772274889.6085] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Feb 28 10:34:49 compute-0 nova_compute[243452]: 2026-02-28 10:34:49.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:49 compute-0 ovn_controller[146846]: 2026-02-28T10:34:49Z|01304|binding|INFO|Releasing lport 0d93ffc1-1158-4b54-b2c1-6b7d48d62d16 from this chassis (sb_readonly=0)
Feb 28 10:34:49 compute-0 ovn_controller[146846]: 2026-02-28T10:34:49Z|01305|binding|INFO|Releasing lport 99dd359f-3ab9-477c-a58c-1c56298be9c7 from this chassis (sb_readonly=0)
Feb 28 10:34:49 compute-0 nova_compute[243452]: 2026-02-28 10:34:49.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.156 243456 DEBUG nova.compute.manager [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.157 243456 DEBUG nova.compute.manager [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.157 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.158 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.158 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:50 compute-0 ceph-mon[76304]: pgmap v2114: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.562 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Successfully updated port: 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.577 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.578 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.578 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:34:50 compute-0 nova_compute[243452]: 2026-02-28 10:34:50.778 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:34:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 224 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 103 op/s
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.241 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.242 243456 DEBUG nova.network.neutron [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.311 243456 DEBUG nova.compute.manager [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.312 243456 DEBUG nova.compute.manager [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.312 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.403 243456 DEBUG oslo_concurrency.lockutils [req-a54a671c-7309-4763-ac95-5647cbe4e704 req-1f2c534d-5a70-4834-b38b-9296ee6da096 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:52 compute-0 ceph-mon[76304]: pgmap v2115: 305 pgs: 305 active+clean; 224 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 103 op/s
Feb 28 10:34:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.867 243456 DEBUG nova.network.neutron [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.890 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.890 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance network_info: |[{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.892 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.892 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.895 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start _get_guest_xml network_info=[{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.901 243456 WARNING nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.907 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.907 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.914 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.916 243456 DEBUG nova.virt.libvirt.host [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.916 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.917 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.918 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.919 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.920 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.921 243456 DEBUG nova.virt.hardware [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:34:52 compute-0 nova_compute[243452]: 2026-02-28 10:34:52.925 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:34:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934714578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/934714578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:53 compute-0 nova_compute[243452]: 2026-02-28 10:34:53.542 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:53 compute-0 nova_compute[243452]: 2026-02-28 10:34:53.571 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:53 compute-0 nova_compute[243452]: 2026-02-28 10:34:53.577 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:53 compute-0 nova_compute[243452]: 2026-02-28 10:34:53.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:34:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636135008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.087 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.091 243456 DEBUG nova.virt.libvirt.vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:47Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.092 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.094 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.097 243456 DEBUG nova.objects.instance [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.113 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <uuid>367042aa-0043-4283-a399-ea4a6a1545f7</uuid>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <name>instance-00000081</name>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1965662134</nova:name>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:34:52</nova:creationTime>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <nova:port uuid="92f5c154-2fa7-43e9-a6fd-da26d3ad985b">
Feb 28 10:34:54 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <system>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="serial">367042aa-0043-4283-a399-ea4a6a1545f7</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="uuid">367042aa-0043-4283-a399-ea4a6a1545f7</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </system>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <os>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </os>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <features>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </features>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/367042aa-0043-4283-a399-ea4a6a1545f7_disk">
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/367042aa-0043-4283-a399-ea4a6a1545f7_disk.config">
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </source>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:34:54 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:56:eb:46"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <target dev="tap92f5c154-2f"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/console.log" append="off"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <video>
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </video>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:34:54 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:34:54 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:34:54 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:34:54 compute-0 nova_compute[243452]: </domain>
Feb 28 10:34:54 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.126 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Preparing to wait for external event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.127 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.128 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.128 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.134 243456 DEBUG nova.virt.libvirt.vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:34:47Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.135 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.136 243456 DEBUG nova.network.os_vif_util [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.137 243456 DEBUG os_vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.139 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92f5c154-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92f5c154-2f, col_values=(('external_ids', {'iface-id': '92f5c154-2fa7-43e9-a6fd-da26d3ad985b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:eb:46', 'vm-uuid': '367042aa-0043-4283-a399-ea4a6a1545f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:54 compute-0 NetworkManager[49805]: <info>  [1772274894.1481] manager: (tap92f5c154-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.160 243456 INFO os_vif [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f')
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.214 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.214 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.215 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:56:eb:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.216 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Using config drive
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.248 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.464 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.465 243456 DEBUG nova.network.neutron [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.480 243456 DEBUG oslo_concurrency.lockutils [req-f555f860-582a-4717-ab43-9e01b17c7a4a req-5ffc07bc-7f33-4ad3-9671-d0a289fea844 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:34:54 compute-0 ceph-mon[76304]: pgmap v2116: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:34:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2636135008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:34:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.867 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Creating config drive at /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config
Feb 28 10:34:54 compute-0 nova_compute[243452]: 2026-02-28 10:34:54.874 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8ehqoe4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.024 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8ehqoe4" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.057 243456 DEBUG nova.storage.rbd_utils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.061 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.212 243456 DEBUG oslo_concurrency.processutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config 367042aa-0043-4283-a399-ea4a6a1545f7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.214 243456 INFO nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deleting local config drive /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7/disk.config because it was imported into RBD.
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.2917] manager: (tap92f5c154-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Feb 28 10:34:55 compute-0 kernel: tap92f5c154-2f: entered promiscuous mode
Feb 28 10:34:55 compute-0 ovn_controller[146846]: 2026-02-28T10:34:55Z|01306|binding|INFO|Claiming lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b for this chassis.
Feb 28 10:34:55 compute-0 ovn_controller[146846]: 2026-02-28T10:34:55Z|01307|binding|INFO|92f5c154-2fa7-43e9-a6fd-da26d3ad985b: Claiming fa:16:3e:56:eb:46 10.100.0.11
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.302 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.309 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:eb:46 10.100.0.11'], port_security=['fa:16:3e:56:eb:46 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '367042aa-0043-4283-a399-ea4a6a1545f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18ab5e58-5378-41c9-af44-86d27866eb7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92f5c154-2fa7-43e9-a6fd-da26d3ad985b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.312 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 bound to our chassis
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.315 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 10:34:55 compute-0 ovn_controller[146846]: 2026-02-28T10:34:55Z|01308|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b ovn-installed in OVS
Feb 28 10:34:55 compute-0 ovn_controller[146846]: 2026-02-28T10:34:55Z|01309|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b up in Southbound
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.324 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:55 compute-0 systemd-udevd[357798]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.329 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ace1e54e-61a6-4a9d-a5ea-15770741e2eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.333 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc48ff26a-41 in ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.338 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc48ff26a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.338 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbf029b-a787-4fdf-a86f-93f83d22c50a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa77982e-9d0c-4df7-b14c-8eb98816765e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 systemd-machined[209480]: New machine qemu-162-instance-00000081.
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.3447] device (tap92f5c154-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.3458] device (tap92f5c154-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:34:55 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000081.
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.357 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f67701cc-b8b5-4a06-893f-2531b84fb599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.370 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe309e7-fadd-484a-8002-ba2f820d2bde]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.399 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4811d5ad-ca40-4d53-b40a-160a1919eebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.405 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d07e639-091a-4039-8dfb-e6233861cc9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.4056] manager: (tapc48ff26a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.442 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[810a140e-6857-4701-a0d5-c49ba20a9b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.445 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a2e438-21de-48a6-9962-d4b3c80ace14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.4657] device (tapc48ff26a-40): carrier: link connected
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.469 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[515efe5a-513a-40e9-ba73-c65f0bccf114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.488 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3aef9563-66af-41ff-928b-6df6968401ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357831, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.500 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5811a0e0-6565-4925-9851-819e24f8065a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:5ffd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640275, 'tstamp': 640275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357832, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17bb3afd-de26-40ac-b702-78c2b08e3681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357833, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b81654d-7d2d-431e-a9a5-bce7994c898c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG nova.compute.manager [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.575 243456 DEBUG oslo_concurrency.lockutils [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.576 243456 DEBUG nova.compute.manager [req-a4575a6c-458c-4f58-bfad-5e13aa91dd04 req-19f96246-dd2b-4ba5-9971-3db05819e5e0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Processing event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d536e7-9a7f-4f92-b706-d947a99202c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.587 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.587 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:55 compute-0 NetworkManager[49805]: <info>  [1772274895.5905] manager: (tapc48ff26a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Feb 28 10:34:55 compute-0 kernel: tapc48ff26a-40: entered promiscuous mode
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.593 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:34:55 compute-0 ovn_controller[146846]: 2026-02-28T10:34:55Z|01310|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 10:34:55 compute-0 nova_compute[243452]: 2026-02-28 10:34:55.602 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.604 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.605 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[165f15a1-e9e1-4378-aff6-e02756d65c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.606 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/c48ff26a-49d0-4144-b27f-14431e751ba2.pid.haproxy
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:34:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:55.608 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'env', 'PROCESS_TAG=haproxy-c48ff26a-49d0-4144-b27f-14431e751ba2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c48ff26a-49d0-4144-b27f-14431e751ba2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:34:56 compute-0 podman[357863]: 2026-02-28 10:34:56.03026725 +0000 UTC m=+0.072521836 container create c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:34:56 compute-0 systemd[1]: Started libpod-conmon-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope.
Feb 28 10:34:56 compute-0 podman[357863]: 2026-02-28 10:34:55.992570652 +0000 UTC m=+0.034825258 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:34:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cbf1e9a1127d383c385bf63b55eb566697bacb083039d81a658a47a2f6ccb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:34:56 compute-0 podman[357863]: 2026-02-28 10:34:56.109367681 +0000 UTC m=+0.151622317 container init c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:34:56 compute-0 podman[357863]: 2026-02-28 10:34:56.116015889 +0000 UTC m=+0.158270475 container start c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:34:56 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : New worker (357925) forked
Feb 28 10:34:56 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : Loading success.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.183 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.1834483, 367042aa-0043-4283-a399-ea4a6a1545f7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.185 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Started (Lifecycle Event)
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.190 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.195 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.199 243456 INFO nova.virt.libvirt.driver [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance spawned successfully.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.199 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.223 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.229 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.234 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.235 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.236 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.237 243456 DEBUG nova.virt.libvirt.driver [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.248 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.249 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.183598, 367042aa-0043-4283-a399-ea4a6a1545f7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.249 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Paused (Lifecycle Event)
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.271 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.275 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274896.1924148, 367042aa-0043-4283-a399-ea4a6a1545f7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.275 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Resumed (Lifecycle Event)
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.292 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.299 243456 INFO nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 9.02 seconds to spawn the instance on the hypervisor.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.300 243456 DEBUG nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.311 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.359 243456 INFO nova.compute.manager [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 10.19 seconds to build instance.
Feb 28 10:34:56 compute-0 nova_compute[243452]: 2026-02-28 10:34:56.374 243456 DEBUG oslo_concurrency.lockutils [None req-02772e82-ee32-4010-9dda-8ec2e0a1c01c ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:56 compute-0 ceph-mon[76304]: pgmap v2117: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:34:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.670 243456 DEBUG nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.671 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.672 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.673 243456 DEBUG oslo_concurrency.lockutils [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.673 243456 DEBUG nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:34:57 compute-0 nova_compute[243452]: 2026-02-28 10:34:57.674 243456 WARNING nova.compute.manager [req-daefa8b7-7f69-4936-8cdd-691b22fcbc50 req-c00ec00b-dfdf-4788-9436-33d4c22071ae 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state active and task_state None.
Feb 28 10:34:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.874 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:34:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:34:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:34:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:34:58 compute-0 ovn_controller[146846]: 2026-02-28T10:34:58Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:72:96 10.100.0.3
Feb 28 10:34:58 compute-0 ovn_controller[146846]: 2026-02-28T10:34:58Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:72:96 10.100.0.3
Feb 28 10:34:58 compute-0 ceph-mon[76304]: pgmap v2118: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 28 10:34:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 251 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 123 op/s
Feb 28 10:34:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:34:59 compute-0 nova_compute[243452]: 2026-02-28 10:34:59.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:34:59 compute-0 nova_compute[243452]: 2026-02-28 10:34:59.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:00 compute-0 nova_compute[243452]: 2026-02-28 10:35:00.114 243456 DEBUG nova.compute.manager [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:00 compute-0 nova_compute[243452]: 2026-02-28 10:35:00.115 243456 DEBUG nova.compute.manager [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:00 compute-0 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:00 compute-0 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:00 compute-0 nova_compute[243452]: 2026-02-28 10:35:00.116 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:00 compute-0 ceph-mon[76304]: pgmap v2119: 305 pgs: 305 active+clean; 251 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 123 op/s
Feb 28 10:35:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 277 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 223 op/s
Feb 28 10:35:01 compute-0 nova_compute[243452]: 2026-02-28 10:35:01.375 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:01 compute-0 nova_compute[243452]: 2026-02-28 10:35:01.377 243456 DEBUG nova.network.neutron [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:01 compute-0 nova_compute[243452]: 2026-02-28 10:35:01.401 243456 DEBUG oslo_concurrency.lockutils [req-57ceae84-4f96-4798-a26c-11082b1c1c7b req-b1c51633-6b76-4917-81c3-b8b3165b027b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:02 compute-0 ceph-mon[76304]: pgmap v2120: 305 pgs: 305 active+clean; 277 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 223 op/s
Feb 28 10:35:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Feb 28 10:35:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:04 compute-0 nova_compute[243452]: 2026-02-28 10:35:04.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:04 compute-0 nova_compute[243452]: 2026-02-28 10:35:04.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:04 compute-0 ceph-mon[76304]: pgmap v2121: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Feb 28 10:35:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:35:06 compute-0 ceph-mon[76304]: pgmap v2122: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 28 10:35:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 10:35:07 compute-0 ovn_controller[146846]: 2026-02-28T10:35:07Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:eb:46 10.100.0.11
Feb 28 10:35:07 compute-0 ovn_controller[146846]: 2026-02-28T10:35:07Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:eb:46 10.100.0.11
Feb 28 10:35:08 compute-0 podman[357935]: 2026-02-28 10:35:08.133624982 +0000 UTC m=+0.071375013 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:35:08 compute-0 podman[357936]: 2026-02-28 10:35:08.137739869 +0000 UTC m=+0.070622282 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 28 10:35:08 compute-0 ceph-mon[76304]: pgmap v2123: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 10:35:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 288 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Feb 28 10:35:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:08 compute-0 nova_compute[243452]: 2026-02-28 10:35:08.941 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:08 compute-0 nova_compute[243452]: 2026-02-28 10:35:08.942 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:08 compute-0 nova_compute[243452]: 2026-02-28 10:35:08.956 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.028 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.029 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.038 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.039 243456 INFO nova.compute.claims [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.154 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.162 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328452197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.740 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.745 243456 DEBUG nova.compute.provider_tree [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.762 243456 DEBUG nova.scheduler.client.report [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.784 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.826 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.827 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.846 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.864 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.948 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.950 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.950 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating image(s)
Feb 28 10:35:09 compute-0 nova_compute[243452]: 2026-02-28 10:35:09.977 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.005 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.032 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.035 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.070 243456 DEBUG nova.policy [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.122 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.123 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.124 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.125 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.156 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.160 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f13d8adc-1a08-412b-a9fa-c8a601cda923_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.423 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f13d8adc-1a08-412b-a9fa-c8a601cda923_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.516 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:35:10 compute-0 ceph-mon[76304]: pgmap v2124: 305 pgs: 305 active+clean; 288 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Feb 28 10:35:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/328452197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.642 243456 DEBUG nova.objects.instance [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.662 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.662 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Ensure instance console log exists: /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.663 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.664 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.664 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:10 compute-0 nova_compute[243452]: 2026-02-28 10:35:10.677 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully created port: a81e3b75-649b-4321-b436-ab01ab0a9e05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:35:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 165 op/s
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.206 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully created port: d681366d-e6b5-4dad-847e-d091bc7b112d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.833 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully updated port: a81e3b75-649b-4321-b436-ab01ab0a9e05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.909 243456 DEBUG nova.compute.manager [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG nova.compute.manager [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:11 compute-0 nova_compute[243452]: 2026-02-28 10:35:11.910 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.059 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:35:12 compute-0 ceph-mon[76304]: pgmap v2125: 305 pgs: 305 active+clean; 322 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 165 op/s
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.711 243456 DEBUG nova.network.neutron [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.729 243456 DEBUG oslo_concurrency.lockutils [req-7b789caf-1680-4a0d-80e1-e61234a67bc4 req-c4aea1bb-336e-40f0-ac33-6bf4ff298ed7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.739 243456 INFO nova.compute.manager [None req-2c9fc8d1-def2-4a7a-ab0f-844ace4f96fd ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Get console output
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.748 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:35:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.7 MiB/s wr, 75 op/s
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.907 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Successfully updated port: d681366d-e6b5-4dad-847e-d091bc7b112d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.924 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.925 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:12 compute-0 nova_compute[243452]: 2026-02-28 10:35:12.925 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:35:13 compute-0 nova_compute[243452]: 2026-02-28 10:35:13.066 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:35:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.052 243456 DEBUG nova.compute.manager [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.053 243456 DEBUG nova.compute.manager [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-d681366d-e6b5-4dad-847e-d091bc7b112d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.054 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.156 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.193 243456 DEBUG nova.compute.manager [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.193 243456 DEBUG nova.compute.manager [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing instance network info cache due to event network-changed-92f5c154-2fa7-43e9-a6fd-da26d3ad985b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.194 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Refreshing network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.400 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:14 compute-0 ceph-mon[76304]: pgmap v2126: 305 pgs: 305 active+clean; 344 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 3.7 MiB/s wr, 75 op/s
Feb 28 10:35:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.861 243456 DEBUG nova.network.neutron [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.892 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.893 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance network_info: |[{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.894 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.894 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port d681366d-e6b5-4dad-847e-d091bc7b112d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.901 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start _get_guest_xml network_info=[{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.908 243456 WARNING nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.914 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.915 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.927 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.928 243456 DEBUG nova.virt.libvirt.host [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.928 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.929 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.929 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.930 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.931 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.931 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.932 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.933 243456 DEBUG nova.virt.hardware [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:35:14 compute-0 nova_compute[243452]: 2026-02-28 10:35:14.938 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:35:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037745291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:15 compute-0 nova_compute[243452]: 2026-02-28 10:35:15.510 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:15 compute-0 nova_compute[243452]: 2026-02-28 10:35:15.536 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:15 compute-0 nova_compute[243452]: 2026-02-28 10:35:15.541 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1037745291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:15.859 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:15 compute-0 nova_compute[243452]: 2026-02-28 10:35:15.860 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:15.862 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:35:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:35:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408567993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.102 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.105 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.106 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.107 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.109 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.109 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.110 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.112 243456 DEBUG nova.objects.instance [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.134 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <uuid>f13d8adc-1a08-412b-a9fa-c8a601cda923</uuid>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <name>instance-00000082</name>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1382104323</nova:name>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:35:14</nova:creationTime>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:port uuid="a81e3b75-649b-4321-b436-ab01ab0a9e05">
Feb 28 10:35:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <nova:port uuid="d681366d-e6b5-4dad-847e-d091bc7b112d">
Feb 28 10:35:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:febb:a7ba" ipVersion="6"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <system>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="serial">f13d8adc-1a08-412b-a9fa-c8a601cda923</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="uuid">f13d8adc-1a08-412b-a9fa-c8a601cda923</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </system>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <os>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </os>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <features>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </features>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f13d8adc-1a08-412b-a9fa-c8a601cda923_disk">
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config">
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:35:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:2e:7a:c9"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <target dev="tapa81e3b75-64"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bb:a7:ba"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <target dev="tapd681366d-e6"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/console.log" append="off"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:35:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:35:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:35:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:35:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:35:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.135 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Preparing to wait for external event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.135 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.136 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.136 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Preparing to wait for external event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.137 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.138 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.139 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.140 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.141 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.142 243456 DEBUG os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.143 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.144 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.149 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa81e3b75-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.150 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa81e3b75-64, col_values=(('external_ids', {'iface-id': 'a81e3b75-649b-4321-b436-ab01ab0a9e05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:7a:c9', 'vm-uuid': 'f13d8adc-1a08-412b-a9fa-c8a601cda923'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 NetworkManager[49805]: <info>  [1772274916.1545] manager: (tapa81e3b75-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.161 243456 INFO os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64')
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.162 243456 DEBUG nova.virt.libvirt.vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:09Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.162 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG nova.network.os_vif_util [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.164 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.164 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.167 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.167 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd681366d-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd681366d-e6, col_values=(('external_ids', {'iface-id': 'd681366d-e6b5-4dad-847e-d091bc7b112d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:a7:ba', 'vm-uuid': 'f13d8adc-1a08-412b-a9fa-c8a601cda923'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 NetworkManager[49805]: <info>  [1772274916.1704] manager: (tapd681366d-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.178 243456 INFO os_vif [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6')
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.242 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:2e:7a:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.243 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:bb:a7:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.244 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Using config drive
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.274 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.406 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updated VIF entry in instance network info cache for port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.407 243456 DEBUG nova.network.neutron [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [{"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.432 243456 DEBUG oslo_concurrency.lockutils [req-2b9979fc-5369-440f-baee-a7cff35bf526 req-1cc59d99-a5ad-4a3c-94af-834b6d269aa0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-367042aa-0043-4283-a399-ea4a6a1545f7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.590 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Creating config drive at /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.596 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3h85tpwm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:16 compute-0 ceph-mon[76304]: pgmap v2127: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 28 10:35:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3408567993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.750 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3h85tpwm" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.788 243456 DEBUG nova.storage.rbd_utils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.792 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.866 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port d681366d-e6b5-4dad-847e-d091bc7b112d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.867 243456 DEBUG nova.network.neutron [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.886 243456 DEBUG oslo_concurrency.lockutils [req-10722c50-b0b0-4e26-a272-f523f83a84d0 req-2aaf6ba4-b481-40cb-a232-c90f0f07bb47 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.962 243456 DEBUG oslo_concurrency.processutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config f13d8adc-1a08-412b-a9fa-c8a601cda923_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:16 compute-0 nova_compute[243452]: 2026-02-28 10:35:16.963 243456 INFO nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deleting local config drive /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923/disk.config because it was imported into RBD.
Feb 28 10:35:17 compute-0 kernel: tapa81e3b75-64: entered promiscuous mode
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0125] manager: (tapa81e3b75-64): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01311|binding|INFO|Claiming lport a81e3b75-649b-4321-b436-ab01ab0a9e05 for this chassis.
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01312|binding|INFO|a81e3b75-649b-4321-b436-ab01ab0a9e05: Claiming fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.026 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:7a:c9 10.100.0.13'], port_security=['fa:16:3e:2e:7a:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a81e3b75-649b-4321-b436-ab01ab0a9e05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.029 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a81e3b75-649b-4321-b436-ab01ab0a9e05 in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b bound to our chassis
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01313|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 ovn-installed in OVS
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01314|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 up in Southbound
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.032 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.033 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 10:35:17 compute-0 kernel: tapd681366d-e6: entered promiscuous mode
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0362] manager: (tapd681366d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01315|if_status|INFO|Not updating pb chassis for d681366d-e6b5-4dad-847e-d091bc7b112d now as sb is readonly
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01316|binding|INFO|Claiming lport d681366d-e6b5-4dad-847e-d091bc7b112d for this chassis.
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01317|binding|INFO|d681366d-e6b5-4dad-847e-d091bc7b112d: Claiming fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01318|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d ovn-installed in OVS
Feb 28 10:35:17 compute-0 systemd-udevd[358308]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:35:17 compute-0 systemd-udevd[358309]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.054 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], port_security=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febb:a7ba/64', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d681366d-e6b5-4dad-847e-d091bc7b112d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:17 compute-0 ovn_controller[146846]: 2026-02-28T10:35:17Z|01319|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d up in Southbound
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ae57246e-daed-4f66-b5a6-22d279fd03d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0646] device (tapa81e3b75-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0655] device (tapa81e3b75-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0674] device (tapd681366d-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:35:17 compute-0 NetworkManager[49805]: <info>  [1772274917.0682] device (tapd681366d-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:35:17 compute-0 systemd-machined[209480]: New machine qemu-163-instance-00000082.
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.091 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[234acf0a-6926-4353-9cfa-5adff19d3900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.094 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[483b6df9-5365-4a3f-a9de-2abbcd384c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.123 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2847b9c9-b7ad-4875-803d-f281f4f3706a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.142 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d2f12-1e55-4824-a2fd-5e5f7f5bb776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358325, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.158 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0126c2-67e3-4825-8851-e1f80fe9361e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639207, 'tstamp': 639207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358327, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639210, 'tstamp': 639210}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358327, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.161 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.164 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.164 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.165 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.167 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d681366d-e6b5-4dad-847e-d091bc7b112d in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.168 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.185 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[feb9ac71-9a39-4eda-ba7e-34c7c863e491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.207 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3546a2-ce02-41f6-8ae8-cef92046b870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.210 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[808df8af-4cc8-4a4d-8d22-d137882b5efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.231 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb1a0c1-af39-4651-8657-381790bbe548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[76d901c1-7e8b-4fd6-935f-ff430078e2b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 5, 'rx_bytes': 1572, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 5, 'rx_bytes': 1572, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358333, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.259 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea741ae6-231c-43d2-9faf-37bb0a5843ba]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49ec66b0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639299, 'tstamp': 639299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358334, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.261 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.264 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:17.265 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.548 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274917.5477853, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.549 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Started (Lifecycle Event)
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.578 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.583 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274917.5481324, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.583 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Paused (Lifecycle Event)
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.601 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.605 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:35:17 compute-0 nova_compute[243452]: 2026-02-28 10:35:17.625 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:35:18 compute-0 ceph-mon[76304]: pgmap v2128: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Feb 28 10:35:18 compute-0 nova_compute[243452]: 2026-02-28 10:35:18.686 243456 DEBUG nova.compute.manager [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:18 compute-0 nova_compute[243452]: 2026-02-28 10:35:18.686 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:18 compute-0 nova_compute[243452]: 2026-02-28 10:35:18.687 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:18 compute-0 nova_compute[243452]: 2026-02-28 10:35:18.687 243456 DEBUG oslo_concurrency.lockutils [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:18 compute-0 nova_compute[243452]: 2026-02-28 10:35:18.688 243456 DEBUG nova.compute.manager [req-8f881ff6-02bb-41d3-8285-070a1bfc7bc1 req-d6eebcef-a011-40c1-85a9-47e4423af8d4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Processing event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:35:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 28 10:35:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:19 compute-0 nova_compute[243452]: 2026-02-28 10:35:19.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:20 compute-0 ceph-mon[76304]: pgmap v2129: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.772 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.774 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No event matching network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 in dict_keys([('network-vif-plugged', 'd681366d-e6b5-4dad-847e-d091bc7b112d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.775 243456 WARNING nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with vm_state building and task_state spawning.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.776 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Processing event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.777 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 DEBUG oslo_concurrency.lockutils [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 DEBUG nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.778 243456 WARNING nova.compute.manager [req-858d4e9f-ead2-498f-8979-4a3c6d402b10 req-34e543a6-0f96-4d3e-b20f-bc01af42c40c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state building and task_state spawning.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.779 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.784 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274920.7842233, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Resumed (Lifecycle Event)
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.787 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.792 243456 INFO nova.virt.libvirt.driver [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance spawned successfully.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.792 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.820 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.826 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.827 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.828 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.828 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.829 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.830 243456 DEBUG nova.virt.libvirt.driver [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.841 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.891 243456 INFO nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 10.94 seconds to spawn the instance on the hypervisor.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.892 243456 DEBUG nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.971 243456 INFO nova.compute.manager [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 11.97 seconds to build instance.
Feb 28 10:35:20 compute-0 nova_compute[243452]: 2026-02-28 10:35:20.997 243456 DEBUG oslo_concurrency.lockutils [None req-f1ca718c-f7a2-4820-8bae-4aa6672bae46 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:21 compute-0 nova_compute[243452]: 2026-02-28 10:35:21.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:21 compute-0 ceph-mon[76304]: pgmap v2130: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 28 10:35:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 1.0 MiB/s wr, 159 op/s
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.327 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.328 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.350 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.436 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.437 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.444 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:35:23 compute-0 nova_compute[243452]: 2026-02-28 10:35:23.444 243456 INFO nova.compute.claims [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:35:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:23.864 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:23 compute-0 ceph-mon[76304]: pgmap v2131: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 1.0 MiB/s wr, 159 op/s
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.041 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654053878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.640 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.647 243456 DEBUG nova.compute.provider_tree [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.665 243456 DEBUG nova.scheduler.client.report [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.694 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.695 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.762 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.763 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.783 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.800 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:35:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 248 KiB/s wr, 148 op/s
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.898 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.900 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.900 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating image(s)
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.927 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3654053878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.953 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.980 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:24 compute-0 nova_compute[243452]: 2026-02-28 10:35:24.985 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.023 243456 DEBUG nova.policy [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.063 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.064 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.065 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.065 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.089 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.095 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b7d98834-924e-4fbd-a701-d22949f44f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.307 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b7d98834-924e-4fbd-a701-d22949f44f77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.378 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.471 243456 DEBUG nova.objects.instance [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.488 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Ensure instance console log exists: /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:25 compute-0 nova_compute[243452]: 2026-02-28 10:35:25.489 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:25 compute-0 ceph-mon[76304]: pgmap v2132: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 248 KiB/s wr, 148 op/s
Feb 28 10:35:26 compute-0 nova_compute[243452]: 2026-02-28 10:35:26.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:26 compute-0 nova_compute[243452]: 2026-02-28 10:35:26.764 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Successfully created port: f6a52694-af4a-4ecc-926c-b1867c375983 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:35:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 380 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 609 KiB/s wr, 154 op/s
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.578 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Successfully updated port: f6a52694-af4a-4ecc-926c-b1867c375983 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.600 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.676 243456 DEBUG nova.compute.manager [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.677 243456 DEBUG nova.compute.manager [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing instance network info cache due to event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.677 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.755 243456 DEBUG nova.compute.manager [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.756 243456 DEBUG nova.compute.manager [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.756 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.757 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.757 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:27 compute-0 nova_compute[243452]: 2026-02-28 10:35:27.786 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:35:27 compute-0 ceph-mon[76304]: pgmap v2133: 305 pgs: 305 active+clean; 380 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 609 KiB/s wr, 154 op/s
Feb 28 10:35:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 656 KiB/s wr, 155 op/s
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.851 243456 DEBUG nova.network.neutron [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.871 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.872 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance network_info: |[{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.873 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.874 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.880 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start _get_guest_xml network_info=[{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.891 243456 WARNING nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.907 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.908 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:35:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.912 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.913 243456 DEBUG nova.virt.libvirt.host [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.914 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.915 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.916 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.916 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.917 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.918 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.918 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.919 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.919 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.920 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.921 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.921 243456 DEBUG nova.virt.hardware [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:35:28 compute-0 nova_compute[243452]: 2026-02-28 10:35:28.928 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.134 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.135 243456 DEBUG nova.network.neutron [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:35:29
Feb 28 10:35:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:35:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:35:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'vms', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 10:35:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.153 243456 DEBUG oslo_concurrency.lockutils [req-e720d5ee-4367-4fc2-b3b0-65d9e76f550a req-c5d30e9a-1e04-4755-a0d4-592ea7025d3b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:35:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483494627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.559 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.597 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:29 compute-0 nova_compute[243452]: 2026-02-28 10:35:29.605 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:29 compute-0 ceph-mon[76304]: pgmap v2134: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 656 KiB/s wr, 155 op/s
Feb 28 10:35:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3483494627' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:35:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644296980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.140 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.143 243456 DEBUG nova.virt.libvirt.vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:24Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.143 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.145 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.147 243456 DEBUG nova.objects.instance [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.176 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <uuid>b7d98834-924e-4fbd-a701-d22949f44f77</uuid>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <name>instance-00000083</name>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1920589440</nova:name>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:35:28</nova:creationTime>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <nova:port uuid="f6a52694-af4a-4ecc-926c-b1867c375983">
Feb 28 10:35:30 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <system>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="serial">b7d98834-924e-4fbd-a701-d22949f44f77</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="uuid">b7d98834-924e-4fbd-a701-d22949f44f77</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </system>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <os>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </os>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <features>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </features>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b7d98834-924e-4fbd-a701-d22949f44f77_disk">
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b7d98834-924e-4fbd-a701-d22949f44f77_disk.config">
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </source>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:35:30 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:21:5e:ba"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <target dev="tapf6a52694-af"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/console.log" append="off"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <video>
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </video>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:35:30 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:35:30 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:35:30 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:35:30 compute-0 nova_compute[243452]: </domain>
Feb 28 10:35:30 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Preparing to wait for external event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.183 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.184 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.185 243456 DEBUG nova.virt.libvirt.vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:35:24Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.185 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.186 243456 DEBUG nova.network.os_vif_util [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.187 243456 DEBUG os_vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.188 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.189 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.189 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6a52694-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6a52694-af, col_values=(('external_ids', {'iface-id': 'f6a52694-af4a-4ecc-926c-b1867c375983', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:5e:ba', 'vm-uuid': 'b7d98834-924e-4fbd-a701-d22949f44f77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:30 compute-0 NetworkManager[49805]: <info>  [1772274930.1963] manager: (tapf6a52694-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.203 243456 INFO os_vif [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af')
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.218 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updated VIF entry in instance network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.219 243456 DEBUG nova.network.neutron [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.252 243456 DEBUG oslo_concurrency.lockutils [req-9c845327-4664-46e0-8453-fa123ab8102d req-9d9e340d-8109-4553-902c-5b611594f60c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.271 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.272 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.272 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:21:5e:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.273 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Using config drive
Feb 28 10:35:30 compute-0 nova_compute[243452]: 2026-02-28 10:35:30.301 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:35:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 28 10:35:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2644296980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.039 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Creating config drive at /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.047 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0b7vozgt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.188 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0b7vozgt" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.232 243456 DEBUG nova.storage.rbd_utils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b7d98834-924e-4fbd-a701-d22949f44f77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.238 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config b7d98834-924e-4fbd-a701-d22949f44f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.397 243456 DEBUG oslo_concurrency.processutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config b7d98834-924e-4fbd-a701-d22949f44f77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.398 243456 INFO nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deleting local config drive /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77/disk.config because it was imported into RBD.
Feb 28 10:35:31 compute-0 NetworkManager[49805]: <info>  [1772274931.4390] manager: (tapf6a52694-af): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Feb 28 10:35:31 compute-0 kernel: tapf6a52694-af: entered promiscuous mode
Feb 28 10:35:31 compute-0 ovn_controller[146846]: 2026-02-28T10:35:31Z|01320|binding|INFO|Claiming lport f6a52694-af4a-4ecc-926c-b1867c375983 for this chassis.
Feb 28 10:35:31 compute-0 ovn_controller[146846]: 2026-02-28T10:35:31Z|01321|binding|INFO|f6a52694-af4a-4ecc-926c-b1867c375983: Claiming fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.452 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:5e:ba 10.100.0.12'], port_security=['fa:16:3e:21:5e:ba 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b7d98834-924e-4fbd-a701-d22949f44f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7156ff74-6e4d-4300-84e4-6890f3b16e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6a52694-af4a-4ecc-926c-b1867c375983) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.453 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6a52694-af4a-4ecc-926c-b1867c375983 in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 bound to our chassis
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.455 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:31 compute-0 ovn_controller[146846]: 2026-02-28T10:35:31Z|01322|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 ovn-installed in OVS
Feb 28 10:35:31 compute-0 ovn_controller[146846]: 2026-02-28T10:35:31Z|01323|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 up in Southbound
Feb 28 10:35:31 compute-0 systemd-udevd[358701]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[108f6beb-28cf-4a5e-92c9-49ea81bab93f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 NetworkManager[49805]: <info>  [1772274931.4866] device (tapf6a52694-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:35:31 compute-0 NetworkManager[49805]: <info>  [1772274931.4872] device (tapf6a52694-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:35:31 compute-0 systemd-machined[209480]: New machine qemu-164-instance-00000083.
Feb 28 10:35:31 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cbb520-ef39-48ef-8f02-50b9a833de2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.516 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[41e244bd-37e7-4374-9049-87d1c83e3c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.545 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b1b112-ae1d-47ba-bfda-72bf5a32ba04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.564 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c520d69-4340-4283-a88f-a55cf4cca3c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358715, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.590 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[132d835d-ee8f-416c-ae99-c3904704cab0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640284, 'tstamp': 640284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358717, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640286, 'tstamp': 640286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358717, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.592 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.595 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.595 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:31.597 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.793 243456 DEBUG nova.compute.manager [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG oslo_concurrency.lockutils [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:31 compute-0 nova_compute[243452]: 2026-02-28 10:35:31.794 243456 DEBUG nova.compute.manager [req-8a64292b-98aa-4a62-ae62-f284c20c32d6 req-33e73449-2d07-4cb7-94e4-68a8b28a79fa 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Processing event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:35:31 compute-0 ceph-mon[76304]: pgmap v2135: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.266 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.268 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.2652206, b7d98834-924e-4fbd-a701-d22949f44f77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.269 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Started (Lifecycle Event)
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.273 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.277 243456 INFO nova.virt.libvirt.driver [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance spawned successfully.
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.278 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.295 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.302 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.307 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.308 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.308 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.309 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.309 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.310 243456 DEBUG nova.virt.libvirt.driver [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.336 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.337 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.272231, b7d98834-924e-4fbd-a701-d22949f44f77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.337 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Paused (Lifecycle Event)
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.358 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.362 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274932.2727046, b7d98834-924e-4fbd-a701-d22949f44f77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.362 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Resumed (Lifecycle Event)
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.374 243456 INFO nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 7.48 seconds to spawn the instance on the hypervisor.
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.375 243456 DEBUG nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.418 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.446 243456 INFO nova.compute.manager [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 9.04 seconds to build instance.
Feb 28 10:35:32 compute-0 nova_compute[243452]: 2026-02-28 10:35:32.462 243456 DEBUG oslo_concurrency.lockutils [None req-393219ec-657e-4343-97c6-b49f3b8b2caa ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.841 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.887 243456 DEBUG nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG oslo_concurrency.lockutils [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 DEBUG nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:33 compute-0 nova_compute[243452]: 2026-02-28 10:35:33.888 243456 WARNING nova.compute.manager [req-0417454d-ec34-4e5e-9f09-0e79ae2ae1cc req-91a410bb-cdf8-42fc-88c5-71b3a694aeb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received unexpected event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with vm_state active and task_state None.
Feb 28 10:35:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:33 compute-0 ceph-mon[76304]: pgmap v2136: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Feb 28 10:35:34 compute-0 nova_compute[243452]: 2026-02-28 10:35:34.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 414 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:35:34 compute-0 ovn_controller[146846]: 2026-02-28T10:35:34Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 10:35:34 compute-0 ovn_controller[146846]: 2026-02-28T10:35:34Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:7a:c9 10.100.0.13
Feb 28 10:35:35 compute-0 nova_compute[243452]: 2026-02-28 10:35:35.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:35 compute-0 nova_compute[243452]: 2026-02-28 10:35:35.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:35 compute-0 sudo[358762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:35 compute-0 sudo[358762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:35 compute-0 sudo[358762]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:35 compute-0 sudo[358787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:35:35 compute-0 sudo[358787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:35 compute-0 ceph-mon[76304]: pgmap v2137: 305 pgs: 305 active+clean; 414 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:35:36 compute-0 nova_compute[243452]: 2026-02-28 10:35:36.016 243456 DEBUG nova.compute.manager [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:36 compute-0 nova_compute[243452]: 2026-02-28 10:35:36.018 243456 DEBUG nova.compute.manager [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing instance network info cache due to event network-changed-f6a52694-af4a-4ecc-926c-b1867c375983. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:36 compute-0 nova_compute[243452]: 2026-02-28 10:35:36.018 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:36 compute-0 nova_compute[243452]: 2026-02-28 10:35:36.020 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:36 compute-0 nova_compute[243452]: 2026-02-28 10:35:36.020 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Refreshing network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:36 compute-0 sudo[358787]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:36 compute-0 sudo[358844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:36 compute-0 sudo[358844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:36 compute-0 sudo[358844]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:36 compute-0 sudo[358869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 28 10:35:36 compute-0 sudo[358869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:36 compute-0 sudo[358869]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:35:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:35:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:36 compute-0 sudo[358913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:36 compute-0 sudo[358913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:36 compute-0 sudo[358913]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 430 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 10:35:36 compute-0 sudo[358938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- inventory --format=json-pretty --filter-for-batch
Feb 28 10:35:36 compute-0 sudo[358938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.214524887 +0000 UTC m=+0.055247987 container create a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:35:37 compute-0 nova_compute[243452]: 2026-02-28 10:35:37.223 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updated VIF entry in instance network info cache for port f6a52694-af4a-4ecc-926c-b1867c375983. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:37 compute-0 nova_compute[243452]: 2026-02-28 10:35:37.224 243456 DEBUG nova.network.neutron [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [{"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:37 compute-0 nova_compute[243452]: 2026-02-28 10:35:37.252 243456 DEBUG oslo_concurrency.lockutils [req-2e0b5240-64b2-4766-b056-abbbf018eeae req-028cf98a-59f7-43cd-8a9d-00a16d81fa06 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b7d98834-924e-4fbd-a701-d22949f44f77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:37 compute-0 systemd[1]: Started libpod-conmon-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope.
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.194704765 +0000 UTC m=+0.035427885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.307807039 +0000 UTC m=+0.148530159 container init a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.316508595 +0000 UTC m=+0.157231705 container start a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:35:37 compute-0 systemd[1]: libpod-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope: Deactivated successfully.
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.324632296 +0000 UTC m=+0.165355436 container attach a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:35:37 compute-0 tender_mahavira[358991]: 167 167
Feb 28 10:35:37 compute-0 conmon[358991]: conmon a3a772c3db91bd5050bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope/container/memory.events
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.328647589 +0000 UTC m=+0.169370699 container died a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b8dfa4ad5c05acdcfc61890be661bbf6614484bfa9ca86a9ab461fa8553dc2-merged.mount: Deactivated successfully.
Feb 28 10:35:37 compute-0 podman[358975]: 2026-02-28 10:35:37.37349386 +0000 UTC m=+0.214216950 container remove a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:35:37 compute-0 systemd[1]: libpod-conmon-a3a772c3db91bd5050bcd540f6adee88d6f018cdc3902c3690dd0bad48cba8be.scope: Deactivated successfully.
Feb 28 10:35:37 compute-0 podman[359015]: 2026-02-28 10:35:37.529861479 +0000 UTC m=+0.048293189 container create 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:35:37 compute-0 systemd[1]: Started libpod-conmon-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope.
Feb 28 10:35:37 compute-0 podman[359015]: 2026-02-28 10:35:37.5048237 +0000 UTC m=+0.023255400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:37 compute-0 podman[359015]: 2026-02-28 10:35:37.646640067 +0000 UTC m=+0.165071787 container init 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:35:37 compute-0 podman[359015]: 2026-02-28 10:35:37.652589216 +0000 UTC m=+0.171020916 container start 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:35:37 compute-0 podman[359015]: 2026-02-28 10:35:37.656545738 +0000 UTC m=+0.174977428 container attach 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:35:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:37 compute-0 ceph-mon[76304]: pgmap v2138: 305 pgs: 305 active+clean; 430 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]: [
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:     {
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "available": false,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "being_replaced": false,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "ceph_device_lvm": false,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "lsm_data": {},
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "lvs": [],
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "path": "/dev/sr0",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "rejected_reasons": [
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "Insufficient space (<5GB)",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "Has a FileSystem"
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         ],
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         "sys_api": {
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "actuators": null,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "device_nodes": [
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:                 "sr0"
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             ],
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "devname": "sr0",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "human_readable_size": "482.00 KB",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "id_bus": "ata",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "model": "QEMU DVD-ROM",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "nr_requests": "2",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "parent": "/dev/sr0",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "partitions": {},
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "path": "/dev/sr0",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "removable": "1",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "rev": "2.5+",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "ro": "0",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "rotational": "1",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "sas_address": "",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "sas_device_handle": "",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "scheduler_mode": "mq-deadline",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "sectors": 0,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "sectorsize": "2048",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "size": 493568.0,
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "support_discard": "2048",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "type": "disk",
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:             "vendor": "QEMU"
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:         }
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]:     }
Feb 28 10:35:38 compute-0 condescending_bhabha[359032]: ]
Feb 28 10:35:38 compute-0 systemd[1]: libpod-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope: Deactivated successfully.
Feb 28 10:35:38 compute-0 podman[359015]: 2026-02-28 10:35:38.268813562 +0000 UTC m=+0.787245242 container died 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c212b0124df158038bd7c4373e8ff5cb4ee7f220ea912fd19a718ac7d3e2dadc-merged.mount: Deactivated successfully.
Feb 28 10:35:38 compute-0 nova_compute[243452]: 2026-02-28 10:35:38.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:38 compute-0 podman[359015]: 2026-02-28 10:35:38.326805624 +0000 UTC m=+0.845237304 container remove 9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:35:38 compute-0 systemd[1]: libpod-conmon-9e5f9a14eac468770ef3f1c8ac8bd54e05780f7b29b6dd7bb17e53b6e0eb5313.scope: Deactivated successfully.
Feb 28 10:35:38 compute-0 sudo[358938]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:35:38 compute-0 podman[359855]: 2026-02-28 10:35:38.416710641 +0000 UTC m=+0.099991043 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:35:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:35:38 compute-0 podman[359846]: 2026-02-28 10:35:38.43360931 +0000 UTC m=+0.126775412 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 10:35:38 compute-0 sudo[359902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:38 compute-0 sudo[359902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:38 compute-0 sudo[359902]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:38 compute-0 sudo[359928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:35:38 compute-0 sudo[359928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.833153348 +0000 UTC m=+0.054300179 container create c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:35:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 163 op/s
Feb 28 10:35:38 compute-0 systemd[1]: Started libpod-conmon-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope.
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.805306419 +0000 UTC m=+0.026453300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.935750924 +0000 UTC m=+0.156897815 container init c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.944524043 +0000 UTC m=+0.165670854 container start c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.947714053 +0000 UTC m=+0.168860964 container attach c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:35:38 compute-0 serene_bose[359982]: 167 167
Feb 28 10:35:38 compute-0 systemd[1]: libpod-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope: Deactivated successfully.
Feb 28 10:35:38 compute-0 conmon[359982]: conmon c5054d18fbcf2e83c261 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope/container/memory.events
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.952463368 +0000 UTC m=+0.173610209 container died c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-530bd5c9f50dbb13b9ff04eae7a6c92ad79fad44736ad559be153dc05f5e7204-merged.mount: Deactivated successfully.
Feb 28 10:35:38 compute-0 podman[359965]: 2026-02-28 10:35:38.993284694 +0000 UTC m=+0.214431525 container remove c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:35:39 compute-0 systemd[1]: libpod-conmon-c5054d18fbcf2e83c26136fb7038473557045402cf4dec184817ff3d4da99da0.scope: Deactivated successfully.
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.209816988 +0000 UTC m=+0.056015658 container create 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:35:39 compute-0 systemd[1]: Started libpod-conmon-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope.
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.186695563 +0000 UTC m=+0.032894263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.311391945 +0000 UTC m=+0.157590655 container init 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 10:35:39 compute-0 nova_compute[243452]: 2026-02-28 10:35:39.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.320567305 +0000 UTC m=+0.166765965 container start 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.325551456 +0000 UTC m=+0.171750196 container attach 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:35:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:35:39 compute-0 nova_compute[243452]: 2026-02-28 10:35:39.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:39 compute-0 mystifying_murdock[360024]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:35:39 compute-0 mystifying_murdock[360024]: --> All data devices are unavailable
Feb 28 10:35:39 compute-0 systemd[1]: libpod-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope: Deactivated successfully.
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.802852037 +0000 UTC m=+0.649050697 container died 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffab7d5c250874486c5d828c486c700354b874c430f27f63a038de6cd8f0e0ac-merged.mount: Deactivated successfully.
Feb 28 10:35:39 compute-0 podman[360007]: 2026-02-28 10:35:39.845414863 +0000 UTC m=+0.691613523 container remove 905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_murdock, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:35:39 compute-0 systemd[1]: libpod-conmon-905eb9c858ce002f0c0f1cd23d7892daeae7106f5d37baf888e63db47c5e81be.scope: Deactivated successfully.
Feb 28 10:35:39 compute-0 sudo[359928]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:39 compute-0 sudo[360054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:39 compute-0 sudo[360054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:39 compute-0 sudo[360054]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:40 compute-0 sudo[360079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:35:40 compute-0 sudo[360079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:40 compute-0 nova_compute[243452]: 2026-02-28 10:35:40.202 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:40 compute-0 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:40 compute-0 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:40 compute-0 nova_compute[243452]: 2026-02-28 10:35:40.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.365945237 +0000 UTC m=+0.058444327 container create 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:35:40 compute-0 systemd[1]: Started libpod-conmon-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope.
Feb 28 10:35:40 compute-0 ceph-mon[76304]: pgmap v2139: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 163 op/s
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.344269283 +0000 UTC m=+0.036768363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.471656351 +0000 UTC m=+0.164155431 container init 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.480960455 +0000 UTC m=+0.173459565 container start 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.485271707 +0000 UTC m=+0.177770797 container attach 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:35:40 compute-0 nifty_swanson[360134]: 167 167
Feb 28 10:35:40 compute-0 systemd[1]: libpod-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope: Deactivated successfully.
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.488941201 +0000 UTC m=+0.181440341 container died 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e99647b06ae26ff0596cb2763ab89a66ee1933f41bc524e00abee55de202676-merged.mount: Deactivated successfully.
Feb 28 10:35:40 compute-0 podman[360118]: 2026-02-28 10:35:40.535822929 +0000 UTC m=+0.228321999 container remove 01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:35:40 compute-0 systemd[1]: libpod-conmon-01c10fc256e62fc07529dd3110b765669772d1b384b6a847bd8cf3e87bc47672.scope: Deactivated successfully.
Feb 28 10:35:40 compute-0 podman[360158]: 2026-02-28 10:35:40.722106166 +0000 UTC m=+0.050782470 container create d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 28 10:35:40 compute-0 systemd[1]: Started libpod-conmon-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope.
Feb 28 10:35:40 compute-0 podman[360158]: 2026-02-28 10:35:40.701824931 +0000 UTC m=+0.030501245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:40 compute-0 podman[360158]: 2026-02-28 10:35:40.819860905 +0000 UTC m=+0.148537199 container init d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:35:40 compute-0 podman[360158]: 2026-02-28 10:35:40.825168405 +0000 UTC m=+0.153844699 container start d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:35:40 compute-0 podman[360158]: 2026-02-28 10:35:40.828546521 +0000 UTC m=+0.157222815 container attach d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:35:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 28 10:35:41 compute-0 great_mestorf[360175]: {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     "0": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "devices": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "/dev/loop3"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             ],
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_name": "ceph_lv0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_size": "21470642176",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "name": "ceph_lv0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "tags": {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.crush_device_class": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.encrypted": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_id": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.vdo": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.with_tpm": "0"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             },
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "vg_name": "ceph_vg0"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         }
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     ],
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     "1": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "devices": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "/dev/loop4"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             ],
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_name": "ceph_lv1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_size": "21470642176",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "name": "ceph_lv1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "tags": {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.crush_device_class": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.encrypted": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_id": "1",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.vdo": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.with_tpm": "0"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             },
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "vg_name": "ceph_vg1"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         }
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     ],
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     "2": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "devices": [
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "/dev/loop5"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             ],
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_name": "ceph_lv2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_size": "21470642176",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "name": "ceph_lv2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "tags": {
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.crush_device_class": "",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.encrypted": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osd_id": "2",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.vdo": "0",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:                 "ceph.with_tpm": "0"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             },
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "type": "block",
Feb 28 10:35:41 compute-0 great_mestorf[360175]:             "vg_name": "ceph_vg2"
Feb 28 10:35:41 compute-0 great_mestorf[360175]:         }
Feb 28 10:35:41 compute-0 great_mestorf[360175]:     ]
Feb 28 10:35:41 compute-0 great_mestorf[360175]: }
Feb 28 10:35:41 compute-0 systemd[1]: libpod-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope: Deactivated successfully.
Feb 28 10:35:41 compute-0 conmon[360175]: conmon d8f3d3efe154f3d36a39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope/container/memory.events
Feb 28 10:35:41 compute-0 podman[360158]: 2026-02-28 10:35:41.152622061 +0000 UTC m=+0.481298365 container died d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f36de4d6ed877ff3dad9c907e79de2ce39ce96af776e7e483a6d9dea690aeb-merged.mount: Deactivated successfully.
Feb 28 10:35:41 compute-0 podman[360158]: 2026-02-28 10:35:41.2199963 +0000 UTC m=+0.548672634 container remove d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026378167702551347 of space, bias 1.0, pg target 0.7913450310765404 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938661775652953 of space, bias 1.0, pg target 0.7481598532695886 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.339634528622432e-07 of space, bias 4.0, pg target 0.0008807561434346919 quantized to 16 (current 16)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:35:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:35:41 compute-0 systemd[1]: libpod-conmon-d8f3d3efe154f3d36a3990da3d7e56f60e7fe6c2e56a0665414acedf045abf27.scope: Deactivated successfully.
Feb 28 10:35:41 compute-0 sudo[360079]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:41 compute-0 sudo[360196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:35:41 compute-0 sudo[360196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:41 compute-0 sudo[360196]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:41 compute-0 sudo[360221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:35:41 compute-0 sudo[360221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.758967368 +0000 UTC m=+0.050255135 container create eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:35:41 compute-0 systemd[1]: Started libpod-conmon-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope.
Feb 28 10:35:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.742912093 +0000 UTC m=+0.034199900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.842825253 +0000 UTC m=+0.134113040 container init eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.850630184 +0000 UTC m=+0.141917961 container start eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.854818703 +0000 UTC m=+0.146106500 container attach eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:35:41 compute-0 recursing_greider[360275]: 167 167
Feb 28 10:35:41 compute-0 systemd[1]: libpod-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope: Deactivated successfully.
Feb 28 10:35:41 compute-0 conmon[360275]: conmon eff3e5f6a458e743b082 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope/container/memory.events
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.857215001 +0000 UTC m=+0.148502798 container died eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-324704176df6e23636e319956af41dad779dc0ae7fd7c3dcc247d2ac2fb9788e-merged.mount: Deactivated successfully.
Feb 28 10:35:41 compute-0 podman[360258]: 2026-02-28 10:35:41.906460126 +0000 UTC m=+0.197747903 container remove eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:35:41 compute-0 systemd[1]: libpod-conmon-eff3e5f6a458e743b08287f6c21a538f6e78e6095b92b2d97a276540000e03f4.scope: Deactivated successfully.
Feb 28 10:35:42 compute-0 podman[360299]: 2026-02-28 10:35:42.061655112 +0000 UTC m=+0.041728263 container create 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:35:42 compute-0 systemd[1]: Started libpod-conmon-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope.
Feb 28 10:35:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:35:42 compute-0 podman[360299]: 2026-02-28 10:35:42.131695156 +0000 UTC m=+0.111768307 container init 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:35:42 compute-0 podman[360299]: 2026-02-28 10:35:42.139198539 +0000 UTC m=+0.119271730 container start 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:35:42 compute-0 podman[360299]: 2026-02-28 10:35:42.04462971 +0000 UTC m=+0.024702891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:35:42 compute-0 podman[360299]: 2026-02-28 10:35:42.143196552 +0000 UTC m=+0.123269703 container attach 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:35:42 compute-0 ceph-mon[76304]: pgmap v2140: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 28 10:35:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 10:35:42 compute-0 lvm[360394]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:35:42 compute-0 lvm[360395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:35:42 compute-0 lvm[360395]: VG ceph_vg0 finished
Feb 28 10:35:42 compute-0 lvm[360394]: VG ceph_vg1 finished
Feb 28 10:35:42 compute-0 lvm[360397]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:35:42 compute-0 lvm[360397]: VG ceph_vg2 finished
Feb 28 10:35:43 compute-0 zen_shockley[360316]: {}
Feb 28 10:35:43 compute-0 systemd[1]: libpod-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Deactivated successfully.
Feb 28 10:35:43 compute-0 systemd[1]: libpod-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Consumed 1.310s CPU time.
Feb 28 10:35:43 compute-0 podman[360299]: 2026-02-28 10:35:43.061504915 +0000 UTC m=+1.041578086 container died 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b63b0a9429f6d27f5331b06b6b711aca282462b371d79f2adcdf36949008d4cd-merged.mount: Deactivated successfully.
Feb 28 10:35:43 compute-0 podman[360299]: 2026-02-28 10:35:43.121218897 +0000 UTC m=+1.101292078 container remove 03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shockley, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:35:43 compute-0 systemd[1]: libpod-conmon-03e1a14ebdc5549b4bff8526e8d2a657989a5d46fc14c4eea531d46123918b4f.scope: Deactivated successfully.
Feb 28 10:35:43 compute-0 sudo[360221]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:35:43 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:35:43 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:43 compute-0 sudo[360412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:35:43 compute-0 sudo[360412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:35:43 compute-0 sudo[360412]: pam_unix(sudo:session): session closed for user root
Feb 28 10:35:43 compute-0 nova_compute[243452]: 2026-02-28 10:35:43.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:43 compute-0 nova_compute[243452]: 2026-02-28 10:35:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:44 compute-0 ceph-mon[76304]: pgmap v2141: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 28 10:35:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:35:44 compute-0 nova_compute[243452]: 2026-02-28 10:35:44.418 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 28 10:35:45 compute-0 nova_compute[243452]: 2026-02-28 10:35:45.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:35:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:35:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:35:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:35:46 compute-0 ceph-mon[76304]: pgmap v2142: 305 pgs: 305 active+clean; 438 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 28 10:35:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:35:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1958862523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.351 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Feb 28 10:35:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267820184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:46 compute-0 nova_compute[243452]: 2026-02-28 10:35:46.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.087 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.088 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.095 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.096 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.104 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.106 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.112 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.113 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:35:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1267820184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.247 243456 DEBUG nova.compute.manager [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.248 243456 DEBUG nova.compute.manager [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing instance network info cache due to event network-changed-a81e3b75-649b-4321-b436-ab01ab0a9e05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.248 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.249 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.249 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Refreshing network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.357 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.357 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.358 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.359 243456 INFO nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Terminating instance
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.360 243456 DEBUG nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.378 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2903MB free_disk=59.8299172706902GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.379 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:5e:ba 10.100.0.12
Feb 28 10:35:47 compute-0 kernel: tapa81e3b75-64 (unregistering): left promiscuous mode
Feb 28 10:35:47 compute-0 NetworkManager[49805]: <info>  [1772274947.4181] device (tapa81e3b75-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01324|binding|INFO|Releasing lport a81e3b75-649b-4321-b436-ab01ab0a9e05 from this chassis (sb_readonly=0)
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01325|binding|INFO|Setting lport a81e3b75-649b-4321-b436-ab01ab0a9e05 down in Southbound
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01326|binding|INFO|Removing iface tapa81e3b75-64 ovn-installed in OVS
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.430 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 kernel: tapd681366d-e6 (unregistering): left promiscuous mode
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.439 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:7a:c9 10.100.0.13'], port_security=['fa:16:3e:2e:7a:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=a81e3b75-649b-4321-b436-ab01ab0a9e05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.441 156681 INFO neutron.agent.ovn.metadata.agent [-] Port a81e3b75-649b-4321-b436-ab01ab0a9e05 in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b unbound from our chassis
Feb 28 10:35:47 compute-0 NetworkManager[49805]: <info>  [1772274947.4420] device (tapd681366d-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.444 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01327|binding|INFO|Releasing lport d681366d-e6b5-4dad-847e-d091bc7b112d from this chassis (sb_readonly=0)
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01328|binding|INFO|Setting lport d681366d-e6b5-4dad-847e-d091bc7b112d down in Southbound
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 ovn_controller[146846]: 2026-02-28T10:35:47Z|01329|binding|INFO|Removing iface tapd681366d-e6 ovn-installed in OVS
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.471 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], port_security=['fa:16:3e:bb:a7:ba 2001:db8::f816:3eff:febb:a7ba'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febb:a7ba/64', 'neutron:device_id': 'f13d8adc-1a08-412b-a9fa-c8a601cda923', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d681366d-e6b5-4dad-847e-d091bc7b112d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.471 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2974f0d-9e9c-4676-a65e-ec7de6b7bb6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 29ebb761-c674-4ed1-aae0-554adf945402 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 367042aa-0043-4283-a399-ea4a6a1545f7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.485 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f13d8adc-1a08-412b-a9fa-c8a601cda923 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b7d98834-924e-4fbd-a701-d22949f44f77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.486 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:35:47 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Feb 28 10:35:47 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 14.526s CPU time.
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.511 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5e169f25-7790-4455-bece-a618d731adcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 systemd-machined[209480]: Machine qemu-163-instance-00000082 terminated.
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.516 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dfde8eec-8043-4a4c-8c19-61024c2147f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.542 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f3778-9906-478b-af0b-26f26533d28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.557 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1827305b-afa8-4aff-a585-3751334b02d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d7aad4f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:af:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 387], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639197, 'reachable_time': 19551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360475, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.572 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.576 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[92327584-c0ef-469a-ba75-802bca269e8e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639207, 'tstamp': 639207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360476, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d7aad4f-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639210, 'tstamp': 639210}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360476, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.579 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 NetworkManager[49805]: <info>  [1772274947.5830] manager: (tapa81e3b75-64): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Feb 28 10:35:47 compute-0 NetworkManager[49805]: <info>  [1772274947.5937] manager: (tapd681366d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.598 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d7aad4f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.599 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.599 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d7aad4f-10, col_values=(('external_ids', {'iface-id': '99dd359f-3ab9-477c-a58c-1c56298be9c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.600 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.602 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d681366d-e6b5-4dad-847e-d091bc7b112d in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.604 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.622 243456 INFO nova.virt.libvirt.driver [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Instance destroyed successfully.
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.623 243456 DEBUG nova.objects.instance [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f13d8adc-1a08-412b-a9fa-c8a601cda923 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.624 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[432b32a1-4dba-40f7-84bc-dece40d82063]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.640 243456 DEBUG nova.virt.libvirt.vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:20Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.641 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.642 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.643 243456 DEBUG os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.646 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa81e3b75-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.649 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.651 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.654 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.655 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a925e197-ea54-4d81-b77f-33e8784daf5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.656 243456 INFO os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:7a:c9,bridge_name='br-int',has_traffic_filtering=True,id=a81e3b75-649b-4321-b436-ab01ab0a9e05,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa81e3b75-64')
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.657 243456 DEBUG nova.virt.libvirt.vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1382104323',display_name='tempest-TestGettingAddress-server-1382104323',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1382104323',id=130,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-kd4ju8yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:20Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f13d8adc-1a08-412b-a9fa-c8a601cda923,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.658 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.658 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc020c2-a2a5-4799-a01a-0e8ee0d9bc8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.659 243456 DEBUG nova.network.os_vif_util [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.659 243456 DEBUG os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.660 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.660 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd681366d-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.662 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.667 243456 INFO os_vif [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:a7:ba,bridge_name='br-int',has_traffic_filtering=True,id=d681366d-e6b5-4dad-847e-d091bc7b112d,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd681366d-e6')
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.687 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2404f36f-2570-419a-8179-1f5698c751d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.704 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4643291e-3abc-4fc0-be53-66000054f70f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49ec66b0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:61:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2612, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639285, 'reachable_time': 43442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360517, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7aa397-d06e-42b4-a3e6-2c3b7cee3a3e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49ec66b0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639299, 'tstamp': 639299}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360523, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.728 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.732 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49ec66b0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.733 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.733 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49ec66b0-80, col_values=(('external_ids', {'iface-id': '0d93ffc1-1158-4b54-b2c1-6b7d48d62d16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:47.734 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.969 243456 INFO nova.virt.libvirt.driver [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deleting instance files /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923_del
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.970 243456 INFO nova.virt.libvirt.driver [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deletion of /var/lib/nova/instances/f13d8adc-1a08-412b-a9fa-c8a601cda923_del complete
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.977 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG oslo_concurrency.lockutils [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:47 compute-0 nova_compute[243452]: 2026-02-28 10:35:47.978 243456 DEBUG nova.compute.manager [req-7af90231-dd3e-4d08-98ad-aad6b436f289 req-3e4ef0d2-ed2c-4899-8f9a-56f8fee9b122 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.024 243456 INFO nova.compute.manager [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.026 243456 DEBUG oslo.service.loopingcall [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.027 243456 DEBUG nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.028 243456 DEBUG nova.network.neutron [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:35:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789579066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.162 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.169 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.191 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:48 compute-0 ceph-mon[76304]: pgmap v2143: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Feb 28 10:35:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/789579066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.228 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.229 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 458 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 10:35:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.964 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updated VIF entry in instance network info cache for port a81e3b75-649b-4321-b436-ab01ab0a9e05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.965 243456 DEBUG nova.network.neutron [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "address": "fa:16:3e:2e:7a:c9", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa81e3b75-64", "ovs_interfaceid": "a81e3b75-649b-4321-b436-ab01ab0a9e05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:48 compute-0 nova_compute[243452]: 2026-02-28 10:35:48.990 243456 DEBUG oslo_concurrency.lockutils [req-9437197d-8555-4a42-a3a4-00b5c4379c01 req-b2ade040-bc72-438b-a676-e7fbdacc314b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f13d8adc-1a08-412b-a9fa-c8a601cda923" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.230 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.231 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.231 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.265 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.343 243456 DEBUG nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-deleted-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.344 243456 INFO nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Neutron deleted interface a81e3b75-649b-4321-b436-ab01ab0a9e05; detaching it from the instance and deleting it from the info cache
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.344 243456 DEBUG nova.network.neutron [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [{"id": "d681366d-e6b5-4dad-847e-d091bc7b112d", "address": "fa:16:3e:bb:a7:ba", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:a7ba", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd681366d-e6", "ovs_interfaceid": "d681366d-e6b5-4dad-847e-d091bc7b112d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.348 243456 DEBUG nova.network.neutron [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.364 243456 DEBUG nova.compute.manager [req-baed6973-e289-464f-ad19-000ddbef3569 req-c91d79c2-48f6-4129-806f-0ecfbcef0816 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Detach interface failed, port_id=a81e3b75-649b-4321-b436-ab01ab0a9e05, reason: Instance f13d8adc-1a08-412b-a9fa-c8a601cda923 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.367 243456 INFO nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Took 1.34 seconds to deallocate network for instance.
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.406 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.407 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.422 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.430 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.430 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.431 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.431 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:49 compute-0 nova_compute[243452]: 2026-02-28 10:35:49.496 243456 DEBUG oslo_concurrency.processutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398810676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.086 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.086 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-a81e3b75-649b-4321-b436-ab01ab0a9e05 for instance with vm_state deleted and task_state None.
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.087 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-unplugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state deleted and task_state None.
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.088 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG oslo_concurrency.lockutils [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 DEBUG nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] No waiting events found dispatching network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.089 243456 WARNING nova.compute.manager [req-03c84bfc-7256-4f9f-b9ea-96bafe990674 req-86bc0d81-525c-4ab9-8c40-1503451f37a9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received unexpected event network-vif-plugged-d681366d-e6b5-4dad-847e-d091bc7b112d for instance with vm_state deleted and task_state None.
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.100 243456 DEBUG oslo_concurrency.processutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.107 243456 DEBUG nova.compute.provider_tree [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.127 243456 DEBUG nova.scheduler.client.report [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.169 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.196 243456 INFO nova.scheduler.client.report [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f13d8adc-1a08-412b-a9fa-c8a601cda923
Feb 28 10:35:50 compute-0 ceph-mon[76304]: pgmap v2144: 305 pgs: 305 active+clean; 458 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Feb 28 10:35:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2398810676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:50 compute-0 nova_compute[243452]: 2026-02-28 10:35:50.270 243456 DEBUG oslo_concurrency.lockutils [None req-a43202d3-c11b-460f-ad40-ad559cd3abc7 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f13d8adc-1a08-412b-a9fa-c8a601cda923" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.476 243456 DEBUG nova.compute.manager [req-482fcf4d-a8f3-4373-a81e-334275295f23 req-bae0e608-0d2d-4c32-a194-09c99b101614 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Received event network-vif-deleted-d681366d-e6b5-4dad-847e-d091bc7b112d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.511 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.512 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.512 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.513 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.513 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.515 243456 INFO nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Terminating instance
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.518 243456 DEBUG nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:35:51 compute-0 kernel: tap8b2cb81f-77 (unregistering): left promiscuous mode
Feb 28 10:35:51 compute-0 NetworkManager[49805]: <info>  [1772274951.8318] device (tap8b2cb81f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01330|binding|INFO|Releasing lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b from this chassis (sb_readonly=0)
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01331|binding|INFO|Setting lport 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b down in Southbound
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01332|binding|INFO|Removing iface tap8b2cb81f-77 ovn-installed in OVS
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.851 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:72:96 10.100.0.3'], port_security=['fa:16:3e:12:72:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d65e56d1-78ed-40b2-a041-17e977e92cba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.853 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b in datapath 6d7aad4f-1a53-4b74-a216-4cac4be4283b unbound from our chassis
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.855 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d7aad4f-1a53-4b74-a216-4cac4be4283b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.856 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0aa2ba-57d5-4897-b3f8-b1fbd3b8c16a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.857 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b namespace which is not needed anymore
Feb 28 10:35:51 compute-0 kernel: tap8f25c48f-b2 (unregistering): left promiscuous mode
Feb 28 10:35:51 compute-0 NetworkManager[49805]: <info>  [1772274951.8645] device (tap8f25c48f-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01333|binding|INFO|Releasing lport 8f25c48f-b281-4784-a6b0-a2662d928d28 from this chassis (sb_readonly=0)
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01334|binding|INFO|Setting lport 8f25c48f-b281-4784-a6b0-a2662d928d28 down in Southbound
Feb 28 10:35:51 compute-0 ovn_controller[146846]: 2026-02-28T10:35:51Z|01335|binding|INFO|Removing iface tap8f25c48f-b2 ovn-installed in OVS
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 nova_compute[243452]: 2026-02-28 10:35:51.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:51.886 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], port_security=['fa:16:3e:9f:88:f4 2001:db8::f816:3eff:fe9f:88f4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9f:88f4/64', 'neutron:device_id': '29ebb761-c674-4ed1-aae0-554adf945402', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8337edd-8f59-40cf-a216-2b5eec8b4c00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f7ae13e-602a-487f-8652-e7d0de9d97fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=8f25c48f-b281-4784-a6b0-a2662d928d28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:51 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Feb 28 10:35:51 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 15.259s CPU time.
Feb 28 10:35:51 compute-0 systemd-machined[209480]: Machine qemu-161-instance-00000080 terminated.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.011 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : haproxy version is 2.8.14-c23fe91
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [NOTICE]   (357343) : path to executable is /usr/sbin/haproxy
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : Exiting Master process...
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : Exiting Master process...
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [ALERT]    (357343) : Current worker (357345) exited with code 143 (Terminated)
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b[357339]: [WARNING]  (357343) : All workers exited. Exiting... (0)
Feb 28 10:35:52 compute-0 systemd[1]: libpod-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope: Deactivated successfully.
Feb 28 10:35:52 compute-0 podman[360595]: 2026-02-28 10:35:52.041613643 +0000 UTC m=+0.061150863 container died 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.040 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.042 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.047 243456 INFO nova.virt.libvirt.driver [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Instance destroyed successfully.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.047 243456 DEBUG nova.objects.instance [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 29ebb761-c674-4ed1-aae0-554adf945402 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.060 243456 DEBUG nova.virt.libvirt.vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.060 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.061 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.061 243456 DEBUG os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.063 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b2cb81f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.070 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.073 243456 INFO os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:72:96,bridge_name='br-int',has_traffic_filtering=True,id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b,network=Network(6d7aad4f-1a53-4b74-a216-4cac4be4283b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b2cb81f-77')
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.075 243456 DEBUG nova.virt.libvirt.vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1840268040',display_name='tempest-TestGettingAddress-server-1840268040',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1840268040',id=128,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLvbGTbrEfz9W2tnu/9jGE+Ib3UUNypoWhKLTWUiVUxUtyJAzaLx270DWUn96sPglPAR/qjjr3PTH3Chgoe7aVB/RslRFNHrD7aVCbvYz4FzhoyO9zWK1x//nFLJoPGj0w==',key_name='tempest-TestGettingAddress-1669963629',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-6j90yoru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:45Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=29ebb761-c674-4ed1-aae0-554adf945402,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb-userdata-shm.mount: Deactivated successfully.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.076 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.077 243456 DEBUG nova.network.os_vif_util [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.078 243456 DEBUG os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-56367b1f032c1af58ec46ce8c0be39bfb51d1c49bf83d9e0c00051135fcf4571-merged.mount: Deactivated successfully.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.081 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f25c48f-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:52 compute-0 podman[360595]: 2026-02-28 10:35:52.084301773 +0000 UTC m=+0.103838993 container cleanup 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.084 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.087 243456 INFO os_vif [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:88:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f25c48f-b281-4784-a6b0-a2662d928d28,network=Network(49ec66b0-8f5d-445b-a7e6-7fd41e785d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f25c48f-b2')
Feb 28 10:35:52 compute-0 systemd[1]: libpod-conmon-53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb.scope: Deactivated successfully.
Feb 28 10:35:52 compute-0 podman[360658]: 2026-02-28 10:35:52.155310884 +0000 UTC m=+0.045691435 container remove 53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed6fba9-98c5-44b0-bb88-747df577ab7b]: (4, ('Sat Feb 28 10:35:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b (53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb)\n53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb\nSat Feb 28 10:35:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b (53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb)\n53e5e05462a4775caa7ef7d3674e1cc75185d4b1272bab7f24e80babaaf37ffb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.168 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[74e0035b-c938-463f-acaf-f0a52ee0ba32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.168 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing instance network info cache due to event network-changed-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.169 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d7aad4f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.169 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Refreshing network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:35:52 compute-0 kernel: tap6d7aad4f-10: left promiscuous mode
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.176 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[edfaa418-63c9-4c07-8638-130bcf21cba0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.193 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0872d0-9902-4a4f-a1f7-e3bde3dec828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.194 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4ba571-86bc-4709-a31f-229d0f48188e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.215 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bac4c47f-0ddc-449f-91d1-8a7c9b728786]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639190, 'reachable_time': 15837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360685, 'error': None, 'target': 'ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d7aad4f\x2d1a53\x2d4b74\x2da216\x2d4cac4be4283b.mount: Deactivated successfully.
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.218 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d7aad4f-1a53-4b74-a216-4cac4be4283b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.219 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[03cddc68-f2d4-4809-9592-1e74f066c752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.220 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 8f25c48f-b281-4784-a6b0-a2662d928d28 in datapath 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a unbound from our chassis
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.222 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.223 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54fa5f03-9d88-4d8e-80c0-6301f0815345]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.223 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a namespace which is not needed anymore
Feb 28 10:35:52 compute-0 ceph-mon[76304]: pgmap v2145: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : haproxy version is 2.8.14-c23fe91
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [NOTICE]   (357416) : path to executable is /usr/sbin/haproxy
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : Exiting Master process...
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : Exiting Master process...
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [ALERT]    (357416) : Current worker (357418) exited with code 143 (Terminated)
Feb 28 10:35:52 compute-0 neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a[357412]: [WARNING]  (357416) : All workers exited. Exiting... (0)
Feb 28 10:35:52 compute-0 systemd[1]: libpod-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope: Deactivated successfully.
Feb 28 10:35:52 compute-0 podman[360705]: 2026-02-28 10:35:52.34685431 +0000 UTC m=+0.041334392 container died 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.354 243456 INFO nova.virt.libvirt.driver [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deleting instance files /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402_del
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.355 243456 INFO nova.virt.libvirt.driver [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deletion of /var/lib/nova/instances/29ebb761-c674-4ed1-aae0-554adf945402_del complete
Feb 28 10:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817-userdata-shm.mount: Deactivated successfully.
Feb 28 10:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17053d1edc9fada6f68225e884d047e5575813d4682169f8b77e31a5f244466-merged.mount: Deactivated successfully.
Feb 28 10:35:52 compute-0 podman[360705]: 2026-02-28 10:35:52.379479774 +0000 UTC m=+0.073959856 container cleanup 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:35:52 compute-0 systemd[1]: libpod-conmon-892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817.scope: Deactivated successfully.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.426 243456 INFO nova.compute.manager [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 0.91 seconds to destroy the instance on the hypervisor.
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG oslo.service.loopingcall [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.427 243456 DEBUG nova.network.neutron [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:35:52 compute-0 podman[360734]: 2026-02-28 10:35:52.433788452 +0000 UTC m=+0.038874292 container remove 892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0d820c-4365-42a4-9aba-b57caecfe7f4]: (4, ('Sat Feb 28 10:35:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a (892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817)\n892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817\nSat Feb 28 10:35:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a (892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817)\n892a502cfb05eed25227823c231c7fbcea94a19eb0f2c269b570f112cf3ba817\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81c45e48-bdbc-46d8-b0a4-4eff10aad77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.441 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49ec66b0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 kernel: tap49ec66b0-80: left promiscuous mode
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.448 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc752214-6892-4d3b-a025-775a6c42e289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 nova_compute[243452]: 2026-02-28 10:35:52.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.465 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4b42315d-e188-44d6-b7d5-0211441d36fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.466 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c4e121-91bf-4dcf-8a8e-fb0b734405cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.484 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[345e1f78-c0e1-45db-b045-92f6f5d0b104]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639276, 'reachable_time': 23403, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360749, 'error': None, 'target': 'ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.486 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49ec66b0-8f5d-445b-a7e6-7fd41e785d9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:35:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:52.486 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[980b6d2b-7de3-438d-852b-cfa3986b3983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Feb 28 10:35:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d49ec66b0\x2d8f5d\x2d445b\x2da7e6\x2d7fd41e785d9a.mount: Deactivated successfully.
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.548 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.548 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.549 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG oslo_concurrency.lockutils [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.550 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 WARNING nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b for instance with vm_state active and task_state deleting.
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-deleted-8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 INFO nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Neutron deleted interface 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b; detaching it from the instance and deleting it from the info cache
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.551 243456 DEBUG nova.network.neutron [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:53 compute-0 nova_compute[243452]: 2026-02-28 10:35:53.573 243456 DEBUG nova.compute.manager [req-7bc1a1b9-30f0-41d0-a1ef-1bd59066cf95 req-7c0f9e79-dfe2-4300-bf12-898f6910c94d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Detach interface failed, port_id=8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b, reason: Instance 29ebb761-c674-4ed1-aae0-554adf945402 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:35:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.203 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updated VIF entry in instance network info cache for port 8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.203 243456 DEBUG nova.network.neutron [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [{"id": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "address": "fa:16:3e:12:72:96", "network": {"id": "6d7aad4f-1a53-4b74-a216-4cac4be4283b", "bridge": "br-int", "label": "tempest-network-smoke--1859156326", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b2cb81f-77", "ovs_interfaceid": "8b2cb81f-77d0-4a6f-88cc-acc03f6ad53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8f25c48f-b281-4784-a6b0-a2662d928d28", "address": "fa:16:3e:9f:88:f4", "network": {"id": "49ec66b0-8f5d-445b-a7e6-7fd41e785d9a", "bridge": "br-int", "label": "tempest-network-smoke--451825455", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9f:88f4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f25c48f-b2", "ovs_interfaceid": "8f25c48f-b281-4784-a6b0-a2662d928d28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.219 243456 DEBUG nova.network.neutron [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.228 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-29ebb761-c674-4ed1-aae0-554adf945402" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.229 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.230 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.231 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.231 243456 DEBUG oslo_concurrency.lockutils [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.232 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.232 243456 DEBUG nova.compute.manager [req-115dd162-2e4a-4380-9c35-a6d1bb9668cd req-8b898fb1-3632-4cd4-9674-b77c93b34a4c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-unplugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.237 243456 INFO nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Took 1.81 seconds to deallocate network for instance.
Feb 28 10:35:54 compute-0 ceph-mon[76304]: pgmap v2146: 305 pgs: 305 active+clean; 380 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.269 243456 DEBUG nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "29ebb761-c674-4ed1-aae0-554adf945402-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.270 243456 DEBUG oslo_concurrency.lockutils [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.271 243456 DEBUG nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] No waiting events found dispatching network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.271 243456 WARNING nova.compute.manager [req-6d105208-2bbb-4352-88fb-56ea506f5714 req-aa287118-5e9c-4b60-ac6d-5fecdecbafa2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received unexpected event network-vif-plugged-8f25c48f-b281-4784-a6b0-a2662d928d28 for instance with vm_state active and task_state deleting.
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.284 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.284 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.356 243456 DEBUG oslo_concurrency.processutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.517 243456 INFO nova.compute.manager [None req-c12e0a07-2f36-4543-b469-66593776abce ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Get console output
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.528 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:35:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 334 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Feb 28 10:35:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242944676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.896 243456 DEBUG oslo_concurrency.processutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.898 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.899 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.900 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.901 243456 INFO nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Terminating instance
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.903 243456 DEBUG nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.909 243456 DEBUG nova.compute.provider_tree [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.926 243456 DEBUG nova.scheduler.client.report [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:54 compute-0 kernel: tapf6a52694-af (unregistering): left promiscuous mode
Feb 28 10:35:54 compute-0 NetworkManager[49805]: <info>  [1772274954.9513] device (tapf6a52694-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.955 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:54 compute-0 ovn_controller[146846]: 2026-02-28T10:35:54Z|01336|binding|INFO|Releasing lport f6a52694-af4a-4ecc-926c-b1867c375983 from this chassis (sb_readonly=0)
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:54 compute-0 ovn_controller[146846]: 2026-02-28T10:35:54Z|01337|binding|INFO|Setting lport f6a52694-af4a-4ecc-926c-b1867c375983 down in Southbound
Feb 28 10:35:54 compute-0 ovn_controller[146846]: 2026-02-28T10:35:54Z|01338|binding|INFO|Removing iface tapf6a52694-af ovn-installed in OVS
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.972 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:5e:ba 10.100.0.12'], port_security=['fa:16:3e:21:5e:ba 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b7d98834-924e-4fbd-a701-d22949f44f77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7156ff74-6e4d-4300-84e4-6890f3b16e55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f6a52694-af4a-4ecc-926c-b1867c375983) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.974 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f6a52694-af4a-4ecc-926c-b1867c375983 in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 unbound from our chassis
Feb 28 10:35:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:54.979 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c48ff26a-49d0-4144-b27f-14431e751ba2
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:54 compute-0 nova_compute[243452]: 2026-02-28 10:35:54.992 243456 INFO nova.scheduler.client.report [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 29ebb761-c674-4ed1-aae0-554adf945402
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.002 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b30c8202-1f30-4b84-8061-06839151a1aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.035 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1d99bb1e-20a7-4e03-acfe-43f8feb1c402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.040 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdf1fdd-0633-4052-9650-c5338b44b040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Feb 28 10:35:55 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 15.426s CPU time.
Feb 28 10:35:55 compute-0 systemd-machined[209480]: Machine qemu-164-instance-00000083 terminated.
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.077 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[29aafdfd-d508-49ba-b3fe-3c39d94567a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.080 243456 DEBUG oslo_concurrency.lockutils [None req-28614745-3aba-401e-89bc-a9ef93c135ed be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "29ebb761-c674-4ed1-aae0-554adf945402" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b2467b65-7cbb-4c85-bb1f-26ac3532b3dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc48ff26a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:5f:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640275, 'reachable_time': 38982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360783, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[afe4cc98-8f33-41d5-9355-d53f7430f253]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640284, 'tstamp': 640284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc48ff26a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640286, 'tstamp': 640286}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.127 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.139 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc48ff26a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.140 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc48ff26a-40, col_values=(('external_ids', {'iface-id': 'cb4199b0-c270-4ea9-a607-db9799f6f157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:55.141 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.150 243456 INFO nova.virt.libvirt.driver [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Instance destroyed successfully.
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.150 243456 DEBUG nova.objects.instance [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid b7d98834-924e-4fbd-a701-d22949f44f77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.163 243456 DEBUG nova.virt.libvirt.vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:35:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1920589440',display_name='tempest-TestNetworkBasicOps-server-1920589440',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1920589440',id=131,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBx2yXJrlxl/hjn9MMHE5KhSD01UI6o5tdb/M6CuQZyKKSCBz23zQdd3YZK9kMbqEHNUQ9eYVQehuDxFu0Ax80TEtUbLmqV5yzH0bXR2RG7KLxh93Ak1z3EjsfV7s46ZjA==',key_name='tempest-TestNetworkBasicOps-1493336509',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:35:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-tu2phxkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:35:32Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b7d98834-924e-4fbd-a701-d22949f44f77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.163 243456 DEBUG nova.network.os_vif_util [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "f6a52694-af4a-4ecc-926c-b1867c375983", "address": "fa:16:3e:21:5e:ba", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6a52694-af", "ovs_interfaceid": "f6a52694-af4a-4ecc-926c-b1867c375983", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.164 243456 DEBUG nova.network.os_vif_util [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.164 243456 DEBUG os_vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.166 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6a52694-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.168 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.176 243456 INFO os_vif [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=f6a52694-af4a-4ecc-926c-b1867c375983,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6a52694-af')
Feb 28 10:35:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1242944676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.485 243456 INFO nova.virt.libvirt.driver [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deleting instance files /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77_del
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.486 243456 INFO nova.virt.libvirt.driver [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deletion of /var/lib/nova/instances/b7d98834-924e-4fbd-a701-d22949f44f77_del complete
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.540 243456 INFO nova.compute.manager [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.541 243456 DEBUG oslo.service.loopingcall [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.542 243456 DEBUG nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.542 243456 DEBUG nova.network.neutron [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:35:55 compute-0 nova_compute[243452]: 2026-02-28 10:35:55.651 243456 DEBUG nova.compute.manager [req-318199de-e63c-4481-bf4e-2e2a3128c625 req-90324762-20dd-4ddd-a0e1-95f12d42f74b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Received event network-vif-deleted-8f25c48f-b281-4784-a6b0-a2662d928d28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:56 compute-0 ceph-mon[76304]: pgmap v2147: 305 pgs: 305 active+clean; 334 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 118 op/s
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.601 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.601 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.602 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-unplugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.603 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG oslo_concurrency.lockutils [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 DEBUG nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] No waiting events found dispatching network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.604 243456 WARNING nova.compute.manager [req-0805e054-66a1-4ad4-a4f4-fd46d98fde91 req-c1f68a37-9002-4347-92e5-c6e9dcfd5c40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received unexpected event network-vif-plugged-f6a52694-af4a-4ecc-926c-b1867c375983 for instance with vm_state active and task_state deleting.
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.700 243456 DEBUG nova.network.neutron [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.722 243456 INFO nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Took 1.18 seconds to deallocate network for instance.
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.780 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.780 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Feb 28 10:35:56 compute-0 nova_compute[243452]: 2026-02-28 10:35:56.865 243456 DEBUG oslo_concurrency.processutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:35:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:35:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063512626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.405 243456 DEBUG oslo_concurrency.processutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.411 243456 DEBUG nova.compute.provider_tree [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.431 243456 DEBUG nova.scheduler.client.report [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.458 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.487 243456 INFO nova.scheduler.client.report [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance b7d98834-924e-4fbd-a701-d22949f44f77
Feb 28 10:35:57 compute-0 nova_compute[243452]: 2026-02-28 10:35:57.560 243456 DEBUG oslo_concurrency.lockutils [None req-e92a83bf-7724-4a3d-bff5-27ad2c37560a ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b7d98834-924e-4fbd-a701-d22949f44f77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.875 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:58 compute-0 ovn_controller[146846]: 2026-02-28T10:35:58Z|01339|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.193 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:58 compute-0 ovn_controller[146846]: 2026-02-28T10:35:58Z|01340|binding|INFO|Releasing lport cb4199b0-c270-4ea9-a607-db9799f6f157 from this chassis (sb_readonly=0)
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:58 compute-0 ceph-mon[76304]: pgmap v2148: 305 pgs: 305 active+clean; 277 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Feb 28 10:35:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2063512626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.708 243456 DEBUG nova.compute.manager [req-fb659427-0428-48ca-b568-469f2f1ec7d9 req-58ed66c2-5984-49e1-b618-ed245d93d08e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Received event network-vif-deleted-f6a52694-af4a-4ecc-926c-b1867c375983 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:35:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.1 MiB/s wr, 119 op/s
Feb 28 10:35:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.944 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.945 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.946 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.948 243456 INFO nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Terminating instance
Feb 28 10:35:58 compute-0 nova_compute[243452]: 2026-02-28 10:35:58.949 243456 DEBUG nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:35:59 compute-0 kernel: tap92f5c154-2f (unregistering): left promiscuous mode
Feb 28 10:35:59 compute-0 NetworkManager[49805]: <info>  [1772274959.0087] device (tap92f5c154-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:35:59 compute-0 ovn_controller[146846]: 2026-02-28T10:35:59Z|01341|binding|INFO|Releasing lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b from this chassis (sb_readonly=0)
Feb 28 10:35:59 compute-0 ovn_controller[146846]: 2026-02-28T10:35:59Z|01342|binding|INFO|Setting lport 92f5c154-2fa7-43e9-a6fd-da26d3ad985b down in Southbound
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 ovn_controller[146846]: 2026-02-28T10:35:59Z|01343|binding|INFO|Removing iface tap92f5c154-2f ovn-installed in OVS
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.021 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:eb:46 10.100.0.11'], port_security=['fa:16:3e:56:eb:46 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '367042aa-0043-4283-a399-ea4a6a1545f7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c48ff26a-49d0-4144-b27f-14431e751ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18ab5e58-5378-41c9-af44-86d27866eb7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef915b3-5462-468d-ad19-808529c0dfba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=92f5c154-2fa7-43e9-a6fd-da26d3ad985b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.024 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 92f5c154-2fa7-43e9-a6fd-da26d3ad985b in datapath c48ff26a-49d0-4144-b27f-14431e751ba2 unbound from our chassis
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.026 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c48ff26a-49d0-4144-b27f-14431e751ba2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.027 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[708379da-f18a-476f-85c7-03c4cbcea8ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.028 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 namespace which is not needed anymore
Feb 28 10:35:59 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Deactivated successfully.
Feb 28 10:35:59 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000081.scope: Consumed 15.124s CPU time.
Feb 28 10:35:59 compute-0 systemd-machined[209480]: Machine qemu-162-instance-00000081 terminated.
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.171 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : haproxy version is 2.8.14-c23fe91
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [NOTICE]   (357923) : path to executable is /usr/sbin/haproxy
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : Exiting Master process...
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : Exiting Master process...
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [ALERT]    (357923) : Current worker (357925) exited with code 143 (Terminated)
Feb 28 10:35:59 compute-0 neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2[357912]: [WARNING]  (357923) : All workers exited. Exiting... (0)
Feb 28 10:35:59 compute-0 systemd[1]: libpod-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope: Deactivated successfully.
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.235 243456 INFO nova.virt.libvirt.driver [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Instance destroyed successfully.
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.236 243456 DEBUG nova.objects.instance [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 367042aa-0043-4283-a399-ea4a6a1545f7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:35:59 compute-0 podman[360863]: 2026-02-28 10:35:59.242599625 +0000 UTC m=+0.107805885 container died c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.249 243456 DEBUG nova.virt.libvirt.vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1965662134',display_name='tempest-TestNetworkBasicOps-server-1965662134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1965662134',id=129,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAQEbRWdJEdDDH9ryFVMZgxJMx06zvko9WQ/yItISiQGZQFgF2ldPicyjXhFLx3IsCNHqxs8LEYCBDvAtjLxsqEXUJAPPXqcb32CUcOFzuHymtVJP4PyLuUKki41H129Mg==',key_name='tempest-TestNetworkBasicOps-1815275820',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-9q1u5ht7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:34:56Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=367042aa-0043-4283-a399-ea4a6a1545f7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.250 243456 DEBUG nova.network.os_vif_util [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "address": "fa:16:3e:56:eb:46", "network": {"id": "c48ff26a-49d0-4144-b27f-14431e751ba2", "bridge": "br-int", "label": "tempest-network-smoke--1404315148", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92f5c154-2f", "ovs_interfaceid": "92f5c154-2fa7-43e9-a6fd-da26d3ad985b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.251 243456 DEBUG nova.network.os_vif_util [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.251 243456 DEBUG os_vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.254 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92f5c154-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.264 243456 INFO os_vif [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:eb:46,bridge_name='br-int',has_traffic_filtering=True,id=92f5c154-2fa7-43e9-a6fd-da26d3ad985b,network=Network(c48ff26a-49d0-4144-b27f-14431e751ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92f5c154-2f')
Feb 28 10:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7cbf1e9a1127d383c385bf63b55eb566697bacb083039d81a658a47a2f6ccb6-merged.mount: Deactivated successfully.
Feb 28 10:35:59 compute-0 podman[360863]: 2026-02-28 10:35:59.28124946 +0000 UTC m=+0.146455680 container cleanup c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:35:59 compute-0 systemd[1]: libpod-conmon-c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3.scope: Deactivated successfully.
Feb 28 10:35:59 compute-0 podman[360916]: 2026-02-28 10:35:59.365667982 +0000 UTC m=+0.058306803 container remove c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc1703-b240-40a5-b3e7-c7ca36f6b1e5]: (4, ('Sat Feb 28 10:35:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 (c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3)\nc2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3\nSat Feb 28 10:35:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 (c2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3)\nc2f65d6c222ee23a1f1591501bc6cfbd2866d525d5e9f3f02eb915017f2c36f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.374 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a411ce32-a9b9-46eb-b342-9ffa2269ae3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.375 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc48ff26a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.377 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 kernel: tapc48ff26a-40: left promiscuous mode
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[601791a7-0beb-4ea7-a6cb-49c561be44f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.401 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cce141d3-5d3c-4daa-8aeb-25ef652eb1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.404 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a7ea3a-1785-459e-87ff-68b5ae12fe67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[52db82cd-edc4-41a0-afc1-a1d3e263e7a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640267, 'reachable_time': 27917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360935, 'error': None, 'target': 'ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:35:59 compute-0 systemd[1]: run-netns-ovnmeta\x2dc48ff26a\x2d49d0\x2d4144\x2db27f\x2d14431e751ba2.mount: Deactivated successfully.
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.428 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c48ff26a-49d0-4144-b27f-14431e751ba2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:35:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:35:59.428 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[f70334a2-6113-476e-b95a-1bc8394d1754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.581 243456 INFO nova.virt.libvirt.driver [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deleting instance files /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7_del
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.582 243456 INFO nova.virt.libvirt.driver [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deletion of /var/lib/nova/instances/367042aa-0043-4283-a399-ea4a6a1545f7_del complete
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.658 243456 INFO nova.compute.manager [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 0.71 seconds to destroy the instance on the hypervisor.
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.658 243456 DEBUG oslo.service.loopingcall [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.659 243456 DEBUG nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:35:59 compute-0 nova_compute[243452]: 2026-02-28 10:35:59.659 243456 DEBUG nova.network.neutron [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:36:00 compute-0 ceph-mon[76304]: pgmap v2149: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.1 MiB/s wr, 119 op/s
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.523 243456 DEBUG nova.network.neutron [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.544 243456 INFO nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Took 0.89 seconds to deallocate network for instance.
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.622 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.623 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.683 243456 DEBUG oslo_concurrency.processutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.785 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.786 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 WARNING nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-unplugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state deleted and task_state None.
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.787 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG oslo_concurrency.lockutils [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] No waiting events found dispatching network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 WARNING nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received unexpected event network-vif-plugged-92f5c154-2fa7-43e9-a6fd-da26d3ad985b for instance with vm_state deleted and task_state None.
Feb 28 10:36:00 compute-0 nova_compute[243452]: 2026-02-28 10:36:00.788 243456 DEBUG nova.compute.manager [req-66afb6b5-aa66-4e67-815b-9a700ef1e799 req-f4aadaf1-4cae-4e54-a88a-a767f26214ec 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Received event network-vif-deleted-92f5c154-2fa7-43e9-a6fd-da26d3ad985b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 187 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 532 KiB/s wr, 113 op/s
Feb 28 10:36:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:36:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187903459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.217 243456 DEBUG oslo_concurrency.processutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.225 243456 DEBUG nova.compute.provider_tree [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.248 243456 DEBUG nova.scheduler.client.report [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.270 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.296 243456 INFO nova.scheduler.client.report [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 367042aa-0043-4283-a399-ea4a6a1545f7
Feb 28 10:36:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4187903459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:01 compute-0 nova_compute[243452]: 2026-02-28 10:36:01.364 243456 DEBUG oslo_concurrency.lockutils [None req-6185c579-10d3-493c-8d12-2d0b096da8a7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "367042aa-0043-4283-a399-ea4a6a1545f7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:02 compute-0 ceph-mon[76304]: pgmap v2150: 305 pgs: 305 active+clean; 187 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 532 KiB/s wr, 113 op/s
Feb 28 10:36:02 compute-0 nova_compute[243452]: 2026-02-28 10:36:02.617 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274947.6078107, f13d8adc-1a08-412b-a9fa-c8a601cda923 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:02 compute-0 nova_compute[243452]: 2026-02-28 10:36:02.618 243456 INFO nova.compute.manager [-] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] VM Stopped (Lifecycle Event)
Feb 28 10:36:02 compute-0 nova_compute[243452]: 2026-02-28 10:36:02.640 243456 DEBUG nova.compute.manager [None req-0f59a399-c570-4a6f-84a4-759152fb0719 - - - - - -] [instance: f13d8adc-1a08-412b-a9fa-c8a601cda923] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 153 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 15 KiB/s wr, 90 op/s
Feb 28 10:36:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:04 compute-0 nova_compute[243452]: 2026-02-28 10:36:04.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:04 compute-0 ceph-mon[76304]: pgmap v2151: 305 pgs: 305 active+clean; 153 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 15 KiB/s wr, 90 op/s
Feb 28 10:36:04 compute-0 nova_compute[243452]: 2026-02-28 10:36:04.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 10:36:06 compute-0 ceph-mon[76304]: pgmap v2152: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 74 op/s
Feb 28 10:36:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 57 op/s
Feb 28 10:36:07 compute-0 nova_compute[243452]: 2026-02-28 10:36:07.045 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274952.043712, 29ebb761-c674-4ed1-aae0-554adf945402 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:07 compute-0 nova_compute[243452]: 2026-02-28 10:36:07.045 243456 INFO nova.compute.manager [-] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] VM Stopped (Lifecycle Event)
Feb 28 10:36:07 compute-0 nova_compute[243452]: 2026-02-28 10:36:07.061 243456 DEBUG nova.compute.manager [None req-1267ded7-c238-44ae-bbb9-196781f08275 - - - - - -] [instance: 29ebb761-c674-4ed1-aae0-554adf945402] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:07 compute-0 nova_compute[243452]: 2026-02-28 10:36:07.313 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:08 compute-0 ceph-mon[76304]: pgmap v2153: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 57 op/s
Feb 28 10:36:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 28 10:36:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:09 compute-0 podman[360960]: 2026-02-28 10:36:09.154694206 +0000 UTC m=+0.080474941 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:36:09 compute-0 podman[360959]: 2026-02-28 10:36:09.166286024 +0000 UTC m=+0.100972471 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:36:09 compute-0 nova_compute[243452]: 2026-02-28 10:36:09.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:09 compute-0 nova_compute[243452]: 2026-02-28 10:36:09.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:10 compute-0 nova_compute[243452]: 2026-02-28 10:36:10.148 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274955.147958, b7d98834-924e-4fbd-a701-d22949f44f77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:10 compute-0 nova_compute[243452]: 2026-02-28 10:36:10.149 243456 INFO nova.compute.manager [-] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] VM Stopped (Lifecycle Event)
Feb 28 10:36:10 compute-0 nova_compute[243452]: 2026-02-28 10:36:10.169 243456 DEBUG nova.compute.manager [None req-945f364a-9cef-47bd-a422-268a87865e9f - - - - - -] [instance: b7d98834-924e-4fbd-a701-d22949f44f77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:10 compute-0 ceph-mon[76304]: pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 28 10:36:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Feb 28 10:36:12 compute-0 ceph-mon[76304]: pgmap v2155: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Feb 28 10:36:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Feb 28 10:36:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:14 compute-0 nova_compute[243452]: 2026-02-28 10:36:14.187 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772274959.1860487, 367042aa-0043-4283-a399-ea4a6a1545f7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:14 compute-0 nova_compute[243452]: 2026-02-28 10:36:14.188 243456 INFO nova.compute.manager [-] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] VM Stopped (Lifecycle Event)
Feb 28 10:36:14 compute-0 nova_compute[243452]: 2026-02-28 10:36:14.217 243456 DEBUG nova.compute.manager [None req-473571e4-5217-4f47-b87e-cd46d383ad00 - - - - - -] [instance: 367042aa-0043-4283-a399-ea4a6a1545f7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:14 compute-0 nova_compute[243452]: 2026-02-28 10:36:14.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:14 compute-0 ceph-mon[76304]: pgmap v2156: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Feb 28 10:36:14 compute-0 nova_compute[243452]: 2026-02-28 10:36:14.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 596 B/s wr, 3 op/s
Feb 28 10:36:16 compute-0 ceph-mon[76304]: pgmap v2157: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 596 B/s wr, 3 op/s
Feb 28 10:36:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:16.693 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:36:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:16.694 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:36:16 compute-0 nova_compute[243452]: 2026-02-28 10:36:16.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:18 compute-0 ceph-mon[76304]: pgmap v2158: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:19 compute-0 nova_compute[243452]: 2026-02-28 10:36:19.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:19 compute-0 nova_compute[243452]: 2026-02-28 10:36:19.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.508 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:9f:67 2001:db8:0:1:f816:3eff:fea2:9f67 2001:db8::f816:3eff:fea2:9f67'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea2:9f67/64 2001:db8::f816:3eff:fea2:9f67/64', 'neutron:device_id': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a03e0d2d-7933-48b4-9d0a-61369ba848c9) old=Port_Binding(mac=['fa:16:3e:a2:9f:67 2001:db8::f816:3eff:fea2:9f67'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea2:9f67/64', 'neutron:device_id': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:36:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.510 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a03e0d2d-7933-48b4-9d0a-61369ba848c9 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c updated
Feb 28 10:36:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.512 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:36:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.513 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[675155e6-e4f2-44ce-be1c-a89434d37d25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:19.696 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:20 compute-0 ceph-mon[76304]: pgmap v2159: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:22 compute-0 sshd-session[361003]: Received disconnect from 103.217.144.161 port 44102:11: Bye Bye [preauth]
Feb 28 10:36:22 compute-0 sshd-session[361003]: Disconnected from authenticating user root 103.217.144.161 port 44102 [preauth]
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.171 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.172 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.189 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.276 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.277 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.286 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.286 243456 INFO nova.compute.claims [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.384 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:22 compute-0 ceph-mon[76304]: pgmap v2160: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:36:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540571358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:22 compute-0 nova_compute[243452]: 2026-02-28 10:36:22.991 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.001 243456 DEBUG nova.compute.provider_tree [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:36:23 compute-0 sshd-session[361006]: Received disconnect from 103.67.78.202 port 37546:11: Bye Bye [preauth]
Feb 28 10:36:23 compute-0 sshd-session[361006]: Disconnected from authenticating user root 103.67.78.202 port 37546 [preauth]
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.024 243456 DEBUG nova.scheduler.client.report [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.062 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.063 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.116 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.117 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.138 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.164 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.258 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.261 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.262 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating image(s)
Feb 28 10:36:23 compute-0 sshd-session[361008]: Received disconnect from 103.67.78.202 port 51454:11: Bye Bye [preauth]
Feb 28 10:36:23 compute-0 sshd-session[361008]: Disconnected from authenticating user root 103.67.78.202 port 51454 [preauth]
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.298 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.334 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.363 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.369 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1540571358' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.464 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.466 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.468 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.468 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.500 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.504 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.769 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.864 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.918 243456 DEBUG nova.policy [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:36:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:23 compute-0 nova_compute[243452]: 2026-02-28 10:36:23.976 243456 DEBUG nova.objects.instance [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.015 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.015 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Ensure instance console log exists: /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.016 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.016 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.017 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.273 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:24 compute-0 ceph-mon[76304]: pgmap v2161: 305 pgs: 305 active+clean; 153 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:36:24 compute-0 nova_compute[243452]: 2026-02-28 10:36:24.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 165 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s wr, 0 op/s
Feb 28 10:36:26 compute-0 ceph-mon[76304]: pgmap v2162: 305 pgs: 305 active+clean; 165 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 657 KiB/s wr, 0 op/s
Feb 28 10:36:26 compute-0 nova_compute[243452]: 2026-02-28 10:36:26.514 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully created port: 4b48043a-8194-4cf4-bd7f-1c138d7960ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:36:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 181 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 10:36:26 compute-0 nova_compute[243452]: 2026-02-28 10:36:26.936 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:26 compute-0 nova_compute[243452]: 2026-02-28 10:36:26.937 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:26 compute-0 nova_compute[243452]: 2026-02-28 10:36:26.953 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.007 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.008 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.014 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.015 243456 INFO nova.compute.claims [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.123 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:36:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3799923328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.682 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.688 243456 DEBUG nova.compute.provider_tree [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.706 243456 DEBUG nova.scheduler.client.report [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.733 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.734 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.790 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.791 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.819 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.841 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.946 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully updated port: 4b48043a-8194-4cf4-bd7f-1c138d7960ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.950 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.952 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.952 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating image(s)
Feb 28 10:36:27 compute-0 nova_compute[243452]: 2026-02-28 10:36:27.976 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.004 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.031 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.035 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.067 243456 DEBUG nova.policy [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.073 243456 DEBUG nova.compute.manager [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG nova.compute.manager [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.074 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.075 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.076 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.096 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.097 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.098 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.098 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.121 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.124 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.329 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.406 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:36:28 compute-0 ceph-mon[76304]: pgmap v2163: 305 pgs: 305 active+clean; 181 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 10:36:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3799923328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.501 243456 DEBUG nova.objects.instance [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.520 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.521 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Ensure instance console log exists: /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.521 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.522 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.522 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.534 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.813 243456 DEBUG nova.network.neutron [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.829 243456 DEBUG oslo_concurrency.lockutils [req-ae47af30-8196-4839-ac0c-74dce5fc1180 req-c150eff6-08bc-410c-b18d-ce297cd50b52 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.830 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.830 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:36:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:36:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:28 compute-0 nova_compute[243452]: 2026-02-28 10:36:28.932 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully created port: b8c427fe-78c5-4d60-9c44-68985f50b598 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.023 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:36:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:36:29
Feb 28 10:36:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:36:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:36:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Feb 28 10:36:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.508 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully created port: ffaef000-523c-4637-99e6-2cc96b907c15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.844 243456 DEBUG nova.network.neutron [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.936 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.937 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance network_info: |[{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.939 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start _get_guest_xml network_info=[{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.946 243456 WARNING nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.954 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.954 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.959 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.959 243456 DEBUG nova.virt.libvirt.host [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.960 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.961 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.962 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.963 243456 DEBUG nova.virt.hardware [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:36:29 compute-0 nova_compute[243452]: 2026-02-28 10:36:29.966 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.238 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully updated port: b8c427fe-78c5-4d60-9c44-68985f50b598 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.384 243456 DEBUG nova.compute.manager [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.384 243456 DEBUG nova.compute.manager [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.385 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.385 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.386 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:36:30 compute-0 ceph-mon[76304]: pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:36:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:36:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025732497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.494 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.534 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.542 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:36:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 235 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 42 op/s
Feb 28 10:36:30 compute-0 nova_compute[243452]: 2026-02-28 10:36:30.886 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:36:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:36:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072729338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.133 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.135 243456 DEBUG nova.virt.libvirt.vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:23Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.135 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.136 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.137 243456 DEBUG nova.objects.instance [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.161 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <name>instance-00000084</name>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:36:29</nova:creationTime>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:36:31 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <system>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="serial">b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="uuid">b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </system>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <os>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </os>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <features>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </features>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk">
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config">
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </source>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:36:31 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e3:60:12"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <target dev="tap4b48043a-81"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log" append="off"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <video>
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </video>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:36:31 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:36:31 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:36:31 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:36:31 compute-0 nova_compute[243452]: </domain>
Feb 28 10:36:31 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.162 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Preparing to wait for external event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.163 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.164 243456 DEBUG nova.virt.libvirt.vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:23Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.164 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.165 243456 DEBUG nova.network.os_vif_util [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.165 243456 DEBUG os_vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.167 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.168 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b48043a-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.173 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b48043a-81, col_values=(('external_ids', {'iface-id': '4b48043a-8194-4cf4-bd7f-1c138d7960ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:60:12', 'vm-uuid': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:31 compute-0 NetworkManager[49805]: <info>  [1772274991.1769] manager: (tap4b48043a-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.184 243456 INFO os_vif [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81')
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.222 243456 DEBUG nova.network.neutron [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.268 243456 DEBUG oslo_concurrency.lockutils [req-99638faf-1a3c-4277-b36a-5398292134ff req-11db7e0b-03ef-4d54-b411-caf191eb44f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.293 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e3:60:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.294 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Using config drive
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.322 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2025732497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4072729338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.813 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Creating config drive at /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.819 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6s8jk2ol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.866 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Successfully updated port: ffaef000-523c-4637-99e6-2cc96b907c15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.968 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.969 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.969 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:36:31 compute-0 nova_compute[243452]: 2026-02-28 10:36:31.974 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6s8jk2ol" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.016 243456 DEBUG nova.storage.rbd_utils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.022 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.176 243456 DEBUG oslo_concurrency.processutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.177 243456 INFO nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deleting local config drive /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/disk.config because it was imported into RBD.
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.219 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:36:32 compute-0 kernel: tap4b48043a-81: entered promiscuous mode
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.2370] manager: (tap4b48043a-81): new Tun device (/org/freedesktop/NetworkManager/Devices/556)
Feb 28 10:36:32 compute-0 ovn_controller[146846]: 2026-02-28T10:36:32Z|01344|binding|INFO|Claiming lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac for this chassis.
Feb 28 10:36:32 compute-0 ovn_controller[146846]: 2026-02-28T10:36:32Z|01345|binding|INFO|4b48043a-8194-4cf4-bd7f-1c138d7960ac: Claiming fa:16:3e:e3:60:12 10.100.0.5
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.253 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 systemd-machined[209480]: New machine qemu-165-instance-00000084.
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.279 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:60:12 10.100.0.5'], port_security=['fa:16:3e:e3:60:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76f6832b-9f40-4eef-bddf-580a90432b21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f90d2e4f-4906-4cb9-bc1a-6b3a4bcb9d24, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=4b48043a-8194-4cf4-bd7f-1c138d7960ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.281 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 4b48043a-8194-4cf4-bd7f-1c138d7960ac in datapath f91ad996-44c8-45ac-a5d6-208982ca2ce1 bound to our chassis
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.283 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f91ad996-44c8-45ac-a5d6-208982ca2ce1
Feb 28 10:36:32 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Feb 28 10:36:32 compute-0 ovn_controller[146846]: 2026-02-28T10:36:32Z|01346|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac ovn-installed in OVS
Feb 28 10:36:32 compute-0 ovn_controller[146846]: 2026-02-28T10:36:32Z|01347|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac up in Southbound
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.294 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 systemd-udevd[361523]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.299 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9bd6d-3eb4-49e4-bda4-f77aa3526738]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.301 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf91ad996-41 in ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.304 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf91ad996-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.304 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1079cc95-e727-4d7e-a23f-8465e74a3036]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.306 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6059286-ef59-4a08-86a4-69edb9794c09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.3149] device (tap4b48043a-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.3159] device (tap4b48043a-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.321 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e36624b2-4c49-4028-8ff7-27d57df90926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff396e1c-053e-4727-854c-17c8bd03aeab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.385 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a543177f-46f9-4a2d-ae51-97ef724ec540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.392 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ff70b10e-c103-4a9d-9c98-acd9419ab5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 systemd-udevd[361526]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.3937] manager: (tapf91ad996-40): new Veth device (/org/freedesktop/NetworkManager/Devices/557)
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.424 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd86172-a4aa-4949-bd94-f4d7ab44ae03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.427 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c748978-de71-4c44-b96e-e2cb1ea7fb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.4536] device (tapf91ad996-40): carrier: link connected
Feb 28 10:36:32 compute-0 ceph-mon[76304]: pgmap v2165: 305 pgs: 305 active+clean; 235 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 42 op/s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.458 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[192dbf3e-5d30-4541-8c41-1fd320b2ceb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.477 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c306817-2bda-4da2-a317-9c7a2ba54829]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91ad996-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:58:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649973, 'reachable_time': 28350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361555, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.494 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c307c35e-66ec-418b-becb-8c2dfde929c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:585b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649973, 'tstamp': 649973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361556, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.512 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[75b8e8ce-3faa-4415-ba77-ee963b8a6bd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91ad996-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:58:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649973, 'reachable_time': 28350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361557, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.547 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30c9e605-299c-4f6d-8954-24763ab256fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.616 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[128e0a71-7b38-41e6-ac13-de46b50e55ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.619 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91ad996-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.619 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.620 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf91ad996-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:32 compute-0 kernel: tapf91ad996-40: entered promiscuous mode
Feb 28 10:36:32 compute-0 NetworkManager[49805]: <info>  [1772274992.6240] manager: (tapf91ad996-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.629 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf91ad996-40, col_values=(('external_ids', {'iface-id': '806ea448-5fbd-4b2a-a972-1602ffe39b97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:32 compute-0 ovn_controller[146846]: 2026-02-28T10:36:32Z|01348|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.632 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.640 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.639 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f539c9f5-3e61-4d92-b721-55ceb09796d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.643 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-f91ad996-44c8-45ac-a5d6-208982ca2ce1
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/f91ad996-44c8-45ac-a5d6-208982ca2ce1.pid.haproxy
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID f91ad996-44c8-45ac-a5d6-208982ca2ce1
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:36:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:32.644 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'env', 'PROCESS_TAG=haproxy-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f91ad996-44c8-45ac-a5d6-208982ca2ce1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.685 243456 DEBUG nova.compute.manager [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.686 243456 DEBUG nova.compute.manager [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-ffaef000-523c-4637-99e6-2cc96b907c15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.687 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.797 243456 DEBUG nova.compute.manager [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.798 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.798 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.799 243456 DEBUG oslo_concurrency.lockutils [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:32 compute-0 nova_compute[243452]: 2026-02-28 10:36:32.799 243456 DEBUG nova.compute.manager [req-cd013528-6b5d-4045-9085-f1ba3e09960a req-d4d786df-32a5-46b6-b3aa-03c5a8c1c5b0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Processing event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:36:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 28 10:36:33 compute-0 podman[361589]: 2026-02-28 10:36:33.032035909 +0000 UTC m=+0.063901041 container create 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:36:33 compute-0 podman[361589]: 2026-02-28 10:36:32.996993856 +0000 UTC m=+0.028859038 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:36:33 compute-0 systemd[1]: Started libpod-conmon-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope.
Feb 28 10:36:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16788202b082e39979c66011ee31c3dde40edd942458ccb06127574de32bd96f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:33 compute-0 podman[361589]: 2026-02-28 10:36:33.153288394 +0000 UTC m=+0.185153546 container init 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:36:33 compute-0 podman[361589]: 2026-02-28 10:36:33.160794086 +0000 UTC m=+0.192659198 container start 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 28 10:36:33 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : New worker (361611) forked
Feb 28 10:36:33 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : Loading success.
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.402 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.404 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4018123, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.404 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Started (Lifecycle Event)
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.407 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.412 243456 INFO nova.virt.libvirt.driver [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance spawned successfully.
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.412 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.455 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.464 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.468 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.469 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.469 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.470 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.470 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.471 243456 DEBUG nova.virt.libvirt.driver [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.525 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.526 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4022837, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.526 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Paused (Lifecycle Event)
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.602 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.607 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274993.4070237, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.609 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Resumed (Lifecycle Event)
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.620 243456 INFO nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 10.36 seconds to spawn the instance on the hypervisor.
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.621 243456 DEBUG nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.658 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.663 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.708 243456 INFO nova.compute.manager [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 11.46 seconds to build instance.
Feb 28 10:36:33 compute-0 nova_compute[243452]: 2026-02-28 10:36:33.736 243456 DEBUG oslo_concurrency.lockutils [None req-df42db90-2d14-4d8b-b694-813977090186 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.331 243456 DEBUG nova.network.neutron [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.357 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.358 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance network_info: |[{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.358 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.359 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port ffaef000-523c-4637-99e6-2cc96b907c15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.366 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start _get_guest_xml network_info=[{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.374 243456 WARNING nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.382 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.384 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.402 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.403 243456 DEBUG nova.virt.libvirt.host [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.404 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.405 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.406 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.407 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.408 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.408 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.409 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.410 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.410 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.411 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.411 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.412 243456 DEBUG nova.virt.hardware [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.419 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:34 compute-0 ceph-mon[76304]: pgmap v2166: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 28 10:36:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.925 243456 DEBUG nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.926 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.927 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.927 243456 DEBUG oslo_concurrency.lockutils [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.928 243456 DEBUG nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.928 243456 WARNING nova.compute.manager [req-6d6aa6a4-867e-4176-a389-c0fc29c3349e req-49bd2bde-298a-4f19-9917-d6058dffcbd8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with vm_state active and task_state None.
Feb 28 10:36:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:36:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290399496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:34 compute-0 nova_compute[243452]: 2026-02-28 10:36:34.972 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.009 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.014 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3290399496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:36:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333573190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.632 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.635 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.636 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.637 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.639 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.640 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.641 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.643 243456 DEBUG nova.objects.instance [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.694 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <uuid>555d381e-ed8a-4a73-9f43-f79c0b0a0afd</uuid>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <name>instance-00000085</name>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1102866438</nova:name>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:36:34</nova:creationTime>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:port uuid="b8c427fe-78c5-4d60-9c44-68985f50b598">
Feb 28 10:36:35 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <nova:port uuid="ffaef000-523c-4637-99e6-2cc96b907c15">
Feb 28 10:36:35 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:b75a" ipVersion="6"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:b75a" ipVersion="6"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <system>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="serial">555d381e-ed8a-4a73-9f43-f79c0b0a0afd</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="uuid">555d381e-ed8a-4a73-9f43-f79c0b0a0afd</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </system>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <os>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </os>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <features>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </features>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk">
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config">
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </source>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:36:35 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:d6:2a:81"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <target dev="tapb8c427fe-78"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:28:b7:5a"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <target dev="tapffaef000-52"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/console.log" append="off"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <video>
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </video>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:36:35 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:36:35 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:36:35 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:36:35 compute-0 nova_compute[243452]: </domain>
Feb 28 10:36:35 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.696 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Preparing to wait for external event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.697 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.698 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.698 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Preparing to wait for external event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.699 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.701 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.701 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.702 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.703 243456 DEBUG os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.705 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.706 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.710 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c427fe-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.711 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8c427fe-78, col_values=(('external_ids', {'iface-id': 'b8c427fe-78c5-4d60-9c44-68985f50b598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:2a:81', 'vm-uuid': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 NetworkManager[49805]: <info>  [1772274995.7146] manager: (tapb8c427fe-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.722 243456 INFO os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78')
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.723 243456 DEBUG nova.virt.libvirt.vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:36:27Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.723 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.725 243456 DEBUG nova.network.os_vif_util [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.726 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.727 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.731 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffaef000-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapffaef000-52, col_values=(('external_ids', {'iface-id': 'ffaef000-523c-4637-99e6-2cc96b907c15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:b7:5a', 'vm-uuid': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.733 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 NetworkManager[49805]: <info>  [1772274995.7352] manager: (tapffaef000-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.753 243456 INFO os_vif [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52')
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.912 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.913 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.913 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:d6:2a:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.914 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:28:b7:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.915 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Using config drive
Feb 28 10:36:35 compute-0 nova_compute[243452]: 2026-02-28 10:36:35.953 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:36 compute-0 ceph-mon[76304]: pgmap v2167: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 589 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 28 10:36:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3333573190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:36:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 101 op/s
Feb 28 10:36:37 compute-0 nova_compute[243452]: 2026-02-28 10:36:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:38 compute-0 ceph-mon[76304]: pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 101 op/s
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.535 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Creating config drive at /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.540 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl2zmk9w_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:38 compute-0 sshd-session[361747]: Invalid user sol from 45.148.10.240 port 41060
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.681 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpl2zmk9w_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.723 243456 DEBUG nova.storage.rbd_utils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.729 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.898 243456 DEBUG oslo_concurrency.processutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config 555d381e-ed8a-4a73-9f43-f79c0b0a0afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.899 243456 INFO nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deleting local config drive /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd/disk.config because it was imported into RBD.
Feb 28 10:36:38 compute-0 sshd-session[361747]: Connection closed by invalid user sol 45.148.10.240 port 41060 [preauth]
Feb 28 10:36:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:38 compute-0 NetworkManager[49805]: <info>  [1772274998.9654] manager: (tapb8c427fe-78): new Tun device (/org/freedesktop/NetworkManager/Devices/561)
Feb 28 10:36:38 compute-0 kernel: tapb8c427fe-78: entered promiscuous mode
Feb 28 10:36:38 compute-0 ovn_controller[146846]: 2026-02-28T10:36:38Z|01349|binding|INFO|Claiming lport b8c427fe-78c5-4d60-9c44-68985f50b598 for this chassis.
Feb 28 10:36:38 compute-0 ovn_controller[146846]: 2026-02-28T10:36:38Z|01350|binding|INFO|b8c427fe-78c5-4d60-9c44-68985f50b598: Claiming fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:38 compute-0 NetworkManager[49805]: <info>  [1772274998.9855] manager: (tapffaef000-52): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Feb 28 10:36:38 compute-0 kernel: tapffaef000-52: entered promiscuous mode
Feb 28 10:36:38 compute-0 nova_compute[243452]: 2026-02-28 10:36:38.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:38 compute-0 systemd-udevd[361803]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:36:38 compute-0 ovn_controller[146846]: 2026-02-28T10:36:38Z|01351|if_status|INFO|Dropped 1 log messages in last 82 seconds (most recently, 82 seconds ago) due to excessive rate
Feb 28 10:36:38 compute-0 ovn_controller[146846]: 2026-02-28T10:36:38Z|01352|if_status|INFO|Not updating pb chassis for ffaef000-523c-4637-99e6-2cc96b907c15 now as sb is readonly
Feb 28 10:36:38 compute-0 systemd-udevd[361804]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01353|binding|INFO|Claiming lport ffaef000-523c-4637-99e6-2cc96b907c15 for this chassis.
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01354|binding|INFO|ffaef000-523c-4637-99e6-2cc96b907c15: Claiming fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.0108] device (tapb8c427fe-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.0123] device (tapb8c427fe-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.006 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2a:81 10.100.0.13'], port_security=['fa:16:3e:d6:2a:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8c427fe-78c5-4d60-9c44-68985f50b598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.009 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8c427fe-78c5-4d60-9c44-68985f50b598 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d bound to our chassis
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.0158] device (tapffaef000-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.016 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.0203] device (tapffaef000-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:36:39 compute-0 systemd-machined[209480]: New machine qemu-166-instance-00000085.
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.029 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], port_security=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:b75a/64 2001:db8::f816:3eff:fe28:b75a/64', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ffaef000-523c-4637-99e6-2cc96b907c15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9421fa-6d06-4ee8-89d0-b3d8967c47c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.032 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape68f9d98-c1 in ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.034 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape68f9d98-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.035 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3757e409-6bc2-4acf-81b7-fa06e5082f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.036 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce73544e-5534-4eae-8df1-133986e0a6e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000085.
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.038 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01355|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 ovn-installed in OVS
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.053 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd60d04a-720e-44c9-a2ae-4d92ceaf6dd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01356|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 up in Southbound
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01357|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 ovn-installed in OVS
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.060 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01358|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 up in Southbound
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.071 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27e0ac0c-6d2a-46ec-852d-9e5ea7dbb88f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.135 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5deb49f6-c992-4e36-8536-eaa2c5467bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.146 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[547f1856-bcee-4f75-9142-8a8c01a86130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.1480] manager: (tape68f9d98-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/563)
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.180 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ac698580-cd3f-47a6-9ce0-e1be871efab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.184 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f0edab68-533a-45c0-a7d8-ba40d1eae41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.2082] device (tape68f9d98-c0): carrier: link connected
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.214 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b526eb24-94d3-4e83-83a0-f04183f1c07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.235 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[68ddecb8-b273-4032-91dd-73d03bb85746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361840, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.252 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[054472a5-24be-44fb-a4d1-66cefc0f1f95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:f24a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650649, 'tstamp': 650649}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361841, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.278 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d577e852-3cbd-4d1c-a808-7b913f4acacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361842, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.326 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[95bf6757-db84-47e6-a97e-66c99e5e7c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c77684ee-2377-48dc-b8b8-5ea63ae5afae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.396 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.397 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.397 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:39 compute-0 kernel: tape68f9d98-c0: entered promiscuous mode
Feb 28 10:36:39 compute-0 NetworkManager[49805]: <info>  [1772274999.4010] manager: (tape68f9d98-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.404 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.404 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 ovn_controller[146846]: 2026-02-28T10:36:39Z|01359|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.407 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf20539e-b333-49e6-9771-f1f9beb31d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.411 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.pid.haproxy
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:39.413 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'env', 'PROCESS_TAG=haproxy-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e68f9d98-c075-4ed0-b1ee-ef05de1c055d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.663 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274999.660911, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.663 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Started (Lifecycle Event)
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.714 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.720 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772274999.6635458, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.721 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Paused (Lifecycle Event)
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.795 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.804 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:36:39 compute-0 podman[361917]: 2026-02-28 10:36:39.83512343 +0000 UTC m=+0.072261018 container create 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:36:39 compute-0 systemd[1]: Started libpod-conmon-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope.
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.893 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:36:39 compute-0 podman[361917]: 2026-02-28 10:36:39.80793918 +0000 UTC m=+0.045076748 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:36:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377aba4e407f3a2f65f2a4eb7e0e22e21a70346077c645cb2f77ea779d688910/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:39 compute-0 podman[361917]: 2026-02-28 10:36:39.947532174 +0000 UTC m=+0.184669732 container init 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.952 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port ffaef000-523c-4637-99e6-2cc96b907c15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.953 243456 DEBUG nova.network.neutron [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:39 compute-0 podman[361917]: 2026-02-28 10:36:39.957119186 +0000 UTC m=+0.194256744 container start 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:36:39 compute-0 podman[361931]: 2026-02-28 10:36:39.955360066 +0000 UTC m=+0.066016371 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 28 10:36:39 compute-0 nova_compute[243452]: 2026-02-28 10:36:39.986 243456 DEBUG oslo_concurrency.lockutils [req-e1a742dd-ef0a-4317-be5a-3de2d95efb2c req-d556379a-a944-4d95-9e53-9e8414e14000 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:39 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : New worker (361973) forked
Feb 28 10:36:39 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : Loading success.
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ffaef000-523c-4637-99e6-2cc96b907c15 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.047 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.058 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e37940ca-b61e-4f0b-bc85-f9f397bb0275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.060 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf5aa4f8-b1 in ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.062 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf5aa4f8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.062 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb053ed1-510d-4c84-940d-6acf7d924f02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.063 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51980c19-1190-4a69-a4e7-da8eb2409473]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.079 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cf1234-a7f4-43ef-98e9-644384609851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 podman[361930]: 2026-02-28 10:36:40.090035001 +0000 UTC m=+0.185738842 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.097 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c8459d1c-c3f1-4548-87ea-487ac8d2ef4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.103 243456 DEBUG nova.compute.manager [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.104 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.104 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.105 243456 DEBUG oslo_concurrency.lockutils [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.105 243456 DEBUG nova.compute.manager [req-ec6f2d17-f8c9-43bc-9b7d-e27050a3d209 req-7dbf5d53-523a-488f-851a-740001ef3fef 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Processing event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.124 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[901dbb42-4884-4a84-ac68-ef6ee260c9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.131 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eefa1276-7799-4728-8437-201d418c6afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 NetworkManager[49805]: <info>  [1772275000.1326] manager: (tapbf5aa4f8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Feb 28 10:36:40 compute-0 systemd-udevd[361837]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.165 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[53fe636d-ae01-4a00-b429-f7e1760d142f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.169 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca5b4f5-af67-4ad3-8eda-3286167f67b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 NetworkManager[49805]: <info>  [1772275000.1923] device (tapbf5aa4f8-b0): carrier: link connected
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.196 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[56c44b6d-b8a3-4291-a781-f545999d195f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.218 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e16f2c94-e210-4793-b446-88d9c2f8b9d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362003, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[faca2518-2dbb-4d21-b83b-ee5ab6aa39d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:9f67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650747, 'tstamp': 650747}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362004, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3345d26b-0181-4048-b940-0f2e01bf5aeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 24980, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362005, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bffa9cd-a8b8-472f-8957-1ff05d92a962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.340 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd7abcc-7abd-4f7f-aedd-b76cbe71fc96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.344 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.345 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:40 compute-0 kernel: tapbf5aa4f8-b0: entered promiscuous mode
Feb 28 10:36:40 compute-0 NetworkManager[49805]: <info>  [1772275000.3490] manager: (tapbf5aa4f8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:40 compute-0 ovn_controller[146846]: 2026-02-28T10:36:40Z|01360|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.358 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae55112-6912-44ad-ab60-a83805d05303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.360 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.pid.haproxy
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:36:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:40.363 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'env', 'PROCESS_TAG=haproxy-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf5aa4f8-b85b-496f-a7ad-1ab36250968c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.365 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:40 compute-0 ceph-mon[76304]: pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Feb 28 10:36:40 compute-0 nova_compute[243452]: 2026-02-28 10:36:40.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:40 compute-0 podman[362035]: 2026-02-28 10:36:40.80237817 +0000 UTC m=+0.072754372 container create 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:36:40 compute-0 systemd[1]: Started libpod-conmon-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope.
Feb 28 10:36:40 compute-0 podman[362035]: 2026-02-28 10:36:40.776119596 +0000 UTC m=+0.046495848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:36:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Feb 28 10:36:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8955cce884765a04034c460f9c74e35c1e8bf35ef5a4315568d401093bba957d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:40 compute-0 podman[362035]: 2026-02-28 10:36:40.899408549 +0000 UTC m=+0.169784791 container init 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:36:40 compute-0 podman[362035]: 2026-02-28 10:36:40.908104325 +0000 UTC m=+0.178480547 container start 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:36:40 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : New worker (362056) forked
Feb 28 10:36:40 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : Loading success.
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007083475290894501 of space, bias 1.0, pg target 0.21250425872683504 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024938552168622384 of space, bias 1.0, pg target 0.7481565650586715 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.326593465495173e-07 of space, bias 4.0, pg target 0.0008791912158594207 quantized to 16 (current 16)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:36:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.191 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.192 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.193 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.194 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.195 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No event matching network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 in dict_keys([('network-vif-plugged', 'ffaef000-523c-4637-99e6-2cc96b907c15')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.195 243456 WARNING nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with vm_state building and task_state spawning.
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.196 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.197 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.197 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.198 243456 DEBUG oslo_concurrency.lockutils [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.199 243456 DEBUG nova.compute.manager [req-d48d6969-0155-4ddf-a6dc-5041f4be3e75 req-6e2f26f3-baa3-4b0f-8526-0a886f2a6ade 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Processing event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.200 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.207 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275002.2058234, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.207 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Resumed (Lifecycle Event)
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.212 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.217 243456 INFO nova.virt.libvirt.driver [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance spawned successfully.
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.218 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.238 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.245 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.249 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.250 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.250 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.251 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.252 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.252 243456 DEBUG nova.virt.libvirt.driver [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.291 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.335 243456 INFO nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 14.38 seconds to spawn the instance on the hypervisor.
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.336 243456 DEBUG nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.418 243456 INFO nova.compute.manager [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 15.43 seconds to build instance.
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.444 243456 DEBUG oslo_concurrency.lockutils [None req-5cca5d90-cca4-45c2-8906-71170081b331 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:42 compute-0 ceph-mon[76304]: pgmap v2170: 305 pgs: 305 active+clean; 246 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Feb 28 10:36:42 compute-0 NetworkManager[49805]: <info>  [1772275002.5983] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Feb 28 10:36:42 compute-0 NetworkManager[49805]: <info>  [1772275002.5997] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01361|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01362|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01363|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.611 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01364|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01365|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 ovn_controller[146846]: 2026-02-28T10:36:42Z|01366|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:42 compute-0 nova_compute[243452]: 2026-02-28 10:36:42.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 95 op/s
Feb 28 10:36:43 compute-0 nova_compute[243452]: 2026-02-28 10:36:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:43 compute-0 sudo[362066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:36:43 compute-0 sudo[362066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:43 compute-0 sudo[362066]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:43 compute-0 sudo[362091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:36:43 compute-0 sudo[362091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:44 compute-0 sudo[362091]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:36:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:36:44 compute-0 sudo[362148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:36:44 compute-0 sudo[362148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:44 compute-0 sudo[362148]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:44 compute-0 sudo[362173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:36:44 compute-0 sudo[362173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.370 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 WARNING nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state active and task_state None.
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.compute.manager [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.371 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:36:44 compute-0 nova_compute[243452]: 2026-02-28 10:36:44.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:44 compute-0 ceph-mon[76304]: pgmap v2171: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 166 KiB/s wr, 95 op/s
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:36:44 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.578847355 +0000 UTC m=+0.041253198 container create c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:36:44 compute-0 systemd[1]: Started libpod-conmon-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope.
Feb 28 10:36:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.563303556 +0000 UTC m=+0.025709419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.66161807 +0000 UTC m=+0.124023943 container init c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.67042658 +0000 UTC m=+0.132832413 container start c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:36:44 compute-0 thirsty_bassi[362227]: 167 167
Feb 28 10:36:44 compute-0 systemd[1]: libpod-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope: Deactivated successfully.
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.676397819 +0000 UTC m=+0.138803682 container attach c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.676771809 +0000 UTC m=+0.139177652 container died c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8812063b7af6f60afa17b57ffffec7ae260b30ec26466bd033bf4064fc37bb4c-merged.mount: Deactivated successfully.
Feb 28 10:36:44 compute-0 podman[362210]: 2026-02-28 10:36:44.714753975 +0000 UTC m=+0.177159818 container remove c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:36:44 compute-0 systemd[1]: libpod-conmon-c50e256efd41f15ce8164cf7b5a625337a938052ca12d002824febc7ab7a7365.scope: Deactivated successfully.
Feb 28 10:36:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 257 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 812 KiB/s wr, 118 op/s
Feb 28 10:36:44 compute-0 podman[362250]: 2026-02-28 10:36:44.887084377 +0000 UTC m=+0.053569159 container create 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:36:44 compute-0 systemd[1]: Started libpod-conmon-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope.
Feb 28 10:36:44 compute-0 podman[362250]: 2026-02-28 10:36:44.859905027 +0000 UTC m=+0.026389889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:45 compute-0 ovn_controller[146846]: 2026-02-28T10:36:45Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:60:12 10.100.0.5
Feb 28 10:36:45 compute-0 ovn_controller[146846]: 2026-02-28T10:36:45Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:60:12 10.100.0.5
Feb 28 10:36:45 compute-0 podman[362250]: 2026-02-28 10:36:45.032096035 +0000 UTC m=+0.198580857 container init 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:36:45 compute-0 podman[362250]: 2026-02-28 10:36:45.043841477 +0000 UTC m=+0.210326259 container start 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:36:45 compute-0 podman[362250]: 2026-02-28 10:36:45.047641355 +0000 UTC m=+0.214126147 container attach 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:36:45 compute-0 beautiful_proskuriakova[362266]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:36:45 compute-0 beautiful_proskuriakova[362266]: --> All data devices are unavailable
Feb 28 10:36:45 compute-0 systemd[1]: libpod-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope: Deactivated successfully.
Feb 28 10:36:45 compute-0 conmon[362266]: conmon 82b516bb37eb8b57ab60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope/container/memory.events
Feb 28 10:36:45 compute-0 podman[362250]: 2026-02-28 10:36:45.552892638 +0000 UTC m=+0.719377440 container died 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:36:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:36:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:36:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:36:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-394ee0e72795abab66400a867e8985a92bd0200423acc3618c7b3953b41ebc75-merged.mount: Deactivated successfully.
Feb 28 10:36:45 compute-0 podman[362250]: 2026-02-28 10:36:45.61158377 +0000 UTC m=+0.778068582 container remove 82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:36:45 compute-0 systemd[1]: libpod-conmon-82b516bb37eb8b57ab60a6e6a3846c6dbd3b83d30eb874672d5375b8eaea7549.scope: Deactivated successfully.
Feb 28 10:36:45 compute-0 sudo[362173]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:45 compute-0 nova_compute[243452]: 2026-02-28 10:36:45.736 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:45 compute-0 sudo[362296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:36:45 compute-0 sudo[362296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:45 compute-0 sudo[362296]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:45 compute-0 sudo[362321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:36:45 compute-0 sudo[362321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.225922749 +0000 UTC m=+0.036572958 container create fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:36:46 compute-0 systemd[1]: Started libpod-conmon-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope.
Feb 28 10:36:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.209762761 +0000 UTC m=+0.020412970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.322294873 +0000 UTC m=+0.132945122 container init fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.32928113 +0000 UTC m=+0.139931329 container start fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.332546203 +0000 UTC m=+0.143196472 container attach fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:36:46 compute-0 suspicious_ellis[362374]: 167 167
Feb 28 10:36:46 compute-0 systemd[1]: libpod-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope: Deactivated successfully.
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.335329182 +0000 UTC m=+0.145979461 container died fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f24a719683a8b277e9ec87d58837ed56194773a29600cb6e57b178e70908fa-merged.mount: Deactivated successfully.
Feb 28 10:36:46 compute-0 podman[362358]: 2026-02-28 10:36:46.368769119 +0000 UTC m=+0.179419328 container remove fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:36:46 compute-0 systemd[1]: libpod-conmon-fac68737be5178c934b5e58ed09e39532405861cd88b1b9594bc83c82c9a017b.scope: Deactivated successfully.
Feb 28 10:36:46 compute-0 podman[362399]: 2026-02-28 10:36:46.530604433 +0000 UTC m=+0.048800453 container create 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:36:46 compute-0 ceph-mon[76304]: pgmap v2172: 305 pgs: 305 active+clean; 257 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 812 KiB/s wr, 118 op/s
Feb 28 10:36:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:36:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/660430431' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:36:46 compute-0 systemd[1]: Started libpod-conmon-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope.
Feb 28 10:36:46 compute-0 podman[362399]: 2026-02-28 10:36:46.505550844 +0000 UTC m=+0.023746874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:46 compute-0 podman[362399]: 2026-02-28 10:36:46.645510428 +0000 UTC m=+0.163706518 container init 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:36:46 compute-0 podman[362399]: 2026-02-28 10:36:46.65969752 +0000 UTC m=+0.177893540 container start 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:36:46 compute-0 podman[362399]: 2026-02-28 10:36:46.663407285 +0000 UTC m=+0.181603305 container attach 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:36:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 257 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 995 KiB/s wr, 144 op/s
Feb 28 10:36:46 compute-0 frosty_hugle[362415]: {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     "0": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "devices": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "/dev/loop3"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             ],
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_name": "ceph_lv0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_size": "21470642176",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "name": "ceph_lv0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "tags": {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_name": "ceph",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.crush_device_class": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.encrypted": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.objectstore": "bluestore",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_id": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.vdo": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.with_tpm": "0"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             },
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "vg_name": "ceph_vg0"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         }
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     ],
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     "1": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "devices": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "/dev/loop4"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             ],
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_name": "ceph_lv1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_size": "21470642176",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "name": "ceph_lv1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "tags": {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_name": "ceph",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.crush_device_class": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.encrypted": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.objectstore": "bluestore",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_id": "1",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.vdo": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.with_tpm": "0"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             },
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "vg_name": "ceph_vg1"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         }
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     ],
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     "2": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "devices": [
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "/dev/loop5"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             ],
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_name": "ceph_lv2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_size": "21470642176",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "name": "ceph_lv2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "tags": {
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.cluster_name": "ceph",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.crush_device_class": "",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.encrypted": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.objectstore": "bluestore",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osd_id": "2",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.vdo": "0",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:                 "ceph.with_tpm": "0"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             },
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "type": "block",
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:             "vg_name": "ceph_vg2"
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:         }
Feb 28 10:36:46 compute-0 frosty_hugle[362415]:     ]
Feb 28 10:36:46 compute-0 frosty_hugle[362415]: }
Feb 28 10:36:46 compute-0 systemd[1]: libpod-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope: Deactivated successfully.
Feb 28 10:36:47 compute-0 podman[362399]: 2026-02-28 10:36:47.000593837 +0000 UTC m=+0.518789867 container died 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-94d3cc99809b7240e969b1a592d467d177d4a334816d094e43876872043d6002-merged.mount: Deactivated successfully.
Feb 28 10:36:47 compute-0 podman[362399]: 2026-02-28 10:36:47.043706888 +0000 UTC m=+0.561902908 container remove 2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:36:47 compute-0 systemd[1]: libpod-conmon-2915a448859820274a985cc70dc47648427233558ae470a2980c7f286c387d08.scope: Deactivated successfully.
Feb 28 10:36:47 compute-0 sudo[362321]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:47 compute-0 sudo[362436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:36:47 compute-0 sudo[362436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:47 compute-0 sudo[362436]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:47 compute-0 sudo[362461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:36:47 compute-0 sudo[362461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.351 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.516 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.518 243456 DEBUG nova.network.neutron [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.545 243456 DEBUG oslo_concurrency.lockutils [req-532b790c-4f45-447c-a6d8-3b97823e3ddf req-254629e5-4aa8-4da9-8b41-622626eaa536 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.559190731 +0000 UTC m=+0.060474885 container create fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:36:47 compute-0 systemd[1]: Started libpod-conmon-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope.
Feb 28 10:36:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.540900583 +0000 UTC m=+0.042184747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.647565904 +0000 UTC m=+0.148850078 container init fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.655640553 +0000 UTC m=+0.156924707 container start fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.660024427 +0000 UTC m=+0.161308601 container attach fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:36:47 compute-0 epic_banach[362532]: 167 167
Feb 28 10:36:47 compute-0 systemd[1]: libpod-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope: Deactivated successfully.
Feb 28 10:36:47 compute-0 conmon[362532]: conmon fed6c2ba7a05a4944188 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope/container/memory.events
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.664273917 +0000 UTC m=+0.165558071 container died fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f460d5b8aa3b861a4d7793eed5340ef4aeabc52441a424dfa1b881c16313889-merged.mount: Deactivated successfully.
Feb 28 10:36:47 compute-0 podman[362499]: 2026-02-28 10:36:47.703711034 +0000 UTC m=+0.204995188 container remove fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:36:47 compute-0 systemd[1]: libpod-conmon-fed6c2ba7a05a4944188f1f222e5a727e929c866a3cb2f7fb8e91b30bb6f1399.scope: Deactivated successfully.
Feb 28 10:36:47 compute-0 podman[362559]: 2026-02-28 10:36:47.887513541 +0000 UTC m=+0.057230512 container create 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:36:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:36:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454418771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:47 compute-0 systemd[1]: Started libpod-conmon-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope.
Feb 28 10:36:47 compute-0 podman[362559]: 2026-02-28 10:36:47.862660657 +0000 UTC m=+0.032377668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:36:47 compute-0 nova_compute[243452]: 2026-02-28 10:36:47.963 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:36:47 compute-0 podman[362559]: 2026-02-28 10:36:47.998388452 +0000 UTC m=+0.168105453 container init 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:36:48 compute-0 podman[362559]: 2026-02-28 10:36:48.005082752 +0000 UTC m=+0.174799703 container start 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:36:48 compute-0 podman[362559]: 2026-02-28 10:36:48.008362734 +0000 UTC m=+0.178079695 container attach 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.078 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.079 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.082 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.088 243456 DEBUG nova.compute.manager [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.088 243456 DEBUG nova.compute.manager [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.089 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.306 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.934458611533046GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.307 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.480 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.482 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.483 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:36:48 compute-0 ceph-mon[76304]: pgmap v2173: 305 pgs: 305 active+clean; 257 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 995 KiB/s wr, 144 op/s
Feb 28 10:36:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1454418771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:48 compute-0 nova_compute[243452]: 2026-02-28 10:36:48.633 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:36:48 compute-0 lvm[362661]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:36:48 compute-0 lvm[362661]: VG ceph_vg1 finished
Feb 28 10:36:48 compute-0 lvm[362659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:36:48 compute-0 lvm[362659]: VG ceph_vg0 finished
Feb 28 10:36:48 compute-0 lvm[362672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:36:48 compute-0 lvm[362672]: VG ceph_vg2 finished
Feb 28 10:36:48 compute-0 nostalgic_joliot[362578]: {}
Feb 28 10:36:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:36:48 compute-0 systemd[1]: libpod-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Deactivated successfully.
Feb 28 10:36:48 compute-0 systemd[1]: libpod-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Consumed 1.291s CPU time.
Feb 28 10:36:48 compute-0 podman[362559]: 2026-02-28 10:36:48.886135388 +0000 UTC m=+1.055852379 container died 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c122b183770727355e1bf59fc73b719f6aa21d40e479a5b8cef672dfbbd59c1-merged.mount: Deactivated successfully.
Feb 28 10:36:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:48 compute-0 podman[362559]: 2026-02-28 10:36:48.948272979 +0000 UTC m=+1.117989940 container remove 970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:36:48 compute-0 systemd[1]: libpod-conmon-970d9c6dbc94530188e5f42f6a33a41c24b3b1bdc2db96c85f37f541b1c0e4a5.scope: Deactivated successfully.
Feb 28 10:36:48 compute-0 sudo[362461]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:36:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:36:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:49 compute-0 sudo[362698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:36:49 compute-0 sudo[362698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:36:49 compute-0 sudo[362698]: pam_unix(sudo:session): session closed for user root
Feb 28 10:36:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:36:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807439814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.200 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.207 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.225 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.254 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.255 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.442 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.443 243456 DEBUG nova.network.neutron [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:49 compute-0 nova_compute[243452]: 2026-02-28 10:36:49.473 243456 DEBUG oslo_concurrency.lockutils [req-350e5fdd-71d5-42dd-9084-61137c849cd5 req-710be096-1c02-46c9-9563-436f88d84c3c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:36:50 compute-0 ceph-mon[76304]: pgmap v2174: 305 pgs: 305 active+clean; 271 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:36:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:36:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3807439814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.260 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.261 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.326 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.424 243456 INFO nova.compute.manager [None req-f9a6f329-827d-4ee7-9613-8890c5cac900 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Get console output
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.434 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:36:50 compute-0 nova_compute[243452]: 2026-02-28 10:36:50.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:36:52 compute-0 ceph-mon[76304]: pgmap v2175: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Feb 28 10:36:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Feb 28 10:36:53 compute-0 nova_compute[243452]: 2026-02-28 10:36:53.653 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:53 compute-0 nova_compute[243452]: 2026-02-28 10:36:53.654 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:53 compute-0 nova_compute[243452]: 2026-02-28 10:36:53.654 243456 DEBUG nova.objects.instance [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:53 compute-0 ovn_controller[146846]: 2026-02-28T10:36:53Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 10:36:53 compute-0 ovn_controller[146846]: 2026-02-28T10:36:53Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:2a:81 10.100.0.13
Feb 28 10:36:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:54 compute-0 ceph-mon[76304]: pgmap v2176: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Feb 28 10:36:54 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 10:36:54 compute-0 nova_compute[243452]: 2026-02-28 10:36:54.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:36:54 compute-0 nova_compute[243452]: 2026-02-28 10:36:54.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 288 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 28 10:36:54 compute-0 nova_compute[243452]: 2026-02-28 10:36:54.926 243456 DEBUG nova.objects.instance [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_requests' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:36:54 compute-0 nova_compute[243452]: 2026-02-28 10:36:54.945 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:36:55 compute-0 nova_compute[243452]: 2026-02-28 10:36:55.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:36:56 compute-0 nova_compute[243452]: 2026-02-28 10:36:56.000 243456 DEBUG nova.policy [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:36:56 compute-0 ceph-mon[76304]: pgmap v2177: 305 pgs: 305 active+clean; 288 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 28 10:36:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 304 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.2 MiB/s wr, 122 op/s
Feb 28 10:36:56 compute-0 nova_compute[243452]: 2026-02-28 10:36:56.925 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully created port: 784aca10-13b2-42d5-9828-68914533af46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:36:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:36:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:36:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:36:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:36:58 compute-0 ceph-mon[76304]: pgmap v2178: 305 pgs: 305 active+clean; 304 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.2 MiB/s wr, 122 op/s
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.230 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Successfully updated port: 784aca10-13b2-42d5-9828-68914533af46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.244 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.245 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.245 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.331 243456 DEBUG nova.compute.manager [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.331 243456 DEBUG nova.compute.manager [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-784aca10-13b2-42d5-9828-68914533af46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:36:58 compute-0 nova_compute[243452]: 2026-02-28 10:36:58.332 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:36:58 compute-0 sshd-session[362725]: Received disconnect from 103.67.78.132 port 48164:11: Bye Bye [preauth]
Feb 28 10:36:58 compute-0 sshd-session[362725]: Disconnected from authenticating user root 103.67.78.132 port 48164 [preauth]
Feb 28 10:36:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.3 MiB/s wr, 105 op/s
Feb 28 10:36:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:36:59 compute-0 nova_compute[243452]: 2026-02-28 10:36:59.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 ceph-mon[76304]: pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.3 MiB/s wr, 105 op/s
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.155 243456 DEBUG nova.network.neutron [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.182 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.184 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.184 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 784aca10-13b2-42d5-9828-68914533af46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.189 243456 DEBUG nova.virt.libvirt.vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.189 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.191 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.191 243456 DEBUG os_vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.193 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.194 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.197 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.197 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap784aca10-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.198 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap784aca10-13, col_values=(('external_ids', {'iface-id': '784aca10-13b2-42d5-9828-68914533af46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:24:67', 'vm-uuid': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.2017] manager: (tap784aca10-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.207 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.213 243456 INFO os_vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.215 243456 DEBUG nova.virt.libvirt.vif [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.215 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.217 243456 DEBUG nova.network.os_vif_util [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.221 243456 DEBUG nova.virt.libvirt.guest [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] attach device xml: <interface type="ethernet">
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6a:24:67"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <target dev="tap784aca10-13"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]: </interface>
Feb 28 10:37:00 compute-0 nova_compute[243452]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.2372] manager: (tap784aca10-13): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Feb 28 10:37:00 compute-0 kernel: tap784aca10-13: entered promiscuous mode
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.240 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 ovn_controller[146846]: 2026-02-28T10:37:00Z|01367|binding|INFO|Claiming lport 784aca10-13b2-42d5-9828-68914533af46 for this chassis.
Feb 28 10:37:00 compute-0 ovn_controller[146846]: 2026-02-28T10:37:00Z|01368|binding|INFO|784aca10-13b2-42d5-9828-68914533af46: Claiming fa:16:3e:6a:24:67 10.100.0.20
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.254 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:24:67 10.100.0.20'], port_security=['fa:16:3e:6a:24:67 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=784aca10-13b2-42d5-9828-68914533af46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.256 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 784aca10-13b2-42d5-9828-68914533af46 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 bound to our chassis
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 10:37:00 compute-0 ovn_controller[146846]: 2026-02-28T10:37:00Z|01369|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 ovn-installed in OVS
Feb 28 10:37:00 compute-0 ovn_controller[146846]: 2026-02-28T10:37:00Z|01370|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 up in Southbound
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 systemd-udevd[362733]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.276 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00c5e8-8599-444b-bd3b-86368c05dee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.277 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9fbbf27e-01 in ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.279 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9fbbf27e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c42da96c-cf10-4b19-8436-41aa1c5fbb47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.281 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8c122c-653c-4e8b-8abf-6289da949d9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.2890] device (tap784aca10-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.2898] device (tap784aca10-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.300 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd91fe58-cf17-4812-ba55-4489bb8f7fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.321 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6005828c-294c-43a8-b5b4-789cfe4ba3ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.333 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.334 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.334 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:e3:60:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.335 243456 DEBUG nova.virt.libvirt.driver [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:6a:24:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6f75d9-e259-4a5e-85bb-df181ed38d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 systemd-udevd[362737]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.3602] manager: (tap9fbbf27e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.359 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48ae886a-63f7-4046-919c-671e6e2530fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.366 243456 DEBUG nova.virt.libvirt.guest [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:00 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 10:37:00 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 10:37:00 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:00 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:37:00 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:37:00 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.391 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5418d987-0901-4c19-9cbb-28b658d70244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.395 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a25b914d-86a6-4c46-82a8-df4733d851bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.4182] device (tap9fbbf27e-00): carrier: link connected
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.421 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c21e2053-c327-4fc8-9d8c-ffdfa49f0a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.423 243456 DEBUG oslo_concurrency.lockutils [None req-1d2203ab-c1a8-4cf2-89ef-dfea670c8d29 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.439 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc85ea7a-10a3-41e8-8a40-7eba656e9d7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362760, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.458 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93488ba5-5622-45b7-8ea2-c42f9f7e0b6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:56da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652770, 'tstamp': 652770}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362761, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.479 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8ec5dc-cb07-43da-bdac-d9821871309f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362762, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e538215-274b-4333-ac14-e417dab81c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.577 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94f213e3-5025-42dc-905e-6decea640682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.580 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.581 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.583 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 kernel: tap9fbbf27e-00: entered promiscuous mode
Feb 28 10:37:00 compute-0 NetworkManager[49805]: <info>  [1772275020.5848] manager: (tap9fbbf27e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.590 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:00 compute-0 ovn_controller[146846]: 2026-02-28T10:37:00Z|01371|binding|INFO|Releasing lport 5b7b2456-d9c3-4443-8d4c-51b3db80db5c from this chassis (sb_readonly=0)
Feb 28 10:37:00 compute-0 nova_compute[243452]: 2026-02-28 10:37:00.602 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.604 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.605 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3541856-976f-47b0-a7df-acc1e60b4b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.606 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.pid.haproxy
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:37:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:00.606 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'env', 'PROCESS_TAG=haproxy-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9fbbf27e-04ea-4f06-b1ef-3ac4b98db361.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:37:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Feb 28 10:37:01 compute-0 podman[362794]: 2026-02-28 10:37:01.019365657 +0000 UTC m=+0.061173994 container create cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:37:01 compute-0 systemd[1]: Started libpod-conmon-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope.
Feb 28 10:37:01 compute-0 podman[362794]: 2026-02-28 10:37:00.992119315 +0000 UTC m=+0.033927662 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.096 243456 DEBUG nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.097 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.098 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.098 243456 DEBUG oslo_concurrency.lockutils [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.099 243456 DEBUG nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.100 243456 WARNING nova.compute.manager [req-cc886b45-5e1a-40c4-be19-d80b2624fd39 req-a47b396f-d0d6-42d8-9217-fbdf635c153d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.
Feb 28 10:37:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6de54db0465fd2bcf02f33b89700b7eb0c2eaa14d4e03df2211f39fd0f3421/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:01 compute-0 podman[362794]: 2026-02-28 10:37:01.133427638 +0000 UTC m=+0.175236045 container init cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:37:01 compute-0 podman[362794]: 2026-02-28 10:37:01.14162518 +0000 UTC m=+0.183433547 container start cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:37:01 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : New worker (362816) forked
Feb 28 10:37:01 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : Loading success.
Feb 28 10:37:01 compute-0 ovn_controller[146846]: 2026-02-28T10:37:01Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:24:67 10.100.0.20
Feb 28 10:37:01 compute-0 ovn_controller[146846]: 2026-02-28T10:37:01Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:24:67 10.100.0.20
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.951 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 784aca10-13b2-42d5-9828-68914533af46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.952 243456 DEBUG nova.network.neutron [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:01 compute-0 nova_compute[243452]: 2026-02-28 10:37:01.977 243456 DEBUG oslo_concurrency.lockutils [req-0ae1ad0a-dc6b-4e16-b6f0-59090e980425 req-bc20593f-e4b2-41b3-8a15-1212a92479d9 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:02 compute-0 ceph-mon[76304]: pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Feb 28 10:37:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.425 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.470 243456 DEBUG nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.470 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG oslo_concurrency.lockutils [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 DEBUG nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:03 compute-0 nova_compute[243452]: 2026-02-28 10:37:03.471 243456 WARNING nova.compute.manager [req-194299ad-d224-4b92-9801-8694efc3c6b8 req-6532dc58-dec4-4f2a-a2f0-17cd05e88787 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.
Feb 28 10:37:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:04 compute-0 ceph-mon[76304]: pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Feb 28 10:37:04 compute-0 nova_compute[243452]: 2026-02-28 10:37:04.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:37:05 compute-0 nova_compute[243452]: 2026-02-28 10:37:05.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:05 compute-0 nova_compute[243452]: 2026-02-28 10:37:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:05 compute-0 nova_compute[243452]: 2026-02-28 10:37:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:37:06 compute-0 ceph-mon[76304]: pgmap v2182: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:37:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.6 MiB/s wr, 45 op/s
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.553 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.554 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.628 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.848 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.850 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.860 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:37:07 compute-0 nova_compute[243452]: 2026-02-28 10:37:07.861 243456 INFO nova.compute.claims [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:37:08 compute-0 ceph-mon[76304]: pgmap v2183: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.6 MiB/s wr, 45 op/s
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.150 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069755206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.739 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.747 243456 DEBUG nova.compute.provider_tree [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.766 243456 DEBUG nova.scheduler.client.report [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.786 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.788 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.847 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.848 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.872 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:37:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 349 KiB/s wr, 28 op/s
Feb 28 10:37:08 compute-0 nova_compute[243452]: 2026-02-28 10:37:08.893 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:37:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.003 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.006 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.008 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating image(s)
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.034 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.071 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.103 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.109 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3069755206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.157 243456 DEBUG nova.policy [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.171 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.172 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.189 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.207 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.208 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.209 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.209 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.238 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.244 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.317 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.318 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.325 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.326 243456 INFO nova.compute.claims [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.510 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.549 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.642 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.762 243456 DEBUG nova.objects.instance [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.782 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Ensure instance console log exists: /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.784 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:09 compute-0 nova_compute[243452]: 2026-02-28 10:37:09.846 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully created port: 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:37:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423731554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:10 compute-0 ceph-mon[76304]: pgmap v2184: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 349 KiB/s wr, 28 op/s
Feb 28 10:37:10 compute-0 podman[363033]: 2026-02-28 10:37:10.173838301 +0000 UTC m=+0.102560916 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.201 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.205 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.209 243456 DEBUG nova.compute.provider_tree [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.229 243456 DEBUG nova.scheduler.client.report [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.254 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.255 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:37:10 compute-0 podman[363052]: 2026-02-28 10:37:10.269176552 +0000 UTC m=+0.091063421 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.320 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.321 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.344 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.368 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.471 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.474 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.474 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating image(s)
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.498 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.520 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.544 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.548 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.603 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.604 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.605 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.605 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.633 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.638 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbf94077-46c2-457d-8486-25f3dd0517b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 459 KiB/s wr, 23 op/s
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.911 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 cbf94077-46c2-457d-8486-25f3dd0517b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.986 243456 DEBUG nova.policy [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:37:10 compute-0 nova_compute[243452]: 2026-02-28 10:37:10.995 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.072 243456 DEBUG nova.objects.instance [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.089 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Ensure instance console log exists: /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.090 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.091 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1423731554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:11 compute-0 nova_compute[243452]: 2026-02-28 10:37:11.393 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully created port: fbc01aec-00f6-4ce9-960c-352295a6c18e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:37:12 compute-0 ceph-mon[76304]: pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 459 KiB/s wr, 23 op/s
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.278 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Successfully created port: 282fa143-1175-40e2-9ab8-2d2b012d5b78 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.684 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully updated port: 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG nova.compute.manager [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG nova.compute.manager [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.807 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.808 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.808 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 MiB/s wr, 28 op/s
Feb 28 10:37:12 compute-0 nova_compute[243452]: 2026-02-28 10:37:12.964 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.362 243456 DEBUG nova.network.neutron [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.376 243456 DEBUG oslo_concurrency.lockutils [req-bdc03138-e4ea-4022-bb62-aba6ab21335f req-f193a12e-107f-4964-a6db-1d87fc24fde4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.636 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Successfully updated port: 282fa143-1175-40e2-9ab8-2d2b012d5b78 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.665 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.666 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.666 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.691 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Successfully updated port: fbc01aec-00f6-4ce9-960c-352295a6c18e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.708 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.708 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.709 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.857 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:37:13 compute-0 nova_compute[243452]: 2026-02-28 10:37:13.906 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:37:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:14 compute-0 ceph-mon[76304]: pgmap v2186: 305 pgs: 305 active+clean; 376 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 MiB/s wr, 28 op/s
Feb 28 10:37:14 compute-0 nova_compute[243452]: 2026-02-28 10:37:14.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:14 compute-0 nova_compute[243452]: 2026-02-28 10:37:14.885 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-changed-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:14 compute-0 nova_compute[243452]: 2026-02-28 10:37:14.886 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Refreshing instance network info cache due to event network-changed-282fa143-1175-40e2-9ab8-2d2b012d5b78. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:14 compute-0 nova_compute[243452]: 2026-02-28 10:37:14.886 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 387 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 MiB/s wr, 41 op/s
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.208 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.382 243456 DEBUG nova.network.neutron [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.404 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.405 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance network_info: |[{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.406 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.407 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Refreshing network info cache for port 282fa143-1175-40e2-9ab8-2d2b012d5b78 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.413 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start _get_guest_xml network_info=[{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.421 243456 WARNING nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.426 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.427 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.436 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.437 243456 DEBUG nova.virt.libvirt.host [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.438 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.438 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.439 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.440 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.441 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.441 243456 DEBUG nova.virt.hardware [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:37:15 compute-0 nova_compute[243452]: 2026-02-28 10:37:15.444 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:37:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451173864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.046 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.081 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.087 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:16 compute-0 ceph-mon[76304]: pgmap v2187: 305 pgs: 305 active+clean; 387 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 MiB/s wr, 41 op/s
Feb 28 10:37:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3451173864' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:37:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430262589' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.669 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.672 243456 DEBUG nova.virt.libvirt.vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:10Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.672 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.674 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.676 243456 DEBUG nova.objects.instance [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.699 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <uuid>cbf94077-46c2-457d-8486-25f3dd0517b9</uuid>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <name>instance-00000087</name>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1336250417</nova:name>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:37:15</nova:creationTime>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <nova:port uuid="282fa143-1175-40e2-9ab8-2d2b012d5b78">
Feb 28 10:37:16 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <system>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="serial">cbf94077-46c2-457d-8486-25f3dd0517b9</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="uuid">cbf94077-46c2-457d-8486-25f3dd0517b9</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </system>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <os>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </os>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <features>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </features>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cbf94077-46c2-457d-8486-25f3dd0517b9_disk">
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config">
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:37:16 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:65:25:11"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <target dev="tap282fa143-11"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/console.log" append="off"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <video>
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </video>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:37:16 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:37:16 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:37:16 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:37:16 compute-0 nova_compute[243452]: </domain>
Feb 28 10:37:16 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.700 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Preparing to wait for external event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.701 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.701 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.702 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.703 243456 DEBUG nova.virt.libvirt.vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:10Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.703 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.704 243456 DEBUG nova.network.os_vif_util [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.705 243456 DEBUG os_vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.706 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.707 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap282fa143-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.714 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap282fa143-11, col_values=(('external_ids', {'iface-id': '282fa143-1175-40e2-9ab8-2d2b012d5b78', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:25:11', 'vm-uuid': 'cbf94077-46c2-457d-8486-25f3dd0517b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.730 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:16 compute-0 NetworkManager[49805]: <info>  [1772275036.7316] manager: (tap282fa143-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.734 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.741 243456 INFO os_vif [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11')
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.827 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.828 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.828 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:65:25:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.829 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Using config drive
Feb 28 10:37:16 compute-0 nova_compute[243452]: 2026-02-28 10:37:16.867 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Feb 28 10:37:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2430262589' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.239 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Creating config drive at /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.246 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu7lreufj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.392 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu7lreufj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.424 243456 DEBUG nova.storage.rbd_utils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.429 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.616 243456 DEBUG oslo_concurrency.processutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config cbf94077-46c2-457d-8486-25f3dd0517b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.617 243456 INFO nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deleting local config drive /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9/disk.config because it was imported into RBD.
Feb 28 10:37:17 compute-0 NetworkManager[49805]: <info>  [1772275037.6893] manager: (tap282fa143-11): new Tun device (/org/freedesktop/NetworkManager/Devices/574)
Feb 28 10:37:17 compute-0 kernel: tap282fa143-11: entered promiscuous mode
Feb 28 10:37:17 compute-0 ovn_controller[146846]: 2026-02-28T10:37:17Z|01372|binding|INFO|Claiming lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 for this chassis.
Feb 28 10:37:17 compute-0 ovn_controller[146846]: 2026-02-28T10:37:17Z|01373|binding|INFO|282fa143-1175-40e2-9ab8-2d2b012d5b78: Claiming fa:16:3e:65:25:11 10.100.0.28
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.693 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:17 compute-0 ovn_controller[146846]: 2026-02-28T10:37:17Z|01374|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 ovn-installed in OVS
Feb 28 10:37:17 compute-0 ovn_controller[146846]: 2026-02-28T10:37:17Z|01375|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 up in Southbound
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.702 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:25:11 10.100.0.28'], port_security=['fa:16:3e:65:25:11 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'cbf94077-46c2-457d-8486-25f3dd0517b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47e8c685-91d8-4bae-bf96-b1284d3eef68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=282fa143-1175-40e2-9ab8-2d2b012d5b78) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.705 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 282fa143-1175-40e2-9ab8-2d2b012d5b78 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 bound to our chassis
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.706 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 10:37:17 compute-0 systemd-udevd[363380]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7a51ff79-0952-4e75-b17a-a26d19be521b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 NetworkManager[49805]: <info>  [1772275037.7353] device (tap282fa143-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:37:17 compute-0 NetworkManager[49805]: <info>  [1772275037.7367] device (tap282fa143-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:37:17 compute-0 systemd-machined[209480]: New machine qemu-167-instance-00000087.
Feb 28 10:37:17 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.758 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[45c36006-36c6-4d73-bfe6-d53c78ee27f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.762 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0628c4-8570-4c5c-836d-36a98fa8fc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.790 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc4585f-d802-42b4-84e4-6884393f56a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.808 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfd0520-26e4-459c-b2c2-548b9191a168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363392, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c67e13-c195-4a2c-a845-35b66bb4657d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652782, 'tstamp': 652782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363395, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652785, 'tstamp': 652785}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363395, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.834 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:17 compute-0 nova_compute[243452]: 2026-02-28 10:37:17.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.838 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:17 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:17.839 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.025 243456 DEBUG nova.network.neutron [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.033 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updated VIF entry in instance network info cache for port 282fa143-1175-40e2-9ab8-2d2b012d5b78. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.034 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [{"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.046 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.046 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance network_info: |[{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.047 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-cbf94077-46c2-457d-8486-25f3dd0517b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.047 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG nova.compute.manager [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-fbc01aec-00f6-4ce9-960c-352295a6c18e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.048 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port fbc01aec-00f6-4ce9-960c-352295a6c18e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.053 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start _get_guest_xml network_info=[{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.060 243456 WARNING nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.066 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.067 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.070 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.070 243456 DEBUG nova.virt.libvirt.host [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.071 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.071 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.072 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.073 243456 DEBUG nova.virt.hardware [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.077 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.121 243456 DEBUG nova.compute.manager [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.122 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.122 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.123 243456 DEBUG oslo_concurrency.lockutils [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.123 243456 DEBUG nova.compute.manager [req-2f713891-45ad-4025-bb42-e913f6553f72 req-701ea399-f961-411d-91e2-053db1c9bb63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Processing event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.124 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.1241934, cbf94077-46c2-457d-8486-25f3dd0517b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.124 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Started (Lifecycle Event)
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.128 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.133 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.141 243456 INFO nova.virt.libvirt.driver [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance spawned successfully.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.150 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.160 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.187 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.187 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.126763, cbf94077-46c2-457d-8486-25f3dd0517b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.188 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Paused (Lifecycle Event)
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.198 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.199 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.200 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.200 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.201 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.202 243456 DEBUG nova.virt.libvirt.driver [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.214 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:18 compute-0 ceph-mon[76304]: pgmap v2188: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.224 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275038.1318524, cbf94077-46c2-457d-8486-25f3dd0517b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.225 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Resumed (Lifecycle Event)
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.251 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.256 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.266 243456 INFO nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 7.79 seconds to spawn the instance on the hypervisor.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.267 243456 DEBUG nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.276 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.346 243456 INFO nova.compute.manager [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 9.06 seconds to build instance.
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.371 243456 DEBUG oslo_concurrency.lockutils [None req-845a9c30-9dd3-4023-adaf-4fd7f9c0f320 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:37:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917389727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.689 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.726 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:18 compute-0 nova_compute[243452]: 2026-02-28 10:37:18.732 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Feb 28 10:37:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/917389727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:37:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284198875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.434 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.436 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.436 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.437 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.438 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.438 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.439 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.440 243456 DEBUG nova.objects.instance [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.454 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <uuid>ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</uuid>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <name>instance-00000086</name>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-25721937</nova:name>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:37:18</nova:creationTime>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:port uuid="0739db55-7f81-4ed8-a2c2-73ad1bf09084">
Feb 28 10:37:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <nova:port uuid="fbc01aec-00f6-4ce9-960c-352295a6c18e">
Feb 28 10:37:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:febb:cb24" ipVersion="6"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:febb:cb24" ipVersion="6"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="serial">ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="uuid">ae0acd9c-7c2d-4e8b-84a4-d577eff31d02</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk">
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config">
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:37:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b8:ed:5e"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <target dev="tap0739db55-7f"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:bb:cb:24"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <target dev="tapfbc01aec-00"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/console.log" append="off"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:37:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:37:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:37:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:37:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:37:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.459 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Preparing to wait for external event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.459 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Preparing to wait for external event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.460 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.461 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.461 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.462 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.462 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.463 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.463 243456 DEBUG os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.469 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0739db55-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.469 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0739db55-7f, col_values=(('external_ids', {'iface-id': '0739db55-7f81-4ed8-a2c2-73ad1bf09084', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:ed:5e', 'vm-uuid': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.471 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.472 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:19 compute-0 NetworkManager[49805]: <info>  [1772275039.4738] manager: (tap0739db55-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.480 243456 INFO os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f')
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.481 243456 DEBUG nova.virt.libvirt.vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:37:08Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.481 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.482 243456 DEBUG nova.network.os_vif_util [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.482 243456 DEBUG os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.483 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.484 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbc01aec-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.486 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbc01aec-00, col_values=(('external_ids', {'iface-id': 'fbc01aec-00f6-4ce9-960c-352295a6c18e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:cb:24', 'vm-uuid': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.487 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.489 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:19 compute-0 NetworkManager[49805]: <info>  [1772275039.4902] manager: (tapfbc01aec-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.498 243456 INFO os_vif [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00')
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.550 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b8:ed:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.551 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:bb:cb:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.552 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Using config drive
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.573 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.973 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port fbc01aec-00f6-4ce9-960c-352295a6c18e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.975 243456 DEBUG nova.network.neutron [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:19 compute-0 nova_compute[243452]: 2026-02-28 10:37:19.997 243456 DEBUG oslo_concurrency.lockutils [req-728556f8-c022-4110-811f-f0c9ce6acb2b req-0e3c5ec9-b668-4c6a-b542-095d0ebaf3c0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.078 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Creating config drive at /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.085 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8amf3fzn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.177 243456 DEBUG nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.178 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.179 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.179 243456 DEBUG oslo_concurrency.lockutils [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.180 243456 DEBUG nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.181 243456 WARNING nova.compute.manager [req-531aa3d2-bac0-49ba-882c-e5276fd6eb8b req-f4bcd59f-1caa-4a0f-bd51-34f499a9daf6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received unexpected event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with vm_state active and task_state None.
Feb 28 10:37:20 compute-0 ceph-mon[76304]: pgmap v2189: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Feb 28 10:37:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3284198875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.238 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8amf3fzn" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.259 243456 DEBUG nova.storage.rbd_utils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.265 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.557 243456 DEBUG oslo_concurrency.processutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.560 243456 INFO nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deleting local config drive /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02/disk.config because it was imported into RBD.
Feb 28 10:37:20 compute-0 kernel: tap0739db55-7f: entered promiscuous mode
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6178] manager: (tap0739db55-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/577)
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01376|binding|INFO|Claiming lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 for this chassis.
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01377|binding|INFO|0739db55-7f81-4ed8-a2c2-73ad1bf09084: Claiming fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.631 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:ed:5e 10.100.0.9'], port_security=['fa:16:3e:b8:ed:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0739db55-7f81-4ed8-a2c2-73ad1bf09084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.632 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d bound to our chassis
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.633 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6359] device (tap0739db55-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6367] device (tap0739db55-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01378|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 ovn-installed in OVS
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01379|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 up in Southbound
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.643 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 kernel: tapfbc01aec-00: entered promiscuous mode
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6520] manager: (tapfbc01aec-00): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01380|binding|INFO|Claiming lport fbc01aec-00f6-4ce9-960c-352295a6c18e for this chassis.
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01381|binding|INFO|fbc01aec-00f6-4ce9-960c-352295a6c18e: Claiming fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.656 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.657 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f2568481-16cf-4b43-88fc-8c802ef98abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.663 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], port_security=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febb:cb24/64 2001:db8::f816:3eff:febb:cb24/64', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fbc01aec-00f6-4ce9-960c-352295a6c18e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6649] device (tapfbc01aec-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:37:20 compute-0 NetworkManager[49805]: <info>  [1772275040.6658] device (tapfbc01aec-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01382|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e ovn-installed in OVS
Feb 28 10:37:20 compute-0 ovn_controller[146846]: 2026-02-28T10:37:20Z|01383|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e up in Southbound
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.668 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.672 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.690 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bec46e7-d230-4b60-a099-72cd20369ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.697 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ffac8f22-ca58-42ce-ba7f-0f63d3a3cf8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 systemd-machined[209480]: New machine qemu-168-instance-00000086.
Feb 28 10:37:20 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.728 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eed4ff30-c40c-4b76-9072-1c68c2f322e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4d03fe1a-d7b7-495e-8b8d-ded8f5d84d4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 31213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363591, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.766 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4282c692-dc57-4105-a21d-c02f3e823fa7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650664, 'tstamp': 650664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363595, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650667, 'tstamp': 650667}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363595, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.768 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.772 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.773 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fbc01aec-00f6-4ce9-960c-352295a6c18e in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.775 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[758e9416-e228-480a-8f16-93018e62dc08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4984b230-426b-4b05-9827-86070c549aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.833 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[239a87f5-de0f-46cc-93ad-1dfeb3d58b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.861 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[08766900-d6ad-4e88-8407-5445ea0415a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.882 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd169ff-33e1-48e2-bc42-9c105f1bcee0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363602, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 104 op/s
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.903 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d43f3792-8728-4c16-a3c8-9b20f0288bc9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf5aa4f8-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650761, 'tstamp': 650761}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363603, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.905 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 nova_compute[243452]: 2026-02-28 10:37:20.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.911 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:20.912 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.193 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275041.1918066, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.194 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Started (Lifecycle Event)
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.221 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.226 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275041.1922653, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.226 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Paused (Lifecycle Event)
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.244 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.249 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:37:21 compute-0 nova_compute[243452]: 2026-02-28 10:37:21.272 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:37:22 compute-0 ceph-mon[76304]: pgmap v2190: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 104 op/s
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.263 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.264 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.265 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.265 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.266 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Processing event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.266 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.267 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.268 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.268 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.269 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No event matching network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 in dict_keys([('network-vif-plugged', 'fbc01aec-00f6-4ce9-960c-352295a6c18e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.270 243456 WARNING nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with vm_state building and task_state spawning.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.270 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.271 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.272 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.273 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.273 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Processing event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.274 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.275 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.276 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.276 243456 DEBUG oslo_concurrency.lockutils [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.277 243456 DEBUG nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.277 243456 WARNING nova.compute.manager [req-063e85cd-b205-47d9-9e3b-e8bbadf67cf3 req-4d2abaa2-00a9-42ae-bbb9-cebb684903f8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state building and task_state spawning.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.279 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.283 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275042.2831938, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.284 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Resumed (Lifecycle Event)
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.288 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.292 243456 INFO nova.virt.libvirt.driver [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance spawned successfully.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.293 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.313 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.322 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.327 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.328 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.328 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.329 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.329 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.330 243456 DEBUG nova.virt.libvirt.driver [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.337 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.377 243456 INFO nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 13.37 seconds to spawn the instance on the hypervisor.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.378 243456 DEBUG nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.436 243456 INFO nova.compute.manager [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 14.63 seconds to build instance.
Feb 28 10:37:22 compute-0 nova_compute[243452]: 2026-02-28 10:37:22.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a0b11ce-0e4d-4ae2-91fb-6e4df0cd8ee5 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 113 op/s
Feb 28 10:37:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:24 compute-0 ceph-mon[76304]: pgmap v2191: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 113 op/s
Feb 28 10:37:24 compute-0 nova_compute[243452]: 2026-02-28 10:37:24.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:24 compute-0 nova_compute[243452]: 2026-02-28 10:37:24.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Feb 28 10:37:26 compute-0 ceph-mon[76304]: pgmap v2192: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Feb 28 10:37:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 751 KiB/s wr, 116 op/s
Feb 28 10:37:27 compute-0 nova_compute[243452]: 2026-02-28 10:37:27.396 243456 DEBUG nova.compute.manager [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:27 compute-0 nova_compute[243452]: 2026-02-28 10:37:27.397 243456 DEBUG nova.compute.manager [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:27 compute-0 nova_compute[243452]: 2026-02-28 10:37:27.398 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:27 compute-0 nova_compute[243452]: 2026-02-28 10:37:27.399 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:27 compute-0 nova_compute[243452]: 2026-02-28 10:37:27.399 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:28 compute-0 ceph-mon[76304]: pgmap v2193: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 751 KiB/s wr, 116 op/s
Feb 28 10:37:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 149 op/s
Feb 28 10:37:28 compute-0 nova_compute[243452]: 2026-02-28 10:37:28.932 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:28 compute-0 nova_compute[243452]: 2026-02-28 10:37:28.932 243456 DEBUG nova.network.neutron [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:28 compute-0 nova_compute[243452]: 2026-02-28 10:37:28.953 243456 DEBUG oslo_concurrency.lockutils [req-2361a3ec-998a-4694-9a63-b98fcd2388d7 req-608708cc-e891-41ec-8849-a77c4ac0eb9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:37:29
Feb 28 10:37:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:37:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:37:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta']
Feb 28 10:37:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:37:29 compute-0 nova_compute[243452]: 2026-02-28 10:37:29.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:29 compute-0 nova_compute[243452]: 2026-02-28 10:37:29.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:30 compute-0 ceph-mon[76304]: pgmap v2194: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 149 op/s
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:37:30 compute-0 ovn_controller[146846]: 2026-02-28T10:37:30Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:25:11 10.100.0.28
Feb 28 10:37:30 compute-0 ovn_controller[146846]: 2026-02-28T10:37:30Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:25:11 10.100.0.28
Feb 28 10:37:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 413 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Feb 28 10:37:32 compute-0 ceph-mon[76304]: pgmap v2195: 305 pgs: 305 active+clean; 413 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Feb 28 10:37:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Feb 28 10:37:33 compute-0 nova_compute[243452]: 2026-02-28 10:37:33.364 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:34 compute-0 ceph-mon[76304]: pgmap v2196: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Feb 28 10:37:34 compute-0 nova_compute[243452]: 2026-02-28 10:37:34.497 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:34 compute-0 nova_compute[243452]: 2026-02-28 10:37:34.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 434 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Feb 28 10:37:35 compute-0 ovn_controller[146846]: 2026-02-28T10:37:35Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 10:37:35 compute-0 ovn_controller[146846]: 2026-02-28T10:37:35Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:ed:5e 10.100.0.9
Feb 28 10:37:36 compute-0 ceph-mon[76304]: pgmap v2197: 305 pgs: 305 active+clean; 434 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Feb 28 10:37:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Feb 28 10:37:37 compute-0 nova_compute[243452]: 2026-02-28 10:37:37.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:38 compute-0 ceph-mon[76304]: pgmap v2198: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Feb 28 10:37:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 156 op/s
Feb 28 10:37:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:39.171 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:39.175 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.299 243456 DEBUG nova.compute.manager [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.299 243456 DEBUG nova.compute.manager [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-784aca10-13b2-42d5-9828-68914533af46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.300 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 784aca10-13b2-42d5-9828-68914533af46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:39 compute-0 nova_compute[243452]: 2026-02-28 10:37:39.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:40.177 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:40 compute-0 nova_compute[243452]: 2026-02-28 10:37:40.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:40 compute-0 nova_compute[243452]: 2026-02-28 10:37:40.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:37:40 compute-0 ceph-mon[76304]: pgmap v2199: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 156 op/s
Feb 28 10:37:40 compute-0 nova_compute[243452]: 2026-02-28 10:37:40.425 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 784aca10-13b2-42d5-9828-68914533af46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:40 compute-0 nova_compute[243452]: 2026-02-28 10:37:40.426 243456 DEBUG nova.network.neutron [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:40 compute-0 nova_compute[243452]: 2026-02-28 10:37:40.446 243456 DEBUG oslo_concurrency.lockutils [req-86972e59-dfcb-44a8-8774-54408e2a14a0 req-8473bc0e-7a22-4e86-964c-b2252602b195 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Feb 28 10:37:41 compute-0 podman[363649]: 2026-02-28 10:37:41.170301316 +0000 UTC m=+0.093195731 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:37:41 compute-0 podman[363648]: 2026-02-28 10:37:41.197833176 +0000 UTC m=+0.119430944 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030503430589766695 of space, bias 1.0, pg target 0.9151029176930009 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493843790406927 of space, bias 1.0, pg target 0.7481531371220781 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.326593465495173e-07 of space, bias 4.0, pg target 0.0008791912158594207 quantized to 16 (current 16)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:37:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:37:42 compute-0 nova_compute[243452]: 2026-02-28 10:37:42.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:42 compute-0 nova_compute[243452]: 2026-02-28 10:37:42.337 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:42 compute-0 nova_compute[243452]: 2026-02-28 10:37:42.338 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:42 compute-0 ceph-mon[76304]: pgmap v2200: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Feb 28 10:37:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.3 MiB/s wr, 102 op/s
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.750 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.751 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.752 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.752 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.753 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.756 243456 INFO nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Terminating instance
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.757 243456 DEBUG nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:37:43 compute-0 kernel: tap282fa143-11 (unregistering): left promiscuous mode
Feb 28 10:37:43 compute-0 NetworkManager[49805]: <info>  [1772275063.8014] device (tap282fa143-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:43 compute-0 ovn_controller[146846]: 2026-02-28T10:37:43Z|01384|binding|INFO|Releasing lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 from this chassis (sb_readonly=0)
Feb 28 10:37:43 compute-0 ovn_controller[146846]: 2026-02-28T10:37:43Z|01385|binding|INFO|Setting lport 282fa143-1175-40e2-9ab8-2d2b012d5b78 down in Southbound
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:43 compute-0 ovn_controller[146846]: 2026-02-28T10:37:43Z|01386|binding|INFO|Removing iface tap282fa143-11 ovn-installed in OVS
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.820 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:25:11 10.100.0.28'], port_security=['fa:16:3e:65:25:11 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'cbf94077-46c2-457d-8486-25f3dd0517b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47e8c685-91d8-4bae-bf96-b1284d3eef68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=282fa143-1175-40e2-9ab8-2d2b012d5b78) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.822 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 282fa143-1175-40e2-9ab8-2d2b012d5b78 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 unbound from our chassis
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.824 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.843 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7dc1c3-4ece-41db-b3e4-5cac5f3e8c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Feb 28 10:37:43 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 13.408s CPU time.
Feb 28 10:37:43 compute-0 systemd-machined[209480]: Machine qemu-167-instance-00000087 terminated.
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.874 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e1208c4d-d109-4b27-bf41-635388f00c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.878 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[81443d68-1050-4599-a6f1-271b25e8f284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.907 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2def45-d020-4d39-828d-786e4e641f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.923 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b0b294-fca3-46eb-89ba-bab724d67322]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9fbbf27e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:56:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652770, 'reachable_time': 24614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363704, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.943 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[feb20d39-d18f-4ec7-a3b6-93cb81453a71]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652782, 'tstamp': 652782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363705, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9fbbf27e-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652785, 'tstamp': 652785}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363705, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.945 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.948 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.952253) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063952294, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1927, "num_deletes": 256, "total_data_size": 3146175, "memory_usage": 3192368, "flush_reason": "Manual Compaction"}
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.954 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fbbf27e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.955 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9fbbf27e-00, col_values=(('external_ids', {'iface-id': '5b7b2456-d9c3-4443-8d4c-51b3db80db5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:43.956 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:43 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.955 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063974883, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3069104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45074, "largest_seqno": 47000, "table_properties": {"data_size": 3060395, "index_size": 5331, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17893, "raw_average_key_size": 19, "raw_value_size": 3042958, "raw_average_value_size": 3381, "num_data_blocks": 237, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772274866, "oldest_key_time": 1772274866, "file_creation_time": 1772275063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 22701 microseconds, and 5723 cpu microseconds.
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.974949) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3069104 bytes OK
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.974975) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977396) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977412) EVENT_LOG_v1 {"time_micros": 1772275063977407, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3137974, prev total WAL file size 3137974, number of live WAL files 2.
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373534' seq:72057594037927935, type:22 .. '6C6F676D0032303036' seq:0, type:0; will stop at (end)
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2997KB)], [104(8483KB)]
Feb 28 10:37:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275063978037, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11756156, "oldest_snapshot_seqno": -1}
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.999 243456 INFO nova.virt.libvirt.driver [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Instance destroyed successfully.
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:43.999 243456 DEBUG nova.objects.instance [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid cbf94077-46c2-457d-8486-25f3dd0517b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.019 243456 DEBUG nova.virt.libvirt.vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1336250417',display_name='tempest-TestNetworkBasicOps-server-1336250417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1336250417',id=135,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFdV8K7ZSzWmQkbjGONrQPTCovb6WrbWPkGawJk1yo00887S+Z8HGahuNdlpRV8Rys7acQ23bSRlDv0kC4ADkUOxH7s/2SDS0CUgX9WEqv+CcAPyMpFhaDSJb5+3QqGJmA==',key_name='tempest-TestNetworkBasicOps-99259032',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-6zhm2bv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:18Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=cbf94077-46c2-457d-8486-25f3dd0517b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.019 243456 DEBUG nova.network.os_vif_util [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "address": "fa:16:3e:65:25:11", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap282fa143-11", "ovs_interfaceid": "282fa143-1175-40e2-9ab8-2d2b012d5b78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.020 243456 DEBUG nova.network.os_vif_util [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.021 243456 DEBUG os_vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.023 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap282fa143-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.029 243456 INFO os_vif [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:25:11,bridge_name='br-int',has_traffic_filtering=True,id=282fa143-1175-40e2-9ab8-2d2b012d5b78,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap282fa143-11')
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7016 keys, 11630128 bytes, temperature: kUnknown
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064058326, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11630128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11580644, "index_size": 30812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 181198, "raw_average_key_size": 25, "raw_value_size": 11452743, "raw_average_value_size": 1632, "num_data_blocks": 1221, "num_entries": 7016, "num_filter_entries": 7016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275063, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.058593) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11630128 bytes
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.060455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.3 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7540, records dropped: 524 output_compression: NoCompression
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.060475) EVENT_LOG_v1 {"time_micros": 1772275064060465, "job": 62, "event": "compaction_finished", "compaction_time_micros": 80370, "compaction_time_cpu_micros": 21762, "output_level": 6, "num_output_files": 1, "total_output_size": 11630128, "num_input_records": 7540, "num_output_records": 7016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064060928, "job": 62, "event": "table_file_deletion", "file_number": 106}
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275064061871, "job": 62, "event": "table_file_deletion", "file_number": 104}
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:43.977910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:44.062025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.103 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.103 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG oslo_concurrency.lockutils [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.104 243456 DEBUG nova.compute.manager [req-98b0488f-9cab-4d2c-b96f-54047f38de2b req-7426cabe-f961-4952-9ba5-d4fe2c9b629d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-unplugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.320 243456 INFO nova.virt.libvirt.driver [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deleting instance files /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9_del
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.321 243456 INFO nova.virt.libvirt.driver [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deletion of /var/lib/nova/instances/cbf94077-46c2-457d-8486-25f3dd0517b9_del complete
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.404 243456 INFO nova.compute.manager [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG oslo.service.loopingcall [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.405 243456 DEBUG nova.network.neutron [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:37:44 compute-0 ceph-mon[76304]: pgmap v2201: 305 pgs: 305 active+clean; 471 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 3.3 MiB/s wr, 102 op/s
Feb 28 10:37:44 compute-0 nova_compute[243452]: 2026-02-28 10:37:44.512 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Feb 28 10:37:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:37:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:37:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:37:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.193 243456 DEBUG nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.194 243456 DEBUG oslo_concurrency.lockutils [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.195 243456 DEBUG nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] No waiting events found dispatching network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.195 243456 WARNING nova.compute.manager [req-4f3306a0-f076-4a36-9bb9-6f3df45e89ad req-83ca6106-faae-405f-a742-55540db2b453 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received unexpected event network-vif-plugged-282fa143-1175-40e2-9ab8-2d2b012d5b78 for instance with vm_state active and task_state deleting.
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.376 243456 DEBUG nova.network.neutron [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.404 243456 INFO nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Took 2.00 seconds to deallocate network for instance.
Feb 28 10:37:46 compute-0 ceph-mon[76304]: pgmap v2202: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Feb 28 10:37:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:37:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3837739649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.481 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.482 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.586 243456 DEBUG oslo_concurrency.processutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.824 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.825 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.825 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.826 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.826 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.828 243456 INFO nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Terminating instance
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.829 243456 DEBUG nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:37:46 compute-0 kernel: tap0739db55-7f (unregistering): left promiscuous mode
Feb 28 10:37:46 compute-0 NetworkManager[49805]: <info>  [1772275066.8797] device (tap0739db55-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01387|binding|INFO|Releasing lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 from this chassis (sb_readonly=0)
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01388|binding|INFO|Setting lport 0739db55-7f81-4ed8-a2c2-73ad1bf09084 down in Southbound
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01389|binding|INFO|Removing iface tap0739db55-7f ovn-installed in OVS
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.901 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:ed:5e 10.100.0.9'], port_security=['fa:16:3e:b8:ed:5e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=0739db55-7f81-4ed8-a2c2-73ad1bf09084) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.902 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d unbound from our chassis
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.903 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d
Feb 28 10:37:46 compute-0 kernel: tapfbc01aec-00 (unregistering): left promiscuous mode
Feb 28 10:37:46 compute-0 NetworkManager[49805]: <info>  [1772275066.9128] device (tapfbc01aec-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 423 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.9 MiB/s wr, 92 op/s
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e71ff592-e586-4793-8661-f378ca4c4296]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01390|binding|INFO|Releasing lport fbc01aec-00f6-4ce9-960c-352295a6c18e from this chassis (sb_readonly=0)
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01391|binding|INFO|Setting lport fbc01aec-00f6-4ce9-960c-352295a6c18e down in Southbound
Feb 28 10:37:46 compute-0 ovn_controller[146846]: 2026-02-28T10:37:46Z|01392|binding|INFO|Removing iface tapfbc01aec-00 ovn-installed in OVS
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.937 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.950 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], port_security=['fa:16:3e:bb:cb:24 2001:db8:0:1:f816:3eff:febb:cb24 2001:db8::f816:3eff:febb:cb24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:febb:cb24/64 2001:db8::f816:3eff:febb:cb24/64', 'neutron:device_id': 'ae0acd9c-7c2d-4e8b-84a4-d577eff31d02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fbc01aec-00f6-4ce9-960c-352295a6c18e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:46 compute-0 nova_compute[243452]: 2026-02-28 10:37:46.953 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.970 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[37ea6e91-5d07-46cf-9779-1e321e70feb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:46 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Feb 28 10:37:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:46.974 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb34efe-8bc3-4724-9279-f04b813eb37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:46 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 14.563s CPU time.
Feb 28 10:37:46 compute-0 systemd-machined[209480]: Machine qemu-168-instance-00000086 terminated.
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.006 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86b81f85-791f-4b59-a47c-2966df6efcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.025 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55ac2cc5-a374-41ef-88e7-21dc96525687]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68f9d98-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:f2:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650649, 'reachable_time': 31213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363774, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6b006e81-cf65-46b9-b0c6-beed5c64171e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650664, 'tstamp': 650664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363775, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68f9d98-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650667, 'tstamp': 650667}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363775, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.055 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f9d98-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68f9d98-c0, col_values=(('external_ids', {'iface-id': 'b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.058 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fbc01aec-00f6-4ce9-960c-352295a6c18e in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.063 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c
Feb 28 10:37:47 compute-0 NetworkManager[49805]: <info>  [1772275067.0652] manager: (tapfbc01aec-00): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.091 243456 INFO nova.virt.libvirt.driver [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Instance destroyed successfully.
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.092 243456 DEBUG nova.objects.instance [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.100 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[371c5e8a-69cd-465f-bf50-25fb16997a8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.108 243456 DEBUG nova.virt.libvirt.vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:22Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.108 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.109 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.110 243456 DEBUG os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.111 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.112 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0739db55-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.120 243456 INFO os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:ed:5e,bridge_name='br-int',has_traffic_filtering=True,id=0739db55-7f81-4ed8-a2c2-73ad1bf09084,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0739db55-7f')
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.121 243456 DEBUG nova.virt.libvirt.vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:37:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-25721937',display_name='tempest-TestGettingAddress-server-25721937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-25721937',id=134,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:37:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-v0acm2xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:22Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=ae0acd9c-7c2d-4e8b-84a4-d577eff31d02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.121 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.122 243456 DEBUG nova.network.os_vif_util [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.123 243456 DEBUG os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.124 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbc01aec-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.127 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.129 243456 INFO os_vif [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:cb:24,bridge_name='br-int',has_traffic_filtering=True,id=fbc01aec-00f6-4ce9-960c-352295a6c18e,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbc01aec-00')
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.137 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a19262fb-24c7-41cb-878e-2b0a7260b2c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.140 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e72b31ce-b9d5-43f3-b2ee-64fa60685660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433001994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.181 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[472eaea3-956a-4a35-a23a-1799ae1653ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.194 243456 DEBUG oslo_concurrency.processutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.203 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad48e882-44f5-45ef-820f-f09866dba418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf5aa4f8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:9f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650747, 'reachable_time': 20076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363824, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.204 243456 DEBUG nova.compute.provider_tree [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.225 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbee128-8941-466d-9254-749ec1ddb840]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf5aa4f8-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650761, 'tstamp': 650761}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363825, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.227 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.232 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5aa4f8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.233 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.234 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf5aa4f8-b0, col_values=(('external_ids', {'iface-id': 'a03e0d2d-7933-48b4-9d0a-61369ba848c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:47.235 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.263 243456 DEBUG nova.scheduler.client.report [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.296 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.329 243456 INFO nova.scheduler.client.report [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance cbf94077-46c2-457d-8486-25f3dd0517b9
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.415 243456 INFO nova.virt.libvirt.driver [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deleting instance files /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_del
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.416 243456 INFO nova.virt.libvirt.driver [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deletion of /var/lib/nova/instances/ae0acd9c-7c2d-4e8b-84a4-d577eff31d02_del complete
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.422 243456 DEBUG oslo_concurrency.lockutils [None req-b79e7f95-9899-45e9-88b6-3db747f8e27b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "cbf94077-46c2-457d-8486-25f3dd0517b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2433001994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.481 243456 INFO nova.compute.manager [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.482 243456 DEBUG oslo.service.loopingcall [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.482 243456 DEBUG nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:37:47 compute-0 nova_compute[243452]: 2026-02-28 10:37:47.483 243456 DEBUG nova.network.neutron [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.276 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Received event network-vif-deleted-282fa143-1175-40e2-9ab8-2d2b012d5b78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.276 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.277 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing instance network info cache due to event network-changed-0739db55-7f81-4ed8-a2c2-73ad1bf09084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.277 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.278 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.278 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Refreshing network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Feb 28 10:37:48 compute-0 ceph-mon[76304]: pgmap v2203: 305 pgs: 305 active+clean; 423 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 1.9 MiB/s wr, 92 op/s
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.579 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.580 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.581 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:37:48 compute-0 nova_compute[243452]: 2026-02-28 10:37:48.581 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Feb 28 10:37:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:49 compute-0 sudo[363827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:37:49 compute-0 sudo[363827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:49 compute-0 sudo[363827]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.197 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.199 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.217 243456 DEBUG nova.objects.instance [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.238 243456 DEBUG nova.virt.libvirt.vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.239 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.239 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.244 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.248 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.252 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Attempting to detach device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.253 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6a:24:67"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <target dev="tap784aca10-13"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </interface>
Feb 28 10:37:49 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.261 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:37:49 compute-0 sudo[363852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.267 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm' id='165'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <name>instance-00000084</name>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk' index='2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:49 compute-0 sudo[363852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config' index='1'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e3:60:12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='tap4b48043a-81'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:6a:24:67'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='tap784aca10-13'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='net1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </target>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </console>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c786,c875</label>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c786,c875</imagelabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:37:49 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.267 243456 INFO nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the persistent domain config.
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.268 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] (1/8): Attempting to detach device tap784aca10-13 with device alias net1 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.268 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] detach device xml: <interface type="ethernet">
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <mac address="fa:16:3e:6a:24:67"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <model type="virtio"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <mtu size="1442"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <target dev="tap784aca10-13"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </interface>
Feb 28 10:37:49 compute-0 nova_compute[243452]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Feb 28 10:37:49 compute-0 kernel: tap784aca10-13 (unregistering): left promiscuous mode
Feb 28 10:37:49 compute-0 NetworkManager[49805]: <info>  [1772275069.3663] device (tap784aca10-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:49 compute-0 ovn_controller[146846]: 2026-02-28T10:37:49Z|01393|binding|INFO|Releasing lport 784aca10-13b2-42d5-9828-68914533af46 from this chassis (sb_readonly=0)
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 ovn_controller[146846]: 2026-02-28T10:37:49Z|01394|binding|INFO|Setting lport 784aca10-13b2-42d5-9828-68914533af46 down in Southbound
Feb 28 10:37:49 compute-0 ovn_controller[146846]: 2026-02-28T10:37:49Z|01395|binding|INFO|Removing iface tap784aca10-13 ovn-installed in OVS
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.377 243456 DEBUG nova.virt.libvirt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Received event <DeviceRemovedEvent: 1772275069.377185, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.379 243456 DEBUG nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Start waiting for the detach event from libvirt for device tap784aca10-13 with device alias net1 for instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.380 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.384 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.385 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm' id='165'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <name>instance-00000084</name>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:37:00</nova:creationTime>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:port uuid="784aca10-13b2-42d5-9828-68914533af46">
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <resource>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <partition>/machine</partition>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </resource>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <system>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </system>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <os>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </os>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <features>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </features>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <cpu mode='custom' match='exact' check='full'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <model fallback='forbid'>EPYC-Rome</model>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <vendor>AMD</vendor>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='x2apic'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc-deadline'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='hypervisor'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='tsc_adjust'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='spec-ctrl'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='stibp'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='cmp_legacy'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='overflow-recov'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='succor'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='ibrs'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='amd-ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='virt-ssbd'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='lbrv'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='tsc-scale'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='vmcb-clean'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='flushbyasid'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pause-filter'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='pfthreshold'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svme-addr-chk'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='lfence-always-serializing'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='xsaves'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='svm'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='require' name='topoext'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='npt'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <feature policy='disable' name='nrip-save'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk' index='2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='virtio-disk0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config' index='1'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='sata0-0-0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pcie.0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.8'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.9'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.11'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.13'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.14'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.15'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.16'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.17'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.18'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.19'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.20'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.21'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.22'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.23'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.24'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.25'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='pci.26'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='usb'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='ide'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e3:60:12'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target dev='tap4b48043a-81'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='net0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       </target>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <console type='pty' tty='/dev/pts/0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <source path='/dev/pts/0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='serial0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </console>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input1'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='input2'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <video>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='video0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </video>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='watchdog0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </watchdog>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='balloon0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <alias name='rng0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <label>system_u:system_r:svirt_t:s0:c786,c875</label>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c786,c875</imagelabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <label>+107:+107</label>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <imagelabel>+107:+107</imagelabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </seclabel>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </domain>
Feb 28 10:37:49 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.386 243456 INFO nova.virt.libvirt.driver [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully detached device tap784aca10-13 from instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c from the live domain config.
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.386 243456 DEBUG nova.virt.libvirt.vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.387 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.388 243456 DEBUG nova.network.os_vif_util [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.388 243456 DEBUG os_vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.390 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.393 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.396 243456 INFO os_vif [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.397 243456 DEBUG nova.virt.libvirt.guest [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:37:49</nova:creationTime>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:49 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:49 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:49 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:37:49 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:37:49 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.423 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:24:67 10.100.0.20', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe4444ac-cf87-44d4-a522-71da130237e6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=784aca10-13b2-42d5-9828-68914533af46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.424 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 784aca10-13b2-42d5-9828-68914533af46 in datapath 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 unbound from our chassis
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.425 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.426 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6976bd1c-7d9c-4de5-aff4-737e6dcb6ec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.427 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 namespace which is not needed anymore
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.516 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : haproxy version is 2.8.14-c23fe91
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [NOTICE]   (362814) : path to executable is /usr/sbin/haproxy
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : Exiting Master process...
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : Exiting Master process...
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [ALERT]    (362814) : Current worker (362816) exited with code 143 (Terminated)
Feb 28 10:37:49 compute-0 neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361[362810]: [WARNING]  (362814) : All workers exited. Exiting... (0)
Feb 28 10:37:49 compute-0 systemd[1]: libpod-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope: Deactivated successfully.
Feb 28 10:37:49 compute-0 podman[363909]: 2026-02-28 10:37:49.576077466 +0000 UTC m=+0.052475808 container died cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0-userdata-shm.mount: Deactivated successfully.
Feb 28 10:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6de54db0465fd2bcf02f33b89700b7eb0c2eaa14d4e03df2211f39fd0f3421-merged.mount: Deactivated successfully.
Feb 28 10:37:49 compute-0 podman[363909]: 2026-02-28 10:37:49.632132674 +0000 UTC m=+0.108531026 container cleanup cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:37:49 compute-0 systemd[1]: libpod-conmon-cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0.scope: Deactivated successfully.
Feb 28 10:37:49 compute-0 podman[363942]: 2026-02-28 10:37:49.707770056 +0000 UTC m=+0.053571188 container remove cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.713 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b739e22d-c45b-47d8-9839-d122ec682dd0]: (4, ('Sat Feb 28 10:37:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 (cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0)\ncf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0\nSat Feb 28 10:37:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 (cf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0)\ncf4e80c1cea4bcf3fd79a9de0f043e44d8a3e395931c872ff5179437dd4e8cb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7dfb7c-a773-4cc4-a81c-be402cf49bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.717 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fbbf27e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:49 compute-0 kernel: tap9fbbf27e-00: left promiscuous mode
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.727 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.731 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cfc522-1654-426a-8ade-3ec7b759d790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b601236f-b080-4e64-934e-ba4e8a7f84ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[51bda20e-c63c-4a91-bafd-aada1718e874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e5878ec5-66ea-45c0-9682-56a31b237703]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652763, 'reachable_time': 41643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363964, 'error': None, 'target': 'ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d9fbbf27e\x2d04ea\x2d4f06\x2db1ef\x2d3ac4b98db361.mount: Deactivated successfully.
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.778 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9fbbf27e-04ea-4f06-b1ef-3ac4b98db361 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:37:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:49.779 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d22dc1-830a-45fb-8e5f-daf550d77145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:49 compute-0 sudo[363852]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.884 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updated VIF entry in instance network info cache for port 0739db55-7f81-4ed8-a2c2-73ad1bf09084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.885 243456 DEBUG nova.network.neutron [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [{"id": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "address": "fa:16:3e:b8:ed:5e", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0739db55-7f", "ovs_interfaceid": "0739db55-7f81-4ed8-a2c2-73ad1bf09084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "address": "fa:16:3e:bb:cb:24", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febb:cb24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbc01aec-00", "ovs_interfaceid": "fbc01aec-00f6-4ce9-960c-352295a6c18e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:37:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:37:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.953 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.954 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.955 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.956 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.957 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.957 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 DEBUG oslo_concurrency.lockutils [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 DEBUG nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:49 compute-0 nova_compute[243452]: 2026-02-28 10:37:49.958 243456 WARNING nova.compute.manager [req-9acd9a50-a549-4bdc-ad0b-5ce0328e28d7 req-b4c82107-0b7c-4891-9aaa-9e579f4fe9a3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-0739db55-7f81-4ed8-a2c2-73ad1bf09084 for instance with vm_state active and task_state deleting.
Feb 28 10:37:49 compute-0 sudo[363974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:37:49 compute-0 sudo[363974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:49 compute-0 sudo[363974]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:50 compute-0 sudo[363999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:37:50 compute-0 sudo[363999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.208 243456 DEBUG nova.network.neutron [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.229 243456 INFO nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Took 2.75 seconds to deallocate network for instance.
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.278 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.279 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.353 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.379368761 +0000 UTC m=+0.053230819 container create 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.385 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.386 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-unplugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state deleted and task_state None.
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.387 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] No waiting events found dispatching network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received unexpected event network-vif-plugged-fbc01aec-00f6-4ce9-960c-352295a6c18e for instance with vm_state deleted and task_state None.
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-deleted-fbc01aec-00f6-4ce9-960c-352295a6c18e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.388 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-unplugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Received event network-vif-deleted-0739db55-7f81-4ed8-a2c2-73ad1bf09084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.389 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG oslo_concurrency.lockutils [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 DEBUG nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.390 243456 WARNING nova.compute.manager [req-e1421970-909a-44af-9fa9-eb7c021a4fb1 req-7d6deaaf-e381-4855-9ec7-0cd09c94b20f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-784aca10-13b2-42d5-9828-68914533af46 for instance with vm_state active and task_state None.
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.396 243456 DEBUG oslo_concurrency.processutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:50 compute-0 systemd[1]: Started libpod-conmon-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope.
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.358919942 +0000 UTC m=+0.032782020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:50 compute-0 ceph-mon[76304]: pgmap v2204: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:37:50 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:37:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.497868738 +0000 UTC m=+0.171730886 container init 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.508009425 +0000 UTC m=+0.181871493 container start 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.511485193 +0000 UTC m=+0.185347271 container attach 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:37:50 compute-0 systemd[1]: libpod-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope: Deactivated successfully.
Feb 28 10:37:50 compute-0 elated_roentgen[364055]: 167 167
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.518527763 +0000 UTC m=+0.192389821 container died 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:37:50 compute-0 conmon[364055]: conmon 58600fd143cb0e71b909 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope/container/memory.events
Feb 28 10:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc9d296b531044f456d24b2daed4db1258fc7f61a8073aa82b6690a7df849cd3-merged.mount: Deactivated successfully.
Feb 28 10:37:50 compute-0 podman[364038]: 2026-02-28 10:37:50.559524584 +0000 UTC m=+0.233386642 container remove 58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:37:50 compute-0 systemd[1]: libpod-conmon-58600fd143cb0e71b9099bd72fb8aed035e49905621e720c0ef11c5303f0a9e6.scope: Deactivated successfully.
Feb 28 10:37:50 compute-0 podman[364097]: 2026-02-28 10:37:50.742804306 +0000 UTC m=+0.049826622 container create a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:37:50 compute-0 systemd[1]: Started libpod-conmon-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope.
Feb 28 10:37:50 compute-0 podman[364097]: 2026-02-28 10:37:50.724658172 +0000 UTC m=+0.031680518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:50 compute-0 podman[364097]: 2026-02-28 10:37:50.863267038 +0000 UTC m=+0.170289384 container init a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:37:50 compute-0 podman[364097]: 2026-02-28 10:37:50.875023212 +0000 UTC m=+0.182045528 container start a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:37:50 compute-0 podman[364097]: 2026-02-28 10:37:50.878569162 +0000 UTC m=+0.185591498 container attach a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:37:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 99 KiB/s wr, 72 op/s
Feb 28 10:37:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770482847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.980 243456 DEBUG oslo_concurrency.processutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:50 compute-0 nova_compute[243452]: 2026-02-28 10:37:50.990 243456 DEBUG nova.compute.provider_tree [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.007 243456 DEBUG nova.scheduler.client.report [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.027 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.051 243456 INFO nova.scheduler.client.report [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance ae0acd9c-7c2d-4e8b-84a4-d577eff31d02
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.108 243456 DEBUG oslo_concurrency.lockutils [None req-c2ab6066-71d5-49fa-a26f-9323071067f9 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "ae0acd9c-7c2d-4e8b-84a4-d577eff31d02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.239 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.256 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.257 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.257 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.258 243456 DEBUG nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.261 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.286 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.287 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.288 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:51 compute-0 jovial_meninsky[364113]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:37:51 compute-0 jovial_meninsky[364113]: --> All data devices are unavailable
Feb 28 10:37:51 compute-0 ovn_controller[146846]: 2026-02-28T10:37:51Z|01396|binding|INFO|Releasing lport 806ea448-5fbd-4b2a-a972-1602ffe39b97 from this chassis (sb_readonly=0)
Feb 28 10:37:51 compute-0 ovn_controller[146846]: 2026-02-28T10:37:51Z|01397|binding|INFO|Releasing lport a03e0d2d-7933-48b4-9d0a-61369ba848c9 from this chassis (sb_readonly=0)
Feb 28 10:37:51 compute-0 ovn_controller[146846]: 2026-02-28T10:37:51Z|01398|binding|INFO|Releasing lport b9d9d72f-0ab4-40d7-98e4-dcafab8ff1a1 from this chassis (sb_readonly=0)
Feb 28 10:37:51 compute-0 systemd[1]: libpod-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope: Deactivated successfully.
Feb 28 10:37:51 compute-0 podman[364097]: 2026-02-28 10:37:51.407825134 +0000 UTC m=+0.714847460 container died a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc90bc29140d33a7d3f900c3d8aa002e751210d1bad039b2eb638e09b340ae1d-merged.mount: Deactivated successfully.
Feb 28 10:37:51 compute-0 podman[364097]: 2026-02-28 10:37:51.449783763 +0000 UTC m=+0.756806089 container remove a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_meninsky, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:37:51 compute-0 systemd[1]: libpod-conmon-a8f6c0e03adfbfe153dccc6c801d75d3cc12fae2f8261662b336f4211e6726fb.scope: Deactivated successfully.
Feb 28 10:37:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1770482847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:51 compute-0 sudo[363999]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:51 compute-0 sudo[364169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:37:51 compute-0 sudo[364169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:51 compute-0 sudo[364169]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:51 compute-0 sudo[364194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:37:51 compute-0 sudo[364194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605093675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.901 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:51 compute-0 podman[364233]: 2026-02-28 10:37:51.951540396 +0000 UTC m=+0.052746455 container create 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.980 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.981 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.986 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:37:51 compute-0 nova_compute[243452]: 2026-02-28 10:37:51.986 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:37:51 compute-0 systemd[1]: Started libpod-conmon-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope.
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:51.928580296 +0000 UTC m=+0.029786375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:52.048203965 +0000 UTC m=+0.149410004 container init 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:52.055849881 +0000 UTC m=+0.157055910 container start 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:52.060604406 +0000 UTC m=+0.161810455 container attach 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:37:52 compute-0 funny_goldberg[364250]: 167 167
Feb 28 10:37:52 compute-0 systemd[1]: libpod-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope: Deactivated successfully.
Feb 28 10:37:52 compute-0 conmon[364250]: conmon 2729a835efb67e968792 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope/container/memory.events
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:52.064416364 +0000 UTC m=+0.165622403 container died 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5d8aca32a9abedd0397a63da3ab918ec4ffe9f1ab242db9355d2f7f7dfcbf3e-merged.mount: Deactivated successfully.
Feb 28 10:37:52 compute-0 podman[364233]: 2026-02-28 10:37:52.100482896 +0000 UTC m=+0.201688915 container remove 2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:37:52 compute-0 systemd[1]: libpod-conmon-2729a835efb67e968792ecdcc2ef2bbd917e34834d50c2b6ae40a2330b18e351.scope: Deactivated successfully.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.189 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.190 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.192 243456 INFO nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Terminating instance
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.193 243456 DEBUG nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:37:52 compute-0 kernel: tap4b48043a-81 (unregistering): left promiscuous mode
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.232 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.89615597296506GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.234 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:52 compute-0 NetworkManager[49805]: <info>  [1772275072.2377] device (tap4b48043a-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.245 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 ovn_controller[146846]: 2026-02-28T10:37:52Z|01399|binding|INFO|Releasing lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac from this chassis (sb_readonly=0)
Feb 28 10:37:52 compute-0 ovn_controller[146846]: 2026-02-28T10:37:52Z|01400|binding|INFO|Setting lport 4b48043a-8194-4cf4-bd7f-1c138d7960ac down in Southbound
Feb 28 10:37:52 compute-0 ovn_controller[146846]: 2026-02-28T10:37:52Z|01401|binding|INFO|Removing iface tap4b48043a-81 ovn-installed in OVS
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.252 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:60:12 10.100.0.5'], port_security=['fa:16:3e:e3:60:12 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b3a7b19c-ebb2-442d-bac0-66e1b9e2655c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76f6832b-9f40-4eef-bddf-580a90432b21', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f90d2e4f-4906-4cb9-bc1a-6b3a4bcb9d24, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=4b48043a-8194-4cf4-bd7f-1c138d7960ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.254 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 4b48043a-8194-4cf4-bd7f-1c138d7960ac in datapath f91ad996-44c8-45ac-a5d6-208982ca2ce1 unbound from our chassis
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.257 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f91ad996-44c8-45ac-a5d6-208982ca2ce1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d576a0b5-a0c0-4f09-9abc-242547852e46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.258 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 namespace which is not needed anymore
Feb 28 10:37:52 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Feb 28 10:37:52 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 16.926s CPU time.
Feb 28 10:37:52 compute-0 systemd-machined[209480]: Machine qemu-165-instance-00000084 terminated.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.311 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.312 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.312 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.313 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:37:52 compute-0 podman[364277]: 2026-02-28 10:37:52.320096197 +0000 UTC m=+0.053363993 container create 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:37:52 compute-0 systemd[1]: Started libpod-conmon-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.370 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:52 compute-0 podman[364277]: 2026-02-28 10:37:52.293525024 +0000 UTC m=+0.026792870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : haproxy version is 2.8.14-c23fe91
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [NOTICE]   (361609) : path to executable is /usr/sbin/haproxy
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : Exiting Master process...
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : Exiting Master process...
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [ALERT]    (361609) : Current worker (361611) exited with code 143 (Terminated)
Feb 28 10:37:52 compute-0 neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1[361605]: [WARNING]  (361609) : All workers exited. Exiting... (0)
Feb 28 10:37:52 compute-0 systemd[1]: libpod-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope: Deactivated successfully.
Feb 28 10:37:52 compute-0 podman[364314]: 2026-02-28 10:37:52.411603399 +0000 UTC m=+0.048638289 container died 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.428 243456 INFO nova.virt.libvirt.driver [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Instance destroyed successfully.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.429 243456 DEBUG nova.objects.instance [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:52 compute-0 podman[364277]: 2026-02-28 10:37:52.438974464 +0000 UTC m=+0.172242290 container init 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.442 243456 DEBUG nova.virt.libvirt.vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.443 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.444 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.445 243456 DEBUG os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f-userdata-shm.mount: Deactivated successfully.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.448 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b48043a-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-16788202b082e39979c66011ee31c3dde40edd942458ccb06127574de32bd96f-merged.mount: Deactivated successfully.
Feb 28 10:37:52 compute-0 podman[364277]: 2026-02-28 10:37:52.451351105 +0000 UTC m=+0.184618921 container start 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 podman[364277]: 2026-02-28 10:37:52.456310725 +0000 UTC m=+0.189578531 container attach 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.457 243456 INFO os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:60:12,bridge_name='br-int',has_traffic_filtering=True,id=4b48043a-8194-4cf4-bd7f-1c138d7960ac,network=Network(f91ad996-44c8-45ac-a5d6-208982ca2ce1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b48043a-81')
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.459 243456 DEBUG nova.virt.libvirt.vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:33Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.459 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:52 compute-0 podman[364314]: 2026-02-28 10:37:52.460049951 +0000 UTC m=+0.097084841 container cleanup 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.461 243456 DEBUG nova.network.os_vif_util [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.462 243456 DEBUG os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-deleted-784aca10-13b2-42d5-9828-68914533af46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 INFO nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Neutron deleted interface 784aca10-13b2-42d5-9828-68914533af46; detaching it from the instance and deleting it from the info cache
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.472 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:52 compute-0 systemd[1]: libpod-conmon-4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f.scope: Deactivated successfully.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.475 243456 INFO os_vif [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')
Feb 28 10:37:52 compute-0 ceph-mon[76304]: pgmap v2205: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 99 KiB/s wr, 72 op/s
Feb 28 10:37:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1605093675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.509 243456 DEBUG nova.objects.instance [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'system_metadata' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.532 243456 DEBUG nova.objects.instance [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lazy-loading 'flavor' on Instance uuid b3a7b19c-ebb2-442d-bac0-66e1b9e2655c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168583835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 33 KiB/s wr, 61 op/s
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.954 243456 DEBUG nova.virt.libvirt.vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.955 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.956 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.966 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 28 10:37:52 compute-0 podman[364358]: 2026-02-28 10:37:52.971317973 +0000 UTC m=+0.485755660 container remove 4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.972 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:6a:24:67"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap784aca10-13"/></interface>not found in domain: <domain type='kvm'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <name>instance-00000084</name>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <uuid>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</uuid>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:36:29</nova:creationTime>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <memory unit='KiB'>131072</memory>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <currentMemory unit='KiB'>131072</currentMemory>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <vcpu placement='static'>1</vcpu>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <sysinfo type='smbios'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <system>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='manufacturer'>RDO</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='product'>OpenStack Compute</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='serial'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='uuid'>b3a7b19c-ebb2-442d-bac0-66e1b9e2655c</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <entry name='family'>Virtual Machine</entry>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </system>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <os>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <boot dev='hd'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <smbios mode='sysinfo'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </os>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <features>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <vmcoreinfo state='on'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </features>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <cpu mode='host-model' check='partial'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <clock offset='utc'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <timer name='pit' tickpolicy='delay'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <timer name='rtc' tickpolicy='catchup'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <timer name='hpet' present='no'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <on_poweroff>destroy</on_poweroff>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <on_reboot>restart</on_reboot>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <on_crash>destroy</on_crash>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <disk type='network' device='disk'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target dev='vda' bus='virtio'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <disk type='network' device='cdrom'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <driver name='qemu' type='raw' cache='none'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <auth username='openstack'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <secret type='ceph' uuid='8f528268-ea2d-5d7b-af45-49b405fed6de'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <source protocol='rbd' name='vms/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_disk.config'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <host name='192.168.122.100' port='6789'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target dev='sda' bus='sata'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <readonly/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='0' model='pcie-root'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='1' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='1' port='0x10'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='2' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='2' port='0x11'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='3' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='3' port='0x12'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='4' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='4' port='0x13'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='5' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='5' port='0x14'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='6' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='6' port='0x15'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='7' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='7' port='0x16'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='8' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='8' port='0x17'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='9' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='9' port='0x18'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='10' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='10' port='0x19'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='11' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='11' port='0x1a'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='12' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='12' port='0x1b'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='13' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='13' port='0x1c'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='14' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='14' port='0x1d'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='15' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='15' port='0x1e'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='16' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='16' port='0x1f'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='17' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='17' port='0x20'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='18' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='18' port='0x21'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='19' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='19' port='0x22'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='20' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='20' port='0x23'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='21' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='21' port='0x24'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='22' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='22' port='0x25'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='23' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='23' port='0x26'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='24' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='24' port='0x27'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='25' model='pcie-root-port'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-root-port'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target chassis='25' port='0x28'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model name='pcie-pci-bridge'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='usb' index='0' model='piix3-uhci'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <controller type='sata' index='0'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </controller>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <interface type='ethernet'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <mac address='fa:16:3e:e3:60:12'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target dev='tap4b48043a-81'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model type='virtio'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <driver name='vhost' rx_queue_size='512'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <mtu size='1442'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <serial type='pty'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target type='isa-serial' port='0'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:         <model name='isa-serial'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       </target>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <console type='pty'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <log file='/var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c/console.log' append='off'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <target type='serial' port='0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </console>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <input type='tablet' bus='usb'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='usb' bus='0' port='1'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </input>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <input type='mouse' bus='ps2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <input type='keyboard' bus='ps2'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <listen type='address' address='::0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </graphics>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <audio id='1' type='none'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <video>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <model type='virtio' heads='1' primary='yes'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </video>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <watchdog model='itco' action='reset'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <memballoon model='virtio'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <stats period='10'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <rng model='virtio'>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <backend model='random'>/dev/urandom</backend>
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:37:52 compute-0 nova_compute[243452]: </domain>
Feb 28 10:37:52 compute-0 nova_compute[243452]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.972 243456 WARNING nova.virt.libvirt.driver [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Detaching interface fa:16:3e:6a:24:67 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap784aca10-13' not found.
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.973 243456 DEBUG nova.virt.libvirt.vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1855517723',display_name='tempest-TestNetworkBasicOps-server-1855517723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1855517723',id=132,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJL4zKlYo/AdgMNnuaaJtL1hh1goiDwZemHJ69UN8pXos044opX7z+RfBxL3O8z0M85ghtATU1A3xGoZN5EAhEOadn0D2QfxzAhrWZrpHsFCg070YOwKNthfwFLhWWt9Cw==',key_name='tempest-TestNetworkBasicOps-2114804889',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-fsl96lrj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:37:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=b3a7b19c-ebb2-442d-bac0-66e1b9e2655c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.974 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converting VIF {"id": "784aca10-13b2-42d5-9828-68914533af46", "address": "fa:16:3e:6a:24:67", "network": {"id": "9fbbf27e-04ea-4f06-b1ef-3ac4b98db361", "bridge": "br-int", "label": "tempest-network-smoke--665811831", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap784aca10-13", "ovs_interfaceid": "784aca10-13b2-42d5-9828-68914533af46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.974 243456 DEBUG nova.network.os_vif_util [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.975 243456 DEBUG os_vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4f9c0e-7dfd-4b82-be62-a1f016ac9bc6]: (4, ('Sat Feb 28 10:37:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 (4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f)\n4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f\nSat Feb 28 10:37:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 (4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f)\n4f4d3002a8a6cb0dde66caba3119f0e6a550113d36d6fd63707ef980b6edda1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.978 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1642d9aa-ca59-4028-acb6-791549ec2569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.978 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap784aca10-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.979 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91ad996-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:37:52 compute-0 kernel: tapf91ad996-40: left promiscuous mode
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.985 243456 INFO os_vif [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:24:67,bridge_name='br-int',has_traffic_filtering=True,id=784aca10-13b2-42d5-9828-68914533af46,network=Network(9fbbf27e-04ea-4f06-b1ef-3ac4b98db361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap784aca10-13')
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.987 243456 DEBUG nova.virt.libvirt.guest [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:name>tempest-TestNetworkBasicOps-server-1855517723</nova:name>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:creationTime>2026-02-28 10:37:52</nova:creationTime>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:flavor name="m1.nano">
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:memory>128</nova:memory>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:disk>1</nova:disk>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:swap>0</nova:swap>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:vcpus>1</nova:vcpus>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </nova:flavor>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:owner>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </nova:owner>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   <nova:ports>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     <nova:port uuid="4b48043a-8194-4cf4-bd7f-1c138d7960ac">
Feb 28 10:37:52 compute-0 nova_compute[243452]:       <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:37:52 compute-0 nova_compute[243452]:     </nova:port>
Feb 28 10:37:52 compute-0 nova_compute[243452]:   </nova:ports>
Feb 28 10:37:52 compute-0 nova_compute[243452]: </nova:instance>
Feb 28 10:37:52 compute-0 nova_compute[243452]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.991 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:52.992 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff11da8-f540-4dca-a5b0-64015698b3a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.995 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.996 243456 DEBUG nova.compute.manager [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing instance network info cache due to event network-changed-4b48043a-8194-4cf4-bd7f-1c138d7960ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:52 compute-0 nova_compute[243452]: 2026-02-28 10:37:52.996 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.002 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[27c942a5-651c-4086-9006-3d510ddae43f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.006 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ac9759-e44d-407d-8ec1-ef76141678cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.021 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[98b7b718-3c0c-49a8-ad54-87a1ea7310fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649966, 'reachable_time': 18037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364414, 'error': None, 'target': 'ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.026 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f91ad996-44c8-45ac-a5d6-208982ca2ce1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.026 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1822b56e-4058-49b2-8dcf-4804a51c3d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 systemd[1]: run-netns-ovnmeta\x2df91ad996\x2d44c8\x2d45ac\x2da5d6\x2d208982ca2ce1.mount: Deactivated successfully.
Feb 28 10:37:53 compute-0 infallible_shamir[364320]: {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     "0": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "devices": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "/dev/loop3"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             ],
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_name": "ceph_lv0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_size": "21470642176",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "name": "ceph_lv0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "tags": {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_name": "ceph",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.crush_device_class": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.encrypted": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.objectstore": "bluestore",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_id": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.vdo": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.with_tpm": "0"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             },
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "vg_name": "ceph_vg0"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         }
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     ],
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     "1": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "devices": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "/dev/loop4"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             ],
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_name": "ceph_lv1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_size": "21470642176",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "name": "ceph_lv1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "tags": {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_name": "ceph",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.crush_device_class": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.encrypted": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.objectstore": "bluestore",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_id": "1",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.vdo": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.with_tpm": "0"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             },
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "vg_name": "ceph_vg1"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         }
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     ],
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     "2": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "devices": [
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "/dev/loop5"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             ],
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_name": "ceph_lv2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_size": "21470642176",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "name": "ceph_lv2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "tags": {
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.cluster_name": "ceph",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.crush_device_class": "",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.encrypted": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.objectstore": "bluestore",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osd_id": "2",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.vdo": "0",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:                 "ceph.with_tpm": "0"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             },
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "type": "block",
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:             "vg_name": "ceph_vg2"
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:         }
Feb 28 10:37:53 compute-0 infallible_shamir[364320]:     ]
Feb 28 10:37:53 compute-0 infallible_shamir[364320]: }
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.053 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.054 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:53 compute-0 systemd[1]: libpod-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 conmon[364320]: conmon 3755428e7550ea3b0bf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope/container/memory.events
Feb 28 10:37:53 compute-0 podman[364277]: 2026-02-28 10:37:53.087898795 +0000 UTC m=+0.821166641 container died 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11a7a86fa35507d8088a08f8efaa3a81c96ff4d58da2771d52b1f659a40d976-merged.mount: Deactivated successfully.
Feb 28 10:37:53 compute-0 podman[364277]: 2026-02-28 10:37:53.139209209 +0000 UTC m=+0.872477055 container remove 3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shamir, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:37:53 compute-0 systemd[1]: libpod-conmon-3755428e7550ea3b0bf9ecd34dd3528e78782a18cb8687c4f31fa8d382d6433b.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.162 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.164 243456 DEBUG oslo_concurrency.lockutils [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.165 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.165 243456 DEBUG nova.compute.manager [req-675304aa-ffd7-4705-aed5-c46d7b6619ca req-29039da8-6e9f-4e64-88a7-cbf12580f68e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-unplugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:37:53 compute-0 sudo[364194]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.257 243456 INFO nova.virt.libvirt.driver [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deleting instance files /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_del
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.258 243456 INFO nova.virt.libvirt.driver [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deletion of /var/lib/nova/instances/b3a7b19c-ebb2-442d-bac0-66e1b9e2655c_del complete
Feb 28 10:37:53 compute-0 sudo[364427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:37:53 compute-0 sudo[364427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:53 compute-0 sudo[364427]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.315 243456 INFO nova.compute.manager [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 1.12 seconds to destroy the instance on the hypervisor.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.315 243456 DEBUG oslo.service.loopingcall [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.316 243456 DEBUG nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.316 243456 DEBUG nova.network.neutron [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:37:53 compute-0 sudo[364452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:37:53 compute-0 sudo[364452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.501 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2168583835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.503 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.504 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.506 243456 INFO nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Terminating instance
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.507 243456 DEBUG nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.508 243456 INFO nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Port 784aca10-13b2-42d5-9828-68914533af46 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.508 243456 DEBUG nova.network.neutron [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.527 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.530 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.530 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Refreshing network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:53 compute-0 kernel: tapb8c427fe-78 (unregistering): left promiscuous mode
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.561 243456 DEBUG oslo_concurrency.lockutils [None req-2105c83f-e968-4abb-a948-15af8e5c710e ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "interface-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-784aca10-13b2-42d5-9828-68914533af46" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:53 compute-0 NetworkManager[49805]: <info>  [1772275073.5625] device (tapb8c427fe-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01402|binding|INFO|Releasing lport b8c427fe-78c5-4d60-9c44-68985f50b598 from this chassis (sb_readonly=0)
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01403|binding|INFO|Setting lport b8c427fe-78c5-4d60-9c44-68985f50b598 down in Southbound
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01404|binding|INFO|Removing iface tapb8c427fe-78 ovn-installed in OVS
Feb 28 10:37:53 compute-0 kernel: tapffaef000-52 (unregistering): left promiscuous mode
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.586 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 NetworkManager[49805]: <info>  [1772275073.5898] device (tapffaef000-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.590 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2a:81 10.100.0.13'], port_security=['fa:16:3e:d6:2a:81 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f53aa21-0a70-46b8-aad2-7db237a64c92, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b8c427fe-78c5-4d60-9c44-68985f50b598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.591 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b8c427fe-78c5-4d60-9c44-68985f50b598 in datapath e68f9d98-c075-4ed0-b1ee-ef05de1c055d unbound from our chassis
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.593 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e68f9d98-c075-4ed0-b1ee-ef05de1c055d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.595 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a51da2d2-f78c-496a-86ae-2e1833f9e1b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.598 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d namespace which is not needed anymore
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01405|binding|INFO|Releasing lport ffaef000-523c-4637-99e6-2cc96b907c15 from this chassis (sb_readonly=0)
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01406|binding|INFO|Setting lport ffaef000-523c-4637-99e6-2cc96b907c15 down in Southbound
Feb 28 10:37:53 compute-0 ovn_controller[146846]: 2026-02-28T10:37:53Z|01407|binding|INFO|Removing iface tapffaef000-52 ovn-installed in OVS
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.616 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], port_security=['fa:16:3e:28:b7:5a 2001:db8:0:1:f816:3eff:fe28:b75a 2001:db8::f816:3eff:fe28:b75a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe28:b75a/64 2001:db8::f816:3eff:fe28:b75a/64', 'neutron:device_id': '555d381e-ed8a-4a73-9f43-f79c0b0a0afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41e41550-f20b-4082-98cd-41d49e388f83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ac6d088-e3e0-46fc-b81f-2eb4f1fd8c9c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=ffaef000-523c-4637-99e6-2cc96b907c15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.634320974 +0000 UTC m=+0.046612811 container create 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:37:53 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000085.scope: Consumed 15.521s CPU time.
Feb 28 10:37:53 compute-0 systemd-machined[209480]: Machine qemu-166-instance-00000085 terminated.
Feb 28 10:37:53 compute-0 systemd[1]: Started libpod-conmon-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope.
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.612835515 +0000 UTC m=+0.025127393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.731888078 +0000 UTC m=+0.144179945 container init 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.738952428 +0000 UTC m=+0.151244265 container start 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:37:53 compute-0 NetworkManager[49805]: <info>  [1772275073.7432] manager: (tapffaef000-52): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.743942129 +0000 UTC m=+0.156233966 container attach 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:37:53 compute-0 condescending_mayer[364529]: 167 167
Feb 28 10:37:53 compute-0 systemd[1]: libpod-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.759271544 +0000 UTC m=+0.171563381 container died 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.759 243456 INFO nova.virt.libvirt.driver [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance destroyed successfully.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.762 243456 DEBUG nova.objects.instance [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 555d381e-ed8a-4a73-9f43-f79c0b0a0afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : haproxy version is 2.8.14-c23fe91
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [NOTICE]   (361969) : path to executable is /usr/sbin/haproxy
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : Exiting Master process...
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : Exiting Master process...
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [ALERT]    (361969) : Current worker (361973) exited with code 143 (Terminated)
Feb 28 10:37:53 compute-0 neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d[361939]: [WARNING]  (361969) : All workers exited. Exiting... (0)
Feb 28 10:37:53 compute-0 systemd[1]: libpod-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.774 243456 DEBUG nova.virt.libvirt.vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:53 compute-0 podman[364533]: 2026-02-28 10:37:53.774492425 +0000 UTC m=+0.068664966 container died 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.775 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.775 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.776 243456 DEBUG os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.778 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c427fe-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.790 243456 INFO os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2a:81,bridge_name='br-int',has_traffic_filtering=True,id=b8c427fe-78c5-4d60-9c44-68985f50b598,network=Network(e68f9d98-c075-4ed0-b1ee-ef05de1c055d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c427fe-78')
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.792 243456 DEBUG nova.virt.libvirt.vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:36:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1102866438',display_name='tempest-TestGettingAddress-server-1102866438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1102866438',id=133,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEjJUmVDkOZGXP9YHWwcrQX3rRmUtQjwiGdrqFm8v//iM8ddQ6Oygrz9eS7/NNUCVpxPFFxhbxOwQ9+w5wIv/8LuNuf9Xfg2nwokqVt8ivVWLTyGZCoHMk8LXyLAebo5og==',key_name='tempest-TestGettingAddress-95527438',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:36:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-qiz0sgp7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:36:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=555d381e-ed8a-4a73-9f43-f79c0b0a0afd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.792 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.793 243456 DEBUG nova.network.os_vif_util [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.794 243456 DEBUG os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.797 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffaef000-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-524088f97a17eeda9184aa2c185be8645e35f5e61298abfe94e599e260c4602a-merged.mount: Deactivated successfully.
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.802 243456 INFO os_vif [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b7:5a,bridge_name='br-int',has_traffic_filtering=True,id=ffaef000-523c-4637-99e6-2cc96b907c15,network=Network(bf5aa4f8-b85b-496f-a7ad-1ab36250968c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapffaef000-52')
Feb 28 10:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21-userdata-shm.mount: Deactivated successfully.
Feb 28 10:37:53 compute-0 podman[364533]: 2026-02-28 10:37:53.816016781 +0000 UTC m=+0.110189302 container cleanup 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 28 10:37:53 compute-0 podman[364490]: 2026-02-28 10:37:53.820897069 +0000 UTC m=+0.233188906 container remove 61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 10:37:53 compute-0 systemd[1]: libpod-conmon-61880a85aca61d5c720743a4ff06fd0f218efc00d727c040b4a168fa5c887264.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 systemd[1]: libpod-conmon-86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21.scope: Deactivated successfully.
Feb 28 10:37:53 compute-0 podman[364611]: 2026-02-28 10:37:53.887086544 +0000 UTC m=+0.048546506 container remove 86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.891 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[38e8ebdf-712e-4f20-bdf7-d72769d944a7]: (4, ('Sat Feb 28 10:37:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d (86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21)\n86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21\nSat Feb 28 10:37:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d (86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21)\n86d2c7506b713ad0a27ca041ce0cfafbde6c23085b1109924b25947519efdc21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c83dea-4e94-448b-bb6a-16aa5100e83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.895 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f9d98-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:53 compute-0 kernel: tape68f9d98-c0: left promiscuous mode
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 nova_compute[243452]: 2026-02-28 10:37:53.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[386cd538-ffd4-4660-af6f-9103333a3514]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[526c98e8-8cb7-4903-bd22-876529b54f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.930 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2a94e1-1b39-4c8b-971c-f6aea4bf0ebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.959 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[50383623-89fb-4304-8504-00f1129d2579]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650641, 'reachable_time': 18882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364638, 'error': None, 'target': 'ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.961 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e68f9d98-c075-4ed0-b1ee-ef05de1c055d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.961 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5237a2ba-286b-4a71-bfe4-2d9f1c9ad2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.962 156681 INFO neutron.agent.ovn.metadata.agent [-] Port ffaef000-523c-4637-99e6-2cc96b907c15 in datapath bf5aa4f8-b85b-496f-a7ad-1ab36250968c unbound from our chassis
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.964 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.965 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea17539-5d52-49ac-ac04-271a7e4c9644]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:53.965 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c namespace which is not needed anymore
Feb 28 10:37:53 compute-0 podman[364636]: 2026-02-28 10:37:53.99142852 +0000 UTC m=+0.048984819 container create c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-377aba4e407f3a2f65f2a4eb7e0e22e21a70346077c645cb2f77ea779d688910-merged.mount: Deactivated successfully.
Feb 28 10:37:54 compute-0 systemd[1]: run-netns-ovnmeta\x2de68f9d98\x2dc075\x2d4ed0\x2db1ee\x2def05de1c055d.mount: Deactivated successfully.
Feb 28 10:37:54 compute-0 systemd[1]: Started libpod-conmon-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope.
Feb 28 10:37:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:37:54 compute-0 podman[364636]: 2026-02-28 10:37:53.967702598 +0000 UTC m=+0.025258967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:37:54 compute-0 podman[364636]: 2026-02-28 10:37:54.088189821 +0000 UTC m=+0.145746120 container init c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:37:54 compute-0 podman[364636]: 2026-02-28 10:37:54.097764812 +0000 UTC m=+0.155321111 container start c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.099 243456 INFO nova.virt.libvirt.driver [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deleting instance files /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_del
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.100 243456 INFO nova.virt.libvirt.driver [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deletion of /var/lib/nova/instances/555d381e-ed8a-4a73-9f43-f79c0b0a0afd_del complete
Feb 28 10:37:54 compute-0 podman[364636]: 2026-02-28 10:37:54.106941312 +0000 UTC m=+0.164497601 container attach c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:37:54 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : haproxy version is 2.8.14-c23fe91
Feb 28 10:37:54 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [NOTICE]   (362054) : path to executable is /usr/sbin/haproxy
Feb 28 10:37:54 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [WARNING]  (362054) : Exiting Master process...
Feb 28 10:37:54 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [ALERT]    (362054) : Current worker (362056) exited with code 143 (Terminated)
Feb 28 10:37:54 compute-0 neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c[362050]: [WARNING]  (362054) : All workers exited. Exiting... (0)
Feb 28 10:37:54 compute-0 systemd[1]: libpod-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope: Deactivated successfully.
Feb 28 10:37:54 compute-0 podman[364674]: 2026-02-28 10:37:54.139243827 +0000 UTC m=+0.062369218 container died 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.151 243456 INFO nova.compute.manager [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG oslo.service.loopingcall [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.152 243456 DEBUG nova.network.neutron [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8955cce884765a04034c460f9c74e35c1e8bf35ef5a4315568d401093bba957d-merged.mount: Deactivated successfully.
Feb 28 10:37:54 compute-0 podman[364674]: 2026-02-28 10:37:54.168904737 +0000 UTC m=+0.092030118 container cleanup 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:37:54 compute-0 systemd[1]: libpod-conmon-54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e.scope: Deactivated successfully.
Feb 28 10:37:54 compute-0 podman[364702]: 2026-02-28 10:37:54.25017677 +0000 UTC m=+0.055884254 container remove 54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.255 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f6dff-e7e2-4d10-828e-b51a2c8839c5]: (4, ('Sat Feb 28 10:37:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c (54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e)\n54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e\nSat Feb 28 10:37:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c (54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e)\n54b2fcd4313d801df064b80da8a90184b29534447d516735b8f6b2b4e1445d6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.258 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[72b44cd8-1b62-4f2d-be2c-201a3f3c61cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.259 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5aa4f8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:54 compute-0 kernel: tapbf5aa4f8-b0: left promiscuous mode
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.279 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[55333be1-d50b-4882-b20e-474a126bd171]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.290 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8a6de2-81aa-4b37-a390-478e71e25e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.292 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8168eb-b2c0-4d36-836a-bf796a8734e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.309 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4fb647-ef1e-44f6-a2bb-df5b69c0fb4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650740, 'reachable_time': 21107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364727, 'error': None, 'target': 'ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dbf5aa4f8\x2db85b\x2d496f\x2da7ad\x2d1ab36250968c.mount: Deactivated successfully.
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.311 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf5aa4f8-b85b-496f-a7ad-1ab36250968c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:37:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:54.311 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[25427bfa-40d5-49bd-b1d0-78f26cb7f514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:37:54 compute-0 ceph-mon[76304]: pgmap v2206: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 33 KiB/s wr, 61 op/s
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.574 243456 DEBUG nova.network.neutron [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.603 243456 INFO nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Took 1.29 seconds to deallocate network for instance.
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.647 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.647 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG oslo_concurrency.lockutils [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.648 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.649 243456 DEBUG nova.compute.manager [req-7717b20c-0876-4aa3-96c8-28d8358f7307 req-d500b584-4af8-41a7-96fb-0f38a7704bb2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.650 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.651 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:54 compute-0 nova_compute[243452]: 2026-02-28 10:37:54.744 243456 DEBUG oslo_concurrency.processutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:54 compute-0 lvm[364789]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:37:54 compute-0 lvm[364789]: VG ceph_vg0 finished
Feb 28 10:37:54 compute-0 lvm[364791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:37:54 compute-0 lvm[364791]: VG ceph_vg1 finished
Feb 28 10:37:54 compute-0 lvm[364792]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:37:54 compute-0 lvm[364792]: VG ceph_vg2 finished
Feb 28 10:37:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 21 KiB/s wr, 69 op/s
Feb 28 10:37:54 compute-0 serene_hermann[364668]: {}
Feb 28 10:37:55 compute-0 systemd[1]: libpod-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Deactivated successfully.
Feb 28 10:37:55 compute-0 podman[364636]: 2026-02-28 10:37:55.006151124 +0000 UTC m=+1.063707413 container died c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:37:55 compute-0 systemd[1]: libpod-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Consumed 1.246s CPU time.
Feb 28 10:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5622cb74c664c425ef3752326d6bf5d49a3fe415180ab4b30d13c0f5844194c4-merged.mount: Deactivated successfully.
Feb 28 10:37:55 compute-0 podman[364636]: 2026-02-28 10:37:55.053471725 +0000 UTC m=+1.111028004 container remove c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:37:55 compute-0 systemd[1]: libpod-conmon-c745accc01e0d52c8a24e6d8b75e550e99283c4b87a6b82fc20c51d147e4c549.scope: Deactivated successfully.
Feb 28 10:37:55 compute-0 sudo[364452]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:37:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:37:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:55 compute-0 sudo[364824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:37:55 compute-0 sudo[364824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:37:55 compute-0 sudo[364824]: pam_unix(sudo:session): session closed for user root
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.257 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.261 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.262 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.262 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.263 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] No waiting events found dispatching network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.263 243456 WARNING nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received unexpected event network-vif-plugged-4b48043a-8194-4cf4-bd7f-1c138d7960ac for instance with vm_state deleted and task_state None.
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.264 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.265 243456 DEBUG nova.compute.manager [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing instance network info cache due to event network-changed-b8c427fe-78c5-4d60-9c44-68985f50b598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.265 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.266 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.266 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Refreshing network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:37:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423195585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.329 243456 DEBUG oslo_concurrency.processutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.336 243456 DEBUG nova.compute.provider_tree [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.365 243456 DEBUG nova.scheduler.client.report [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.475 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.587 243456 INFO nova.scheduler.client.report [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance b3a7b19c-ebb2-442d-bac0-66e1b9e2655c
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.735 243456 DEBUG oslo_concurrency.lockutils [None req-beb1a871-0d56-4f86-8669-41d4c878ed85 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.930 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updated VIF entry in instance network info cache for port 4b48043a-8194-4cf4-bd7f-1c138d7960ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.930 243456 DEBUG nova.network.neutron [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Updating instance_info_cache with network_info: [{"id": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "address": "fa:16:3e:e3:60:12", "network": {"id": "f91ad996-44c8-45ac-a5d6-208982ca2ce1", "bridge": "br-int", "label": "tempest-network-smoke--334607792", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b48043a-81", "ovs_interfaceid": "4b48043a-8194-4cf4-bd7f-1c138d7960ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:55 compute-0 nova_compute[243452]: 2026-02-28 10:37:55.998 243456 DEBUG oslo_concurrency.lockutils [req-61a5f7d1-451a-4fbb-8c95-4ab16afa428c req-cf1a906c-7890-48de-9919-f4795542af0c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b3a7b19c-ebb2-442d-bac0-66e1b9e2655c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:56 compute-0 ceph-mon[76304]: pgmap v2207: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 21 KiB/s wr, 69 op/s
Feb 28 10:37:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:37:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/423195585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.124208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076124252, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 401, "num_deletes": 251, "total_data_size": 249007, "memory_usage": 256360, "flush_reason": "Manual Compaction"}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076129281, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 246630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47001, "largest_seqno": 47401, "table_properties": {"data_size": 244229, "index_size": 501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6185, "raw_average_key_size": 19, "raw_value_size": 239345, "raw_average_value_size": 741, "num_data_blocks": 22, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275064, "oldest_key_time": 1772275064, "file_creation_time": 1772275076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5143 microseconds, and 1722 cpu microseconds.
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.129345) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 246630 bytes OK
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.129376) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131128) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131149) EVENT_LOG_v1 {"time_micros": 1772275076131142, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131174) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 246429, prev total WAL file size 246429, number of live WAL files 2.
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(240KB)], [107(11MB)]
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076131806, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11876758, "oldest_snapshot_seqno": -1}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6826 keys, 10089030 bytes, temperature: kUnknown
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076191759, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10089030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10042519, "index_size": 28351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 177980, "raw_average_key_size": 26, "raw_value_size": 9919562, "raw_average_value_size": 1453, "num_data_blocks": 1108, "num_entries": 6826, "num_filter_entries": 6826, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.192307) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10089030 bytes
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.193953) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 168.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(89.1) write-amplify(40.9) OK, records in: 7339, records dropped: 513 output_compression: NoCompression
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.193987) EVENT_LOG_v1 {"time_micros": 1772275076193971, "job": 64, "event": "compaction_finished", "compaction_time_micros": 60071, "compaction_time_cpu_micros": 37374, "output_level": 6, "num_output_files": 1, "total_output_size": 10089030, "num_input_records": 7339, "num_output_records": 6826, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076194279, "job": 64, "event": "table_file_deletion", "file_number": 109}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275076196602, "job": 64, "event": "table_file_deletion", "file_number": 107}
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.131550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:37:56.196673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.239 243456 DEBUG nova.network.neutron [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.255 243456 INFO nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Took 2.10 seconds to deallocate network for instance.
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.299 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.299 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.353 243456 DEBUG oslo_concurrency.processutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.737 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Received event network-vif-deleted-4b48043a-8194-4cf4-bd7f-1c138d7960ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.737 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.738 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.739 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-b8c427fe-78c5-4d60-9c44-68985f50b598 for instance with vm_state deleted and task_state None.
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.740 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-unplugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state deleted and task_state None.
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.741 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG oslo_concurrency.lockutils [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] No waiting events found dispatching network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.742 243456 WARNING nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received unexpected event network-vif-plugged-ffaef000-523c-4637-99e6-2cc96b907c15 for instance with vm_state deleted and task_state None.
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.743 243456 DEBUG nova.compute.manager [req-42f4dfbf-a1a3-4b25-8c97-883b9ffa85e2 req-c43386de-cfff-485e-a598-71590d3b18c3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-deleted-ffaef000-523c-4637-99e6-2cc96b907c15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.778 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updated VIF entry in instance network info cache for port b8c427fe-78c5-4d60-9c44-68985f50b598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.778 243456 DEBUG nova.network.neutron [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Updating instance_info_cache with network_info: [{"id": "b8c427fe-78c5-4d60-9c44-68985f50b598", "address": "fa:16:3e:d6:2a:81", "network": {"id": "e68f9d98-c075-4ed0-b1ee-ef05de1c055d", "bridge": "br-int", "label": "tempest-network-smoke--2127391046", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c427fe-78", "ovs_interfaceid": "b8c427fe-78c5-4d60-9c44-68985f50b598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ffaef000-523c-4637-99e6-2cc96b907c15", "address": "fa:16:3e:28:b7:5a", "network": {"id": "bf5aa4f8-b85b-496f-a7ad-1ab36250968c", "bridge": "br-int", "label": "tempest-network-smoke--2030752729", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:b75a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapffaef000-52", "ovs_interfaceid": "ffaef000-523c-4637-99e6-2cc96b907c15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.800 243456 DEBUG oslo_concurrency.lockutils [req-e0c1c304-ce7e-4eb8-a672-0f97b7b35622 req-5a620ef8-7a31-41c4-b138-3baa7db8700c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-555d381e-ed8a-4a73-9f43-f79c0b0a0afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:37:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:37:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366421356' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.930 243456 DEBUG oslo_concurrency.processutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.937 243456 DEBUG nova.compute.provider_tree [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.952 243456 DEBUG nova.scheduler.client.report [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:37:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 204 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 65 op/s
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.972 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:56 compute-0 nova_compute[243452]: 2026-02-28 10:37:56.993 243456 INFO nova.scheduler.client.report [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd
Feb 28 10:37:57 compute-0 nova_compute[243452]: 2026-02-28 10:37:57.055 243456 DEBUG oslo_concurrency.lockutils [None req-0dac6b8b-3019-473c-a882-9411868d42e2 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "555d381e-ed8a-4a73-9f43-f79c0b0a0afd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1366421356' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:37:57 compute-0 nova_compute[243452]: 2026-02-28 10:37:57.331 243456 DEBUG nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Received event network-vif-deleted-b8c427fe-78c5-4d60-9c44-68985f50b598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:37:57 compute-0 nova_compute[243452]: 2026-02-28 10:37:57.332 243456 INFO nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Neutron deleted interface b8c427fe-78c5-4d60-9c44-68985f50b598; detaching it from the instance and deleting it from the info cache
Feb 28 10:37:57 compute-0 nova_compute[243452]: 2026-02-28 10:37:57.332 243456 DEBUG nova.network.neutron [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:37:57 compute-0 nova_compute[243452]: 2026-02-28 10:37:57.336 243456 DEBUG nova.compute.manager [req-2719855d-db36-451e-9308-e65794438866 req-8707ac56-190a-413b-a4b4-ca47ff68fd07 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Detach interface failed, port_id=b8c427fe-78c5-4d60-9c44-68985f50b598, reason: Instance 555d381e-ed8a-4a73-9f43-f79c0b0a0afd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:37:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.876 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:37:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:37:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:37:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:37:58 compute-0 ceph-mon[76304]: pgmap v2208: 305 pgs: 305 active+clean; 204 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 65 op/s
Feb 28 10:37:58 compute-0 nova_compute[243452]: 2026-02-28 10:37:58.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:37:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:37:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 153 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 20 KiB/s wr, 88 op/s
Feb 28 10:37:58 compute-0 nova_compute[243452]: 2026-02-28 10:37:58.997 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275063.9958286, cbf94077-46c2-457d-8486-25f3dd0517b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:37:58 compute-0 nova_compute[243452]: 2026-02-28 10:37:58.998 243456 INFO nova.compute.manager [-] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] VM Stopped (Lifecycle Event)
Feb 28 10:37:59 compute-0 nova_compute[243452]: 2026-02-28 10:37:59.023 243456 DEBUG nova.compute.manager [None req-f7a2e2be-e8ae-4a75-b743-ddb6950af602 - - - - - -] [instance: cbf94077-46c2-457d-8486-25f3dd0517b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:37:59 compute-0 nova_compute[243452]: 2026-02-28 10:37:59.519 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:00 compute-0 ceph-mon[76304]: pgmap v2209: 305 pgs: 305 active+clean; 153 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 20 KiB/s wr, 88 op/s
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 18 KiB/s wr, 71 op/s
Feb 28 10:38:01 compute-0 nova_compute[243452]: 2026-02-28 10:38:01.397 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:01 compute-0 nova_compute[243452]: 2026-02-28 10:38:01.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:02 compute-0 nova_compute[243452]: 2026-02-28 10:38:02.085 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275067.0843432, ae0acd9c-7c2d-4e8b-84a4-d577eff31d02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:02 compute-0 nova_compute[243452]: 2026-02-28 10:38:02.086 243456 INFO nova.compute.manager [-] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] VM Stopped (Lifecycle Event)
Feb 28 10:38:02 compute-0 nova_compute[243452]: 2026-02-28 10:38:02.107 243456 DEBUG nova.compute.manager [None req-d522a948-91b5-4cf5-9aad-8e231bedb056 - - - - - -] [instance: ae0acd9c-7c2d-4e8b-84a4-d577eff31d02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:02 compute-0 ceph-mon[76304]: pgmap v2210: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 18 KiB/s wr, 71 op/s
Feb 28 10:38:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.3 KiB/s wr, 55 op/s
Feb 28 10:38:03 compute-0 nova_compute[243452]: 2026-02-28 10:38:03.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:04 compute-0 ceph-mon[76304]: pgmap v2211: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.3 KiB/s wr, 55 op/s
Feb 28 10:38:04 compute-0 nova_compute[243452]: 2026-02-28 10:38:04.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Feb 28 10:38:06 compute-0 ceph-mon[76304]: pgmap v2212: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 KiB/s wr, 53 op/s
Feb 28 10:38:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1023 B/s wr, 44 op/s
Feb 28 10:38:07 compute-0 nova_compute[243452]: 2026-02-28 10:38:07.426 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275072.424708, b3a7b19c-ebb2-442d-bac0-66e1b9e2655c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:07 compute-0 nova_compute[243452]: 2026-02-28 10:38:07.426 243456 INFO nova.compute.manager [-] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] VM Stopped (Lifecycle Event)
Feb 28 10:38:07 compute-0 nova_compute[243452]: 2026-02-28 10:38:07.453 243456 DEBUG nova.compute.manager [None req-36d2661b-b219-45bf-9a1d-76467cc932a0 - - - - - -] [instance: b3a7b19c-ebb2-442d-bac0-66e1b9e2655c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:08 compute-0 ceph-mon[76304]: pgmap v2213: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1023 B/s wr, 44 op/s
Feb 28 10:38:08 compute-0 nova_compute[243452]: 2026-02-28 10:38:08.756 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275073.7554436, 555d381e-ed8a-4a73-9f43-f79c0b0a0afd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:08 compute-0 nova_compute[243452]: 2026-02-28 10:38:08.756 243456 INFO nova.compute.manager [-] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] VM Stopped (Lifecycle Event)
Feb 28 10:38:08 compute-0 nova_compute[243452]: 2026-02-28 10:38:08.781 243456 DEBUG nova.compute.manager [None req-fd34f180-230d-412f-b82a-e0572084e007 - - - - - -] [instance: 555d381e-ed8a-4a73-9f43-f79c0b0a0afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:08 compute-0 nova_compute[243452]: 2026-02-28 10:38:08.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 682 B/s wr, 31 op/s
Feb 28 10:38:09 compute-0 nova_compute[243452]: 2026-02-28 10:38:09.522 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:10 compute-0 ceph-mon[76304]: pgmap v2214: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 682 B/s wr, 31 op/s
Feb 28 10:38:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:12 compute-0 podman[364875]: 2026-02-28 10:38:12.161014492 +0000 UTC m=+0.088537548 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 10:38:12 compute-0 podman[364874]: 2026-02-28 10:38:12.188890201 +0000 UTC m=+0.117581250 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 10:38:12 compute-0 ceph-mon[76304]: pgmap v2215: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:13 compute-0 nova_compute[243452]: 2026-02-28 10:38:13.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:14 compute-0 ceph-mon[76304]: pgmap v2216: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:14 compute-0 nova_compute[243452]: 2026-02-28 10:38:14.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:16 compute-0 ceph-mon[76304]: pgmap v2217: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:18 compute-0 ceph-mon[76304]: pgmap v2218: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.602 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.603 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.622 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.737 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.738 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.746 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.746 243456 INFO nova.compute.claims [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:18 compute-0 nova_compute[243452]: 2026-02-28 10:38:18.899 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474547833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.465 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.473 243456 DEBUG nova.compute.provider_tree [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.499 243456 DEBUG nova.scheduler.client.report [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.557 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.558 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.616 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.617 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.655 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.676 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.760 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.763 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.763 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating image(s)
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.795 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.815 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.839 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.843 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.900 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.901 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.902 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.903 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.930 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:19 compute-0 nova_compute[243452]: 2026-02-28 10:38:19.934 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 99d76f35-045f-485f-8e24-62b0c9293154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.035 243456 DEBUG nova.policy [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.175 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 99d76f35-045f-485f-8e24-62b0c9293154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:20 compute-0 ceph-mon[76304]: pgmap v2219: 305 pgs: 305 active+clean; 153 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail
Feb 28 10:38:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2474547833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.248 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.340 243456 DEBUG nova.objects.instance [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.356 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.356 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Ensure instance console log exists: /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.357 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.358 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.358 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.926 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Successfully updated port: fc0b52d5-3577-4d09-bac1-192cb6c80057 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.942 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.943 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:20 compute-0 nova_compute[243452]: 2026-02-28 10:38:20.943 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:38:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 164 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 421 KiB/s wr, 12 op/s
Feb 28 10:38:21 compute-0 nova_compute[243452]: 2026-02-28 10:38:21.039 243456 DEBUG nova.compute.manager [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:21 compute-0 nova_compute[243452]: 2026-02-28 10:38:21.040 243456 DEBUG nova.compute.manager [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:21 compute-0 nova_compute[243452]: 2026-02-28 10:38:21.040 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:21 compute-0 nova_compute[243452]: 2026-02-28 10:38:21.289 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.170 243456 DEBUG nova.network.neutron [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance network_info: |[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.206 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.207 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.210 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start _get_guest_xml network_info=[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.217 243456 WARNING nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.224 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.225 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.228 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.228 243456 DEBUG nova.virt.libvirt.host [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.229 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.230 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.231 243456 DEBUG nova.virt.hardware [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.234 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:22 compute-0 ceph-mon[76304]: pgmap v2220: 305 pgs: 305 active+clean; 164 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 421 KiB/s wr, 12 op/s
Feb 28 10:38:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862684305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.811 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.841 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:22 compute-0 nova_compute[243452]: 2026-02-28 10:38:22.847 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 176 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 532 KiB/s wr, 13 op/s
Feb 28 10:38:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1862684305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692655547' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.429 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.433 243456 DEBUG nova.virt.libvirt.vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:19Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.433 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.435 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.437 243456 DEBUG nova.objects.instance [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.454 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <uuid>99d76f35-045f-485f-8e24-62b0c9293154</uuid>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <name>instance-00000088</name>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1194760241</nova:name>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:38:22</nova:creationTime>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <nova:port uuid="fc0b52d5-3577-4d09-bac1-192cb6c80057">
Feb 28 10:38:23 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <system>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="serial">99d76f35-045f-485f-8e24-62b0c9293154</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="uuid">99d76f35-045f-485f-8e24-62b0c9293154</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </system>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <os>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </os>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <features>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </features>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/99d76f35-045f-485f-8e24-62b0c9293154_disk">
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/99d76f35-045f-485f-8e24-62b0c9293154_disk.config">
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:23 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:68:09:7e"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <target dev="tapfc0b52d5-35"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/console.log" append="off"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <video>
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </video>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:38:23 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:38:23 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:38:23 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:38:23 compute-0 nova_compute[243452]: </domain>
Feb 28 10:38:23 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.456 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Preparing to wait for external event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.457 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.457 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.458 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.459 243456 DEBUG nova.virt.libvirt.vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:19Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.459 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.461 243456 DEBUG nova.network.os_vif_util [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.461 243456 DEBUG os_vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.463 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.464 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.470 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc0b52d5-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.471 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc0b52d5-35, col_values=(('external_ids', {'iface-id': 'fc0b52d5-3577-4d09-bac1-192cb6c80057', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:09:7e', 'vm-uuid': '99d76f35-045f-485f-8e24-62b0c9293154'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.473 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:23 compute-0 NetworkManager[49805]: <info>  [1772275103.4747] manager: (tapfc0b52d5-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.477 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.481 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.482 243456 INFO os_vif [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.544 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.544 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.545 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:68:09:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.545 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Using config drive
Feb 28 10:38:23 compute-0 nova_compute[243452]: 2026-02-28 10:38:23.573 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:24 compute-0 ceph-mon[76304]: pgmap v2221: 305 pgs: 305 active+clean; 176 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 532 KiB/s wr, 13 op/s
Feb 28 10:38:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3692655547' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:24 compute-0 nova_compute[243452]: 2026-02-28 10:38:24.557 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:24 compute-0 nova_compute[243452]: 2026-02-28 10:38:24.952 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Creating config drive at /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config
Feb 28 10:38:24 compute-0 nova_compute[243452]: 2026-02-28 10:38:24.958 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp97f7f97t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.093 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp97f7f97t" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.134 243456 DEBUG nova.storage.rbd_utils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 99d76f35-045f-485f-8e24-62b0c9293154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.138 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config 99d76f35-045f-485f-8e24-62b0c9293154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.309 243456 DEBUG oslo_concurrency.processutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config 99d76f35-045f-485f-8e24-62b0c9293154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.310 243456 INFO nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deleting local config drive /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154/disk.config because it was imported into RBD.
Feb 28 10:38:25 compute-0 kernel: tapfc0b52d5-35: entered promiscuous mode
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.3812] manager: (tapfc0b52d5-35): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Feb 28 10:38:25 compute-0 ovn_controller[146846]: 2026-02-28T10:38:25Z|01408|binding|INFO|Claiming lport fc0b52d5-3577-4d09-bac1-192cb6c80057 for this chassis.
Feb 28 10:38:25 compute-0 ovn_controller[146846]: 2026-02-28T10:38:25Z|01409|binding|INFO|fc0b52d5-3577-4d09-bac1-192cb6c80057: Claiming fa:16:3e:68:09:7e 10.100.0.7
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.423 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '99d76f35-045f-485f-8e24-62b0c9293154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.426 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 bound to our chassis
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.429 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:25 compute-0 systemd-udevd[365243]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:25 compute-0 ovn_controller[146846]: 2026-02-28T10:38:25Z|01410|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 ovn-installed in OVS
Feb 28 10:38:25 compute-0 ovn_controller[146846]: 2026-02-28T10:38:25Z|01411|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 up in Southbound
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c98c7463-1620-4fb2-9f85-5546cf3f0210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 systemd-machined[209480]: New machine qemu-169-instance-00000088.
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.443 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d6bc14-71 in ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.446 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d6bc14-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.446 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b39af200-4967-4b88-a6ed-701b8eaa8023]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1362a3cc-ba84-4e0d-8278-1d1b1a6ecfe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.4518] device (tapfc0b52d5-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.4527] device (tapfc0b52d5-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:38:25 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000088.
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.464 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb928df-7406-4a8f-b214-2b9b8973312e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.481 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bf5ff7-a085-4e1e-9ab2-681e3e0ea595]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.512 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe03181-d1fb-498f-9f0c-95f813f08be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.519 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaebd37-eb34-46d3-a602-984d5c4a8a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.5207] manager: (tap48d6bc14-70): new Veth device (/org/freedesktop/NetworkManager/Devices/583)
Feb 28 10:38:25 compute-0 systemd-udevd[365247]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.555 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[43b0f5e7-6721-4e16-ae52-79ab4d9c8c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.560 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[dff4fbcc-7eff-46c0-89fa-2cddbf1ad025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.5898] device (tap48d6bc14-70): carrier: link connected
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.595 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ea55f712-9f08-4551-ad1e-4f19082491d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.615 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e36360e-df0d-4738-add5-108ca55f7903]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365277, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.635 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af729dca-9bf8-4691-a61c-d1e61f2eec1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:e19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661287, 'tstamp': 661287}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365278, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.659 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d8aae10b-5360-4439-b8a7-6de7e46311c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661287, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365279, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.695 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d007778b-4c14-4896-9bd5-ec059c2f1fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.768 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42e57607-31dc-49d1-8377-566d2bda0c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.770 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.770 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.771 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d6bc14-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 kernel: tap48d6bc14-70: entered promiscuous mode
Feb 28 10:38:25 compute-0 NetworkManager[49805]: <info>  [1772275105.7754] manager: (tap48d6bc14-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.778 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.780 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d6bc14-70, col_values=(('external_ids', {'iface-id': '6091ce53-d3cc-490d-8054-526a8c4039b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:25 compute-0 ovn_controller[146846]: 2026-02-28T10:38:25Z|01412|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.784 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9694b499-2ad3-4056-923c-e1844bff6f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.786 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:38:25 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:25.787 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'env', 'PROCESS_TAG=haproxy-48d6bc14-7625-4769-ad1d-6c202dc94953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d6bc14-7625-4769-ad1d-6c202dc94953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.788 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.921 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275105.9211786, 99d76f35-045f-485f-8e24-62b0c9293154 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.922 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Started (Lifecycle Event)
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.953 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.961 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275105.9226139, 99d76f35-045f-485f-8e24-62b0c9293154 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.962 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Paused (Lifecycle Event)
Feb 28 10:38:25 compute-0 nova_compute[243452]: 2026-02-28 10:38:25.994 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.000 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.026 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.113 243456 DEBUG nova.compute.manager [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.114 243456 DEBUG oslo_concurrency.lockutils [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.115 243456 DEBUG nova.compute.manager [req-cf3ad46e-5315-45f1-9145-a9b2362e49f7 req-0691b9db-13b0-43c1-805f-1b04f328ac98 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Processing event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.115 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.120 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275106.11997, 99d76f35-045f-485f-8e24-62b0c9293154 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.120 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Resumed (Lifecycle Event)
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.122 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.125 243456 INFO nova.virt.libvirt.driver [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance spawned successfully.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.125 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.155 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.162 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.167 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.168 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.169 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.169 243456 DEBUG nova.virt.libvirt.driver [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.197 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.239 243456 INFO nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 6.48 seconds to spawn the instance on the hypervisor.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.240 243456 DEBUG nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:26 compute-0 podman[365353]: 2026-02-28 10:38:26.280587121 +0000 UTC m=+0.089335671 container create 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 28 10:38:26 compute-0 ceph-mon[76304]: pgmap v2222: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.302 243456 INFO nova.compute.manager [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 7.61 seconds to build instance.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.320 243456 DEBUG oslo_concurrency.lockutils [None req-ee4916e9-ee9a-4ea1-9515-174d4846f852 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:26 compute-0 podman[365353]: 2026-02-28 10:38:26.230955925 +0000 UTC m=+0.039704525 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:38:26 compute-0 systemd[1]: Started libpod-conmon-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope.
Feb 28 10:38:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3142c94eb22a5779a4e33f0f5abf6a497c89eeb647fdd906caceeb4b98a92608/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:26 compute-0 podman[365353]: 2026-02-28 10:38:26.378283947 +0000 UTC m=+0.187032597 container init 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:38:26 compute-0 podman[365353]: 2026-02-28 10:38:26.385573294 +0000 UTC m=+0.194321854 container start 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:38:26 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : New worker (365374) forked
Feb 28 10:38:26 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : Loading success.
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.850 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.853 243456 DEBUG nova.network.neutron [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:26 compute-0 nova_compute[243452]: 2026-02-28 10:38:26.957 243456 DEBUG oslo_concurrency.lockutils [req-792d3b42-0b1e-4395-856f-f8f5fcec18bc req-1f66332a-fead-4657-85b5-71c85a407fd5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.230 243456 DEBUG nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.232 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.232 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.233 243456 DEBUG oslo_concurrency.lockutils [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.234 243456 DEBUG nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.234 243456 WARNING nova.compute.manager [req-0f1e3581-f87f-41e0-b317-20f8954a03e7 req-395dcbc2-8f71-4df1-93e4-bbb72b4d5d49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state active and task_state None.
Feb 28 10:38:28 compute-0 ceph-mon[76304]: pgmap v2223: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:28 compute-0 nova_compute[243452]: 2026-02-28 10:38:28.474 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:38:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:38:29
Feb 28 10:38:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:38:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:38:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Feb 28 10:38:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:38:29 compute-0 nova_compute[243452]: 2026-02-28 10:38:29.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.664 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:5d 2001:db8:0:1:f816:3eff:fe53:b75d 2001:db8::f816:3eff:fe53:b75d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe53:b75d/64 2001:db8::f816:3eff:fe53:b75d/64', 'neutron:device_id': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a) old=Port_Binding(mac=['fa:16:3e:53:b7:5d 2001:db8::f816:3eff:fe53:b75d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:b75d/64', 'neutron:device_id': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.667 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d updated
Feb 28 10:38:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.670 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 685f3a92-853c-417a-a00b-ba5c70b02f2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:38:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:29.671 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[490cfd71-5227-47f7-92e2-a73dd99121dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:30 compute-0 ceph-mon[76304]: pgmap v2224: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:38:30 compute-0 nova_compute[243452]: 2026-02-28 10:38:30.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:30 compute-0 NetworkManager[49805]: <info>  [1772275110.7062] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Feb 28 10:38:30 compute-0 NetworkManager[49805]: <info>  [1772275110.7083] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Feb 28 10:38:30 compute-0 nova_compute[243452]: 2026-02-28 10:38:30.763 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:30 compute-0 ovn_controller[146846]: 2026-02-28T10:38:30Z|01413|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 10:38:30 compute-0 nova_compute[243452]: 2026-02-28 10:38:30.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:38:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.792 243456 DEBUG nova.compute.manager [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.792 243456 DEBUG nova.compute.manager [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.793 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.950 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.951 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.952 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.953 243456 INFO nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Terminating instance
Feb 28 10:38:31 compute-0 nova_compute[243452]: 2026-02-28 10:38:31.954 243456 DEBUG nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:38:31 compute-0 kernel: tapfc0b52d5-35 (unregistering): left promiscuous mode
Feb 28 10:38:31 compute-0 NetworkManager[49805]: <info>  [1772275111.9906] device (tapfc0b52d5-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 ovn_controller[146846]: 2026-02-28T10:38:32Z|01414|binding|INFO|Releasing lport fc0b52d5-3577-4d09-bac1-192cb6c80057 from this chassis (sb_readonly=0)
Feb 28 10:38:32 compute-0 ovn_controller[146846]: 2026-02-28T10:38:32Z|01415|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 down in Southbound
Feb 28 10:38:32 compute-0 ovn_controller[146846]: 2026-02-28T10:38:32Z|01416|binding|INFO|Removing iface tapfc0b52d5-35 ovn-installed in OVS
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.017 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '99d76f35-045f-485f-8e24-62b0c9293154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.018 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 unbound from our chassis
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.019 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d6bc14-7625-4769-ad1d-6c202dc94953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f7226da5-81a7-4b7b-b78c-ce39ec7ddc60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.020 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace which is not needed anymore
Feb 28 10:38:32 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Deactivated successfully.
Feb 28 10:38:32 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000088.scope: Consumed 6.437s CPU time.
Feb 28 10:38:32 compute-0 systemd-machined[209480]: Machine qemu-169-instance-00000088 terminated.
Feb 28 10:38:32 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : haproxy version is 2.8.14-c23fe91
Feb 28 10:38:32 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [NOTICE]   (365372) : path to executable is /usr/sbin/haproxy
Feb 28 10:38:32 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [WARNING]  (365372) : Exiting Master process...
Feb 28 10:38:32 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [ALERT]    (365372) : Current worker (365374) exited with code 143 (Terminated)
Feb 28 10:38:32 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[365368]: [WARNING]  (365372) : All workers exited. Exiting... (0)
Feb 28 10:38:32 compute-0 systemd[1]: libpod-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope: Deactivated successfully.
Feb 28 10:38:32 compute-0 podman[365409]: 2026-02-28 10:38:32.173775391 +0000 UTC m=+0.066844934 container died 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.188 243456 INFO nova.virt.libvirt.driver [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Instance destroyed successfully.
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.189 243456 DEBUG nova.objects.instance [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 99d76f35-045f-485f-8e24-62b0c9293154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.203 243456 DEBUG nova.virt.libvirt.vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1194760241',display_name='tempest-TestNetworkBasicOps-server-1194760241',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1194760241',id=136,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMpYWEfQZAeIGdf778Rcwj2RHzc0cF5hcgXyiKvBbK6MzrGJ4l9QqwgzOi+/5IsYucdlZIzO5dS5nkEroUnidPMu1WEcs4ZeEyF4/2vdCruZF9XUmldWgh1Mor2eQK+6g==',key_name='tempest-TestNetworkBasicOps-1495280618',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-0l56ndn2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=99d76f35-045f-485f-8e24-62b0c9293154,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.204 243456 DEBUG nova.network.os_vif_util [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.204 243456 DEBUG nova.network.os_vif_util [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.205 243456 DEBUG os_vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.209 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc0b52d5-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.211 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.215 243456 INFO os_vif [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')
Feb 28 10:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74-userdata-shm.mount: Deactivated successfully.
Feb 28 10:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3142c94eb22a5779a4e33f0f5abf6a497c89eeb647fdd906caceeb4b98a92608-merged.mount: Deactivated successfully.
Feb 28 10:38:32 compute-0 podman[365409]: 2026-02-28 10:38:32.266703273 +0000 UTC m=+0.159772806 container cleanup 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:32 compute-0 systemd[1]: libpod-conmon-54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74.scope: Deactivated successfully.
Feb 28 10:38:32 compute-0 ceph-mon[76304]: pgmap v2225: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:38:32 compute-0 podman[365469]: 2026-02-28 10:38:32.329881101 +0000 UTC m=+0.041432544 container remove 54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e471721f-e19a-46ff-a0cd-d3f138a2e2aa]: (4, ('Sat Feb 28 10:38:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74)\n54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74\nSat Feb 28 10:38:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74)\n54bc2bb7d2253d4f0682434a4226ec82657ad0dace0e35941c5ec9a790dd0e74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d3d56-2ff6-4e8c-b7fb-ad99ceb73700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.338 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 kernel: tap48d6bc14-70: left promiscuous mode
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[31d201a7-38ee-4b0c-a2c5-e9b2b0819620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.378 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0190e094-d2dd-40da-abdf-25a75aff8e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.380 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b05b6ed2-a5ba-451f-b8c9-f217959b1290]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc1e7af-4308-4fbc-b3ce-1ee95a0873bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661279, 'reachable_time': 26375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365482, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d48d6bc14\x2d7625\x2d4769\x2dad1d\x2d6c202dc94953.mount: Deactivated successfully.
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.399 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:38:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:32.399 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f0109-6685-470a-80ed-c097f426be9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.491 243456 INFO nova.virt.libvirt.driver [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deleting instance files /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154_del
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.492 243456 INFO nova.virt.libvirt.driver [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deletion of /var/lib/nova/instances/99d76f35-045f-485f-8e24-62b0c9293154_del complete
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.683 243456 INFO nova.compute.manager [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 0.73 seconds to destroy the instance on the hypervisor.
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.684 243456 DEBUG oslo.service.loopingcall [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.684 243456 DEBUG nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:38:32 compute-0 nova_compute[243452]: 2026-02-28 10:38:32.685 243456 DEBUG nova.network.neutron [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:38:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 188 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Feb 28 10:38:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.165 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.166 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.167 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.167 243456 DEBUG oslo_concurrency.lockutils [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.168 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.168 243456 DEBUG nova.compute.manager [req-21e7ab27-9586-4fcb-b4a5-f22c5d6d0e32 req-caf78d7c-818c-4745-84f3-2ec17b9844c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:38:34 compute-0 ceph-mon[76304]: pgmap v2226: 305 pgs: 305 active+clean; 188 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Feb 28 10:38:34 compute-0 nova_compute[243452]: 2026-02-28 10:38:34.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 164 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Feb 28 10:38:35 compute-0 nova_compute[243452]: 2026-02-28 10:38:35.376 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:38:35 compute-0 nova_compute[243452]: 2026-02-28 10:38:35.376 243456 DEBUG nova.network.neutron [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:35 compute-0 nova_compute[243452]: 2026-02-28 10:38:35.401 243456 DEBUG oslo_concurrency.lockutils [req-c31afc70-95f0-4e9f-a3c5-54dbb3121c46 req-906ee03b-dca5-4f1d-b2d2-9da62ef45fcf 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-99d76f35-045f-485f-8e24-62b0c9293154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.022 243456 DEBUG nova.network.neutron [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.048 243456 INFO nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Took 3.36 seconds to deallocate network for instance.
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.103 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.104 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.173 243456 DEBUG oslo_concurrency.processutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.284 243456 DEBUG nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.285 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "99d76f35-045f-485f-8e24-62b0c9293154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.285 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.286 243456 DEBUG oslo_concurrency.lockutils [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.286 243456 DEBUG nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.287 243456 WARNING nova.compute.manager [req-11f2452f-e8be-43ff-a0e8-0a8f33698891 req-936ace32-aeae-4c10-a23b-4527838ce0ed 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state deleted and task_state None.
Feb 28 10:38:36 compute-0 ceph-mon[76304]: pgmap v2227: 305 pgs: 305 active+clean; 164 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Feb 28 10:38:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149854294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.734 243456 DEBUG oslo_concurrency.processutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.741 243456 DEBUG nova.compute.provider_tree [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.759 243456 DEBUG nova.scheduler.client.report [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.781 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.807 243456 INFO nova.scheduler.client.report [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 99d76f35-045f-485f-8e24-62b0c9293154
Feb 28 10:38:36 compute-0 nova_compute[243452]: 2026-02-28 10:38:36.885 243456 DEBUG oslo_concurrency.lockutils [None req-a797614f-905a-46ab-ae8e-16fae718238b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "99d76f35-045f-485f-8e24-62b0c9293154" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.092 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.093 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.108 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.108 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.167 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.167 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.174 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.175 243456 INFO nova.compute.claims [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.213 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.270 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4149854294' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826191562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.812 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.819 243456 DEBUG nova.compute.provider_tree [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.837 243456 DEBUG nova.scheduler.client.report [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.864 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.865 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.918 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.919 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.942 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:38:37 compute-0 nova_compute[243452]: 2026-02-28 10:38:37.970 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.061 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.062 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.063 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating image(s)
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.095 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.135 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.172 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.177 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.269 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.270 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.271 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.271 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.301 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.306 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f6deb920-f186-43f5-9ea0-642f4a6e830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:38 compute-0 ceph-mon[76304]: pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 28 10:38:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/826191562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.542 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 f6deb920-f186-43f5-9ea0-642f4a6e830e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.584 243456 DEBUG nova.policy [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.632 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.734 243456 DEBUG nova.objects.instance [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.748 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.748 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Ensure instance console log exists: /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:38 compute-0 nova_compute[243452]: 2026-02-28 10:38:38.749 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 10:38:39 compute-0 nova_compute[243452]: 2026-02-28 10:38:39.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:39 compute-0 nova_compute[243452]: 2026-02-28 10:38:39.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:39.827 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:39 compute-0 nova_compute[243452]: 2026-02-28 10:38:39.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:39 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:39.828 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:38:39 compute-0 nova_compute[243452]: 2026-02-28 10:38:39.966 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully created port: 5bc174e9-3e24-499b-8f52-0bcff974bf56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:38:40 compute-0 ceph-mon[76304]: pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Feb 28 10:38:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:40.830 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 187 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002643687577423674 of space, bias 1.0, pg target 0.07931062732271021 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937943896177948 of space, bias 1.0, pg target 0.7481383168853384 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.302374348258836e-07 of space, bias 4.0, pg target 0.0008762849217910602 quantized to 16 (current 16)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:38:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:38:41 compute-0 nova_compute[243452]: 2026-02-28 10:38:41.982 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully created port: 2fcd845f-8646-4afc-923a-1b7570fbdc36 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:38:42 compute-0 nova_compute[243452]: 2026-02-28 10:38:42.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:42 compute-0 nova_compute[243452]: 2026-02-28 10:38:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:42 compute-0 nova_compute[243452]: 2026-02-28 10:38:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:42 compute-0 nova_compute[243452]: 2026-02-28 10:38:42.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:38:42 compute-0 ceph-mon[76304]: pgmap v2230: 305 pgs: 305 active+clean; 187 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Feb 28 10:38:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 10:38:43 compute-0 podman[365694]: 2026-02-28 10:38:43.13332562 +0000 UTC m=+0.066609007 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 10:38:43 compute-0 podman[365693]: 2026-02-28 10:38:43.173052405 +0000 UTC m=+0.106488056 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.646 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully updated port: 5bc174e9-3e24-499b-8f52-0bcff974bf56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.803 243456 DEBUG nova.compute.manager [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.804 243456 DEBUG nova.compute.manager [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.804 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.805 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:43 compute-0 nova_compute[243452]: 2026-02-28 10:38:43.805 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:44 compute-0 nova_compute[243452]: 2026-02-28 10:38:44.112 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:38:44 compute-0 nova_compute[243452]: 2026-02-28 10:38:44.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:44 compute-0 nova_compute[243452]: 2026-02-28 10:38:44.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:44 compute-0 ceph-mon[76304]: pgmap v2231: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 10:38:44 compute-0 nova_compute[243452]: 2026-02-28 10:38:44.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 10:38:45 compute-0 nova_compute[243452]: 2026-02-28 10:38:45.233 243456 DEBUG nova.network.neutron [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:45 compute-0 nova_compute[243452]: 2026-02-28 10:38:45.342 243456 DEBUG oslo_concurrency.lockutils [req-6d3d5b07-7af8-4f8f-9f73-d6d715b9dd33 req-58e23341-426e-45b5-b916-5f49d05ee746 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:38:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:38:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:38:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.014 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Successfully updated port: 2fcd845f-8646-4afc-923a-1b7570fbdc36 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.031 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.032 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.032 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.124 243456 DEBUG nova.compute.manager [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.124 243456 DEBUG nova.compute.manager [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-2fcd845f-8646-4afc-923a-1b7570fbdc36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.125 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.233 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:38:46 compute-0 nova_compute[243452]: 2026-02-28 10:38:46.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:46 compute-0 ceph-mon[76304]: pgmap v2232: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 10:38:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:38:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1722856212' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:38:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 10:38:47 compute-0 nova_compute[243452]: 2026-02-28 10:38:47.187 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275112.186385, 99d76f35-045f-485f-8e24-62b0c9293154 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:47 compute-0 nova_compute[243452]: 2026-02-28 10:38:47.188 243456 INFO nova.compute.manager [-] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] VM Stopped (Lifecycle Event)
Feb 28 10:38:47 compute-0 nova_compute[243452]: 2026-02-28 10:38:47.213 243456 DEBUG nova.compute.manager [None req-b3f64d5b-f8bc-4195-a00a-e6827ed1cea9 - - - - - -] [instance: 99d76f35-045f-485f-8e24-62b0c9293154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:47 compute-0 nova_compute[243452]: 2026-02-28 10:38:47.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:38:48 compute-0 ceph-mon[76304]: pgmap v2233: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.469 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.470 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.496 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.602 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.603 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.613 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.613 243456 INFO nova.compute.claims [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.741 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.913 243456 DEBUG nova.network.neutron [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.941 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.942 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance network_info: |[{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.943 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.944 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 2fcd845f-8646-4afc-923a-1b7570fbdc36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.950 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start _get_guest_xml network_info=[{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.956 243456 WARNING nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:38:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.964 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.965 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.976 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.977 243456 DEBUG nova.virt.libvirt.host [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.977 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.978 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.979 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.980 243456 DEBUG nova.virt.hardware [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:38:48 compute-0 nova_compute[243452]: 2026-02-28 10:38:48.985 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540973052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.297 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.305 243456 DEBUG nova.compute.provider_tree [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.326 243456 DEBUG nova.scheduler.client.report [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.359 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.361 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:38:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2540973052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.447 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.447 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.474 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:38:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122792992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.497 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.517 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.549 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.556 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.642 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.645 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.645 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating image(s)
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.674 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.706 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.738 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.744 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.821 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.823 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.824 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.852 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:49 compute-0 nova_compute[243452]: 2026-02-28 10:38:49.857 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1546a5a7-9aea-450c-acbe-689f2b660359_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.042 243456 DEBUG nova.policy [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.117 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 1546a5a7-9aea-450c-acbe-689f2b660359_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107797305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.188 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.195 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.195 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.197 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.198 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.198 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.199 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.200 243456 DEBUG nova.objects.instance [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.206 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.246 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <uuid>f6deb920-f186-43f5-9ea0-642f4a6e830e</uuid>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <name>instance-00000089</name>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1245781750</nova:name>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:38:48</nova:creationTime>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:port uuid="5bc174e9-3e24-499b-8f52-0bcff974bf56">
Feb 28 10:38:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <nova:port uuid="2fcd845f-8646-4afc-923a-1b7570fbdc36">
Feb 28 10:38:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb3:ee08" ipVersion="6"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb3:ee08" ipVersion="6"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <system>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="serial">f6deb920-f186-43f5-9ea0-642f4a6e830e</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="uuid">f6deb920-f186-43f5-9ea0-642f4a6e830e</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </system>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <os>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </os>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <features>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </features>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f6deb920-f186-43f5-9ea0-642f4a6e830e_disk">
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config">
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:40:80:6e"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <target dev="tap5bc174e9-3e"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b3:ee:08"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <target dev="tap2fcd845f-86"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/console.log" append="off"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <video>
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </video>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:38:50 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:38:50 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:38:50 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:38:50 compute-0 nova_compute[243452]: </domain>
Feb 28 10:38:50 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.248 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Preparing to wait for external event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.249 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Preparing to wait for external event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.250 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.251 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.251 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.252 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.252 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.253 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.254 243456 DEBUG os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.255 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.256 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.261 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.261 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bc174e9-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.262 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bc174e9-3e, col_values=(('external_ids', {'iface-id': '5bc174e9-3e24-499b-8f52-0bcff974bf56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:80:6e', 'vm-uuid': 'f6deb920-f186-43f5-9ea0-642f4a6e830e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 NetworkManager[49805]: <info>  [1772275130.2649] manager: (tap5bc174e9-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.272 243456 INFO os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e')
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.272 243456 DEBUG nova.virt.libvirt.vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.273 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.274 243456 DEBUG nova.network.os_vif_util [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.274 243456 DEBUG os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.275 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.310 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fcd845f-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.311 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fcd845f-86, col_values=(('external_ids', {'iface-id': '2fcd845f-8646-4afc-923a-1b7570fbdc36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:ee:08', 'vm-uuid': 'f6deb920-f186-43f5-9ea0-642f4a6e830e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:50 compute-0 NetworkManager[49805]: <info>  [1772275130.3136] manager: (tap2fcd845f-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.317 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.325 243456 DEBUG nova.objects.instance [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.328 243456 INFO os_vif [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86')
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.338 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.339 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Ensure instance console log exists: /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.339 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.340 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.340 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.378 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:40:80:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.379 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b3:ee:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.380 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Using config drive
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.404 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:50 compute-0 ceph-mon[76304]: pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:38:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4122792992' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3107797305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.773 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Creating config drive at /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.780 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjo_2x8o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.839 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 2fcd845f-8646-4afc-923a-1b7570fbdc36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.841 243456 DEBUG nova.network.neutron [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.858 243456 DEBUG oslo_concurrency.lockutils [req-3a736e76-05e8-4933-a427-d4c5455f8b9f req-f8f5bf8e-b74f-4cff-b90c-d93b12e9d62f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.924 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmjo_2x8o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.959 243456 DEBUG nova.storage.rbd_utils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:50 compute-0 nova_compute[243452]: 2026-02-28 10:38:50.963 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 211 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 MiB/s wr, 27 op/s
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.054 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Successfully updated port: fc0b52d5-3577-4d09-bac1-192cb6c80057 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.076 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.077 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.077 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.151 243456 DEBUG oslo_concurrency.processutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config f6deb920-f186-43f5-9ea0-642f4a6e830e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.152 243456 INFO nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deleting local config drive /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e/disk.config because it was imported into RBD.
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.230 243456 DEBUG nova.compute.manager [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.2310] manager: (tap5bc174e9-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.230 243456 DEBUG nova.compute.manager [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Refreshing instance network info cache due to event network-changed-fc0b52d5-3577-4d09-bac1-192cb6c80057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.231 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:51 compute-0 kernel: tap5bc174e9-3e: entered promiscuous mode
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01417|binding|INFO|Claiming lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 for this chassis.
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01418|binding|INFO|5bc174e9-3e24-499b-8f52-0bcff974bf56: Claiming fa:16:3e:40:80:6e 10.100.0.9
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.246 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:80:6e 10.100.0.9'], port_security=['fa:16:3e:40:80:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5bc174e9-3e24-499b-8f52-0bcff974bf56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.248 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5bc174e9-3e24-499b-8f52-0bcff974bf56 in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee bound to our chassis
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.249 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.250 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.2530] manager: (tap2fcd845f-86): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Feb 28 10:38:51 compute-0 kernel: tap2fcd845f-86: entered promiscuous mode
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01419|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 ovn-installed in OVS
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01420|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 up in Southbound
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01421|if_status|INFO|Not updating pb chassis for 2fcd845f-8646-4afc-923a-1b7570fbdc36 now as sb is readonly
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01422|binding|INFO|Claiming lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 for this chassis.
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01423|binding|INFO|2fcd845f-8646-4afc-923a-1b7570fbdc36: Claiming fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.267 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e445d9-45fa-44d9-bf56-5baca00e22f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.268 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap386ad93c-81 in ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], port_security=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb3:ee08/64 2001:db8::f816:3eff:feb3:ee08/64', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2fcd845f-8646-4afc-923a-1b7570fbdc36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap386ad93c-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.271 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbaf9c5-d22f-40f8-b6d0-16a8ccf7334c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01424|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 ovn-installed in OVS
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01425|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 up in Southbound
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.275 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f982ca95-3820-4b27-89c8-f247ddaf71a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.276 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 systemd-udevd[366069]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:51 compute-0 systemd-udevd[366071]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.293 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[fd57ff43-a2c8-4e01-8219-b824b3170ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.3062] device (tap2fcd845f-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.3073] device (tap2fcd845f-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.3088] device (tap5bc174e9-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.3103] device (tap5bc174e9-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:38:51 compute-0 systemd-machined[209480]: New machine qemu-170-instance-00000089.
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.312 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d43657c9-a366-41dd-9262-e9ed388ddd2d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:38:51 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-00000089.
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.349 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb1c1e7-7ccf-4152-8f58-7f19c801de46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.354 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb4a04b-0b42-4e15-bd42-7477903008f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.3564] manager: (tap386ad93c-80): new Veth device (/org/freedesktop/NetworkManager/Devices/591)
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.384 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbbf49d-c93d-4efb-be0b-0e6b350836a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.387 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[77287526-1a95-49ee-a3db-e634b63ac63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.4041] device (tap386ad93c-80): carrier: link connected
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.405 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9e4022-d552-41cb-a239-f9018c7f666c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.425 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77bb0242-eb98-4c6c-9778-3b852094dcb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366104, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.443 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9069f0b3-a82e-40f5-894b-6d0017a89d28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:f3db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663869, 'tstamp': 663869}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366105, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.467 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c928f557-cd7c-48db-9527-a97d9beaee95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366106, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba1edb4-88bb-498b-a2c7-7f4a63351e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.556 243456 DEBUG nova.compute.manager [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.556 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.560 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.560 243456 DEBUG oslo_concurrency.lockutils [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.561 243456 DEBUG nova.compute.manager [req-885898a8-88b7-4b30-8d51-0b870405e25e req-aa3f592d-ece7-46f0-a484-95fd2aec5f54 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Processing event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[119b9ee9-4565-4550-b790-b3ce3cdeb06c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.588 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.589 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:51 compute-0 NetworkManager[49805]: <info>  [1772275131.5924] manager: (tap386ad93c-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Feb 28 10:38:51 compute-0 kernel: tap386ad93c-80: entered promiscuous mode
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.594 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.596 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:51 compute-0 ovn_controller[146846]: 2026-02-28T10:38:51Z|01426|binding|INFO|Releasing lport 1c423105-d23e-4da9-afeb-9405c7fd0060 from this chassis (sb_readonly=0)
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.602 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.604 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6446c7-b3bf-480e-851f-d2e35b2ae0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.605 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/386ad93c-8128-414d-bc97-7c3f009f2aee.pid.haproxy
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:38:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:51.606 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'env', 'PROCESS_TAG=haproxy-386ad93c-8128-414d-bc97-7c3f009f2aee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/386ad93c-8128-414d-bc97-7c3f009f2aee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.789 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275131.7887, f6deb920-f186-43f5-9ea0-642f4a6e830e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.789 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Started (Lifecycle Event)
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.806 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.813 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275131.7896054, f6deb920-f186-43f5-9ea0-642f4a6e830e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.814 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Paused (Lifecycle Event)
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.849 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.853 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.873 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518945075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.896 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:38:51 compute-0 nova_compute[243452]: 2026-02-28 10:38:51.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:38:51 compute-0 podman[366203]: 2026-02-28 10:38:51.997795256 +0000 UTC m=+0.066739861 container create 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.026 243456 DEBUG nova.network.neutron [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:52 compute-0 systemd[1]: Started libpod-conmon-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope.
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.047 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance network_info: |[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.048 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Refreshing network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:52 compute-0 podman[366203]: 2026-02-28 10:38:51.957609318 +0000 UTC m=+0.026554003 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.052 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start _get_guest_xml network_info=[{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.057 243456 WARNING nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.062 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.063 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.071 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.072 243456 DEBUG nova.virt.libvirt.host [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.073 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.073 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.074 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.075 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.076 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.076 243456 DEBUG nova.virt.hardware [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.079 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353ad3c61fc41f7332854dd7161b413a55568a38d52cee91efbf7a09e713c9f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:52 compute-0 podman[366203]: 2026-02-28 10:38:52.12118698 +0000 UTC m=+0.190131675 container init 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:38:52 compute-0 podman[366203]: 2026-02-28 10:38:52.126671325 +0000 UTC m=+0.195615970 container start 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:38:52 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : New worker (366225) forked
Feb 28 10:38:52 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : Loading success.
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.207 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2fcd845f-8646-4afc-923a-1b7570fbdc36 in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.210 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.220 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08c1e369-7d97-44ee-a73f-c464bbedaa4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.222 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap685f3a92-81 in ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.224 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap685f3a92-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.224 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d32961-3975-4194-9c08-c181bbac7efa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.226 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5c63ed08-3b32-4b70-a59d-7b7bb84f809f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.239 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[be3303b3-c285-4964-9ed3-ee1a3e60e6da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.256 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a77ca5d-3b51-43d7-a132-085a6771ea77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.275 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.96124897431582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.277 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.286 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[822aa432-2021-4194-98aa-7430c9a17fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.291 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb0a65b-50e1-4b5f-81e1-b1dcdb02a621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 NetworkManager[49805]: <info>  [1772275132.2929] manager: (tap685f3a92-80): new Veth device (/org/freedesktop/NetworkManager/Devices/593)
Feb 28 10:38:52 compute-0 systemd-udevd[366094]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.322 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8ae48-57f6-4729-98cf-282a98163c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.326 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ad932-58d8-437a-8f0e-875df1e85452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 NetworkManager[49805]: <info>  [1772275132.3528] device (tap685f3a92-80): carrier: link connected
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.359 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd1991-3b8f-4327-9b8e-72a4b19d2418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f08ac4-a8de-4378-8ad6-345f74904743]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366263, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.393 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[594d4c6b-9863-4fdb-8617-0e6924a45f78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:b75d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366264, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.412 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9c27e2c5-d2ee-4cf1-93d8-406000c8f2e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366265, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ceph-mon[76304]: pgmap v2235: 305 pgs: 305 active+clean; 211 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 MiB/s wr, 27 op/s
Feb 28 10:38:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2518945075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.451 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f0777-377b-4893-a1f5-763e2afa7818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.483 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b24da5b1-f7e3-484b-9899-1120dcd0576f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.485 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.486 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.487 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:52 compute-0 NetworkManager[49805]: <info>  [1772275132.4902] manager: (tap685f3a92-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Feb 28 10:38:52 compute-0 kernel: tap685f3a92-80: entered promiscuous mode
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.498 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:52 compute-0 ovn_controller[146846]: 2026-02-28T10:38:52Z|01427|binding|INFO|Releasing lport 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a from this chassis (sb_readonly=0)
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.499 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.503 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.504 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dffb88c0-0e25-4005-ade2-4e57243112c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.505 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/685f3a92-853c-417a-a00b-ba5c70b02f2d.pid.haproxy
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:38:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:52.506 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'env', 'PROCESS_TAG=haproxy-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/685f3a92-853c-417a-a00b-ba5c70b02f2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.508 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.521 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f6deb920-f186-43f5-9ea0-642f4a6e830e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 1546a5a7-9aea-450c-acbe-689f2b660359 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.522 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685745294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.659 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.686 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:52 compute-0 nova_compute[243452]: 2026-02-28 10:38:52.692 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:52 compute-0 podman[366334]: 2026-02-28 10:38:52.817578699 +0000 UTC m=+0.031206225 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:38:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 232 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Feb 28 10:38:53 compute-0 podman[366334]: 2026-02-28 10:38:53.036865748 +0000 UTC m=+0.250493264 container create 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:53 compute-0 systemd[1]: Started libpod-conmon-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope.
Feb 28 10:38:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca0bd47b535c15e2228c3436fbfee03368e983c319a518449256dd29eafc04f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:53 compute-0 podman[366334]: 2026-02-28 10:38:53.120459215 +0000 UTC m=+0.334086761 container init 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:38:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:38:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233696413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:53 compute-0 podman[366334]: 2026-02-28 10:38:53.127759132 +0000 UTC m=+0.341386648 container start 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.149 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.156 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:38:53 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : New worker (366376) forked
Feb 28 10:38:53 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : Loading success.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.176 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.205 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.206 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:38:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2730403303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.257 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.259 243456 DEBUG nova.virt.libvirt.vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.259 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.260 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.262 243456 DEBUG nova.objects.instance [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.281 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <uuid>1546a5a7-9aea-450c-acbe-689f2b660359</uuid>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <name>instance-0000008a</name>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-734126459</nova:name>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:38:52</nova:creationTime>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <nova:port uuid="fc0b52d5-3577-4d09-bac1-192cb6c80057">
Feb 28 10:38:53 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <system>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="serial">1546a5a7-9aea-450c-acbe-689f2b660359</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="uuid">1546a5a7-9aea-450c-acbe-689f2b660359</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </system>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <os>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </os>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <features>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </features>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1546a5a7-9aea-450c-acbe-689f2b660359_disk">
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/1546a5a7-9aea-450c-acbe-689f2b660359_disk.config">
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </source>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:38:53 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:68:09:7e"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <target dev="tapfc0b52d5-35"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/console.log" append="off"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <video>
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </video>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:38:53 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:38:53 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:38:53 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:38:53 compute-0 nova_compute[243452]: </domain>
Feb 28 10:38:53 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Preparing to wait for external event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.284 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.285 243456 DEBUG nova.virt.libvirt.vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:38:49Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.285 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.286 243456 DEBUG nova.network.os_vif_util [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG os_vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.287 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.288 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.291 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.291 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc0b52d5-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.292 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc0b52d5-35, col_values=(('external_ids', {'iface-id': 'fc0b52d5-3577-4d09-bac1-192cb6c80057', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:09:7e', 'vm-uuid': '1546a5a7-9aea-450c-acbe-689f2b660359'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:53 compute-0 NetworkManager[49805]: <info>  [1772275133.2943] manager: (tapfc0b52d5-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.296 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.300 243456 INFO os_vif [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.336 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.337 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.337 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Processing event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.338 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 DEBUG oslo_concurrency.lockutils [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 DEBUG nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.339 243456 WARNING nova.compute.manager [req-b991b2cf-69aa-448c-aa6f-1421d5006d01 req-575bf46f-92df-4d41-954b-9f3a204f6515 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with vm_state building and task_state spawning.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.341 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.347 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275133.3467934, f6deb920-f186-43f5-9ea0-642f4a6e830e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.348 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Resumed (Lifecycle Event)
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.349 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.358 243456 INFO nova.virt.libvirt.driver [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance spawned successfully.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.359 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.362 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.363 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.363 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:68:09:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.364 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Using config drive
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.388 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.397 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.402 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.406 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.407 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.407 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.408 243456 DEBUG nova.virt.libvirt.driver [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.433 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2685745294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4233696413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:38:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2730403303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.469 243456 INFO nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 15.41 seconds to spawn the instance on the hypervisor.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.470 243456 DEBUG nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.526 243456 INFO nova.compute.manager [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 16.38 seconds to build instance.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.540 243456 DEBUG oslo_concurrency.lockutils [None req-ffbf6aa2-3479-4fae-9be1-8136c0a949b0 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.640 243456 DEBUG nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.641 243456 DEBUG oslo_concurrency.lockutils [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.642 243456 DEBUG nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.642 243456 WARNING nova.compute.manager [req-27bee96e-10fe-4e7d-96fe-d24ca2432140 req-40d2c3a2-0e89-4eee-a6e8-321ebd7e4230 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state active and task_state None.
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.681 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updated VIF entry in instance network info cache for port fc0b52d5-3577-4d09-bac1-192cb6c80057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.682 243456 DEBUG nova.network.neutron [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [{"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:38:53 compute-0 nova_compute[243452]: 2026-02-28 10:38:53.697 243456 DEBUG oslo_concurrency.lockutils [req-5f70c5b4-e1bb-475d-b638-fa665b171077 req-b57b7f49-74db-4948-abcd-04895a545914 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-1546a5a7-9aea-450c-acbe-689f2b660359" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:38:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.019 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Creating config drive at /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.022 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd97okymh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.166 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd97okymh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.199 243456 DEBUG nova.storage.rbd_utils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.203 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.358 243456 DEBUG oslo_concurrency.processutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config 1546a5a7-9aea-450c-acbe-689f2b660359_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.359 243456 INFO nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deleting local config drive /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359/disk.config because it was imported into RBD.
Feb 28 10:38:54 compute-0 kernel: tapfc0b52d5-35: entered promiscuous mode
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.4132] manager: (tapfc0b52d5-35): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Feb 28 10:38:54 compute-0 ovn_controller[146846]: 2026-02-28T10:38:54Z|01428|binding|INFO|Claiming lport fc0b52d5-3577-4d09-bac1-192cb6c80057 for this chassis.
Feb 28 10:38:54 compute-0 ovn_controller[146846]: 2026-02-28T10:38:54Z|01429|binding|INFO|fc0b52d5-3577-4d09-bac1-192cb6c80057: Claiming fa:16:3e:68:09:7e 10.100.0.7
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.416 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.426 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1546a5a7-9aea-450c-acbe-689f2b660359', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '7', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.427 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 bound to our chassis
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.428 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:54 compute-0 ovn_controller[146846]: 2026-02-28T10:38:54Z|01430|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 ovn-installed in OVS
Feb 28 10:38:54 compute-0 ovn_controller[146846]: 2026-02-28T10:38:54Z|01431|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 up in Southbound
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.442 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bea0a2e1-7c81-48ad-a665-b9c6c44466fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.443 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d6bc14-71 in ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.444 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d6bc14-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.444 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b8c36a-3820-4443-aa84-0c41913f05ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.445 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65e4c64c-e035-4c2b-92e9-9ccaa86e05de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ceph-mon[76304]: pgmap v2236: 305 pgs: 305 active+clean; 232 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.460 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ab91f93b-1b2a-4ca5-badc-e07a277c0839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 systemd-machined[209480]: New machine qemu-171-instance-0000008a.
Feb 28 10:38:54 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008a.
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.475 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d067cd4c-af0e-4033-9d9a-a72c610b004e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 systemd-udevd[366462]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.4867] device (tapfc0b52d5-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.4874] device (tapfc0b52d5-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.506 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b7d2b8-8485-42ec-97c7-7fab7fd03e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.5163] manager: (tap48d6bc14-70): new Veth device (/org/freedesktop/NetworkManager/Devices/597)
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.514 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[30ffedc3-b074-4d80-8062-ba77d19d548a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.546 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eec08c06-e672-4635-baf1-493a4b6ee133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.550 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3c4a7e-f75e-4949-b5a4-0872f4ec6fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.5684] device (tap48d6bc14-70): carrier: link connected
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.568 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.574 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[038aeddf-b0a2-44cc-b59b-9ac16865b1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.592 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5b2d4d-0c09-4c5b-9aa4-4deda5b64cbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664185, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366492, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.606 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5819ab2c-ec05-4370-9f6a-d7f03d6d9ea0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:e19e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664185, 'tstamp': 664185}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366493, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1296819f-5891-4531-a18d-113edc569b9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d6bc14-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:e1:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664185, 'reachable_time': 43124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366494, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.656 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5388088-03b0-4bf3-8290-49520777fad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.724 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0199d91c-02b8-4947-b4f7-fedc7124c3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.725 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.726 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d6bc14-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.728 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 kernel: tap48d6bc14-70: entered promiscuous mode
Feb 28 10:38:54 compute-0 NetworkManager[49805]: <info>  [1772275134.7304] manager: (tap48d6bc14-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.730 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d6bc14-70, col_values=(('external_ids', {'iface-id': '6091ce53-d3cc-490d-8054-526a8c4039b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_controller[146846]: 2026-02-28T10:38:54Z|01432|binding|INFO|Releasing lport 6091ce53-d3cc-490d-8054-526a8c4039b0 from this chassis (sb_readonly=0)
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.732 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.734 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.735 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ed904-50f8-4da2-9f87-f1ca2ecbfae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.736 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/48d6bc14-7625-4769-ad1d-6c202dc94953.pid.haproxy
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 48d6bc14-7625-4769-ad1d-6c202dc94953
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:54.737 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'env', 'PROCESS_TAG=haproxy-48d6bc14-7625-4769-ad1d-6c202dc94953', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d6bc14-7625-4769-ad1d-6c202dc94953.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.855 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275134.8553002, 1546a5a7-9aea-450c-acbe-689f2b660359 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.856 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Started (Lifecycle Event)
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.878 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.891 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275134.8555055, 1546a5a7-9aea-450c-acbe-689f2b660359 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.891 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Paused (Lifecycle Event)
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.917 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.922 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:54 compute-0 nova_compute[243452]: 2026-02-28 10:38:54.942 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 730 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:38:55 compute-0 podman[366568]: 2026-02-28 10:38:55.089397247 +0000 UTC m=+0.052097236 container create bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:38:55 compute-0 systemd[1]: Started libpod-conmon-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope.
Feb 28 10:38:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:55 compute-0 podman[366568]: 2026-02-28 10:38:55.063956816 +0000 UTC m=+0.026656815 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98216c588452612fd8d98a1b43b07a762efeac7094a299d039a9b7dd839c2a5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:55 compute-0 podman[366568]: 2026-02-28 10:38:55.181493055 +0000 UTC m=+0.144193124 container init bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:38:55 compute-0 podman[366568]: 2026-02-28 10:38:55.186587439 +0000 UTC m=+0.149287458 container start bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:38:55 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : New worker (366589) forked
Feb 28 10:38:55 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : Loading success.
Feb 28 10:38:55 compute-0 sudo[366598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:38:55 compute-0 sudo[366598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:55 compute-0 sudo[366598]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:55 compute-0 sudo[366623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 10:38:55 compute-0 sudo[366623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:55 compute-0 sudo[366623]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:38:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:38:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:55 compute-0 sudo[366668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:38:55 compute-0 sudo[366668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:55 compute-0 sudo[366668]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.750 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.751 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Processing event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.752 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG oslo_concurrency.lockutils [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 DEBUG nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.753 243456 WARNING nova.compute.manager [req-2f654643-bd93-47af-95d1-9f9d579db35c req-1af68714-6ca0-44cd-a41e-dc6c102a5cc1 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state building and task_state spawning.
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.754 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.768 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.769 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275135.768259, 1546a5a7-9aea-450c-acbe-689f2b660359 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.769 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Resumed (Lifecycle Event)
Feb 28 10:38:55 compute-0 sudo[366693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:38:55 compute-0 sudo[366693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.775 243456 INFO nova.virt.libvirt.driver [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance spawned successfully.
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.775 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.787 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.789 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.806 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.807 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.807 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.808 243456 DEBUG nova.virt.libvirt.driver [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.823 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.876 243456 INFO nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 6.23 seconds to spawn the instance on the hypervisor.
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.877 243456 DEBUG nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.946 243456 INFO nova.compute.manager [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 7.38 seconds to build instance.
Feb 28 10:38:55 compute-0 nova_compute[243452]: 2026-02-28 10:38:55.973 243456 DEBUG oslo_concurrency.lockutils [None req-f054f982-4746-4e0e-9ccb-0e8b134fe048 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:56 compute-0 sudo[366693]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:38:56 compute-0 sudo[366749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:38:56 compute-0 sudo[366749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:56 compute-0 sudo[366749]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:56 compute-0 sudo[366774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:38:56 compute-0 sudo[366774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:56 compute-0 ceph-mon[76304]: pgmap v2237: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 730 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:38:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.798214764 +0000 UTC m=+0.043891584 container create 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:38:56 compute-0 systemd[1]: Started libpod-conmon-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope.
Feb 28 10:38:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.874196885 +0000 UTC m=+0.119873725 container init 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.779191055 +0000 UTC m=+0.024867925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.881742879 +0000 UTC m=+0.127419699 container start 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.885269969 +0000 UTC m=+0.130946789 container attach 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:38:56 compute-0 heuristic_perlman[366827]: 167 167
Feb 28 10:38:56 compute-0 systemd[1]: libpod-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope: Deactivated successfully.
Feb 28 10:38:56 compute-0 conmon[366827]: conmon 75475741eec07c33abaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope/container/memory.events
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.890483327 +0000 UTC m=+0.136160157 container died 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c767b7b819ffff5f7190349d56156435bd73b8813d80a91cc0c07ed00ea6104-merged.mount: Deactivated successfully.
Feb 28 10:38:56 compute-0 podman[366811]: 2026-02-28 10:38:56.926530057 +0000 UTC m=+0.172206877 container remove 75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:38:56 compute-0 systemd[1]: libpod-conmon-75475741eec07c33abaf0ad144069405897ea5eed13049cd4c438ba6b9e9ff38.scope: Deactivated successfully.
Feb 28 10:38:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.125702446 +0000 UTC m=+0.077651570 container create c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.074159967 +0000 UTC m=+0.026109191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:57 compute-0 systemd[1]: Started libpod-conmon-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope.
Feb 28 10:38:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.230265827 +0000 UTC m=+0.182215051 container init c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.237402359 +0000 UTC m=+0.189351493 container start c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.241290249 +0000 UTC m=+0.193239423 container attach c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:38:57 compute-0 flamboyant_maxwell[366865]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:38:57 compute-0 flamboyant_maxwell[366865]: --> All data devices are unavailable
Feb 28 10:38:57 compute-0 systemd[1]: libpod-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope: Deactivated successfully.
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.749042397 +0000 UTC m=+0.700991531 container died c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f866e600cb0a0b0edc9973c320cd1f9549342f568c0991da39a593ceee83db0-merged.mount: Deactivated successfully.
Feb 28 10:38:57 compute-0 podman[366849]: 2026-02-28 10:38:57.817929937 +0000 UTC m=+0.769879071 container remove c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_maxwell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:38:57 compute-0 systemd[1]: libpod-conmon-c779049d746282a3ee91e03bc012f890254accd60a330eae60fb5d76023ecd77.scope: Deactivated successfully.
Feb 28 10:38:57 compute-0 nova_compute[243452]: 2026-02-28 10:38:57.857 243456 DEBUG nova.compute.manager [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:38:57 compute-0 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG nova.compute.manager [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:38:57 compute-0 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:38:57 compute-0 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:38:57 compute-0 nova_compute[243452]: 2026-02-28 10:38:57.858 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:38:57 compute-0 sudo[366774]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.877 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:57 compute-0 sudo[366899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:38:57 compute-0 sudo[366899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:57 compute-0 sudo[366899]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:57 compute-0 sudo[366924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:38:57 compute-0 sudo[366924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.258193854 +0000 UTC m=+0.052788106 container create e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:38:58 compute-0 nova_compute[243452]: 2026-02-28 10:38:58.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:58 compute-0 systemd[1]: Started libpod-conmon-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope.
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.233421953 +0000 UTC m=+0.028016285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.363619189 +0000 UTC m=+0.158213471 container init e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.370453243 +0000 UTC m=+0.165047525 container start e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:38:58 compute-0 reverent_zhukovsky[366976]: 167 167
Feb 28 10:38:58 compute-0 systemd[1]: libpod-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope: Deactivated successfully.
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.377102511 +0000 UTC m=+0.171696783 container attach e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.377445961 +0000 UTC m=+0.172040203 container died e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a361e3c784d79b109ee2772374c6796cd2024f8501b3b403bcb456af43d03154-merged.mount: Deactivated successfully.
Feb 28 10:38:58 compute-0 podman[366960]: 2026-02-28 10:38:58.425353227 +0000 UTC m=+0.219947509 container remove e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:38:58 compute-0 systemd[1]: libpod-conmon-e8526d8766595112dc23aac29a200da1f8555ca026e467cf4266bd7dac35bcb1.scope: Deactivated successfully.
Feb 28 10:38:58 compute-0 podman[366999]: 2026-02-28 10:38:58.628546661 +0000 UTC m=+0.062923353 container create d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:38:58 compute-0 ceph-mon[76304]: pgmap v2238: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Feb 28 10:38:58 compute-0 systemd[1]: Started libpod-conmon-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope.
Feb 28 10:38:58 compute-0 podman[366999]: 2026-02-28 10:38:58.605286212 +0000 UTC m=+0.039662904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:58 compute-0 podman[366999]: 2026-02-28 10:38:58.726552346 +0000 UTC m=+0.160929068 container init d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:38:58 compute-0 podman[366999]: 2026-02-28 10:38:58.73446068 +0000 UTC m=+0.168837342 container start d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:38:58 compute-0 podman[366999]: 2026-02-28 10:38:58.738227937 +0000 UTC m=+0.172604639 container attach d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:38:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:38:58 compute-0 fervent_einstein[367015]: {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     "0": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "devices": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "/dev/loop3"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             ],
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_name": "ceph_lv0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_size": "21470642176",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "name": "ceph_lv0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "tags": {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_name": "ceph",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.crush_device_class": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.encrypted": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.objectstore": "bluestore",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_id": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.vdo": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.with_tpm": "0"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             },
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "vg_name": "ceph_vg0"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         }
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     ],
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     "1": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "devices": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "/dev/loop4"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             ],
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_name": "ceph_lv1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_size": "21470642176",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "name": "ceph_lv1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "tags": {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_name": "ceph",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.crush_device_class": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.encrypted": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.objectstore": "bluestore",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_id": "1",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.vdo": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.with_tpm": "0"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             },
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "vg_name": "ceph_vg1"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         }
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     ],
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     "2": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "devices": [
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "/dev/loop5"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             ],
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_name": "ceph_lv2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_size": "21470642176",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "name": "ceph_lv2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "tags": {
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.cluster_name": "ceph",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.crush_device_class": "",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.encrypted": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.objectstore": "bluestore",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osd_id": "2",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.vdo": "0",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:                 "ceph.with_tpm": "0"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             },
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "type": "block",
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:             "vg_name": "ceph_vg2"
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:         }
Feb 28 10:38:58 compute-0 fervent_einstein[367015]:     ]
Feb 28 10:38:58 compute-0 fervent_einstein[367015]: }
Feb 28 10:38:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Feb 28 10:38:59 compute-0 systemd[1]: libpod-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 podman[366999]: 2026-02-28 10:38:59.037343037 +0000 UTC m=+0.471719679 container died d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fad9603dfaa5a6aa35fc1ec81afaaca0f600fafa9f4131646ca8e4da746f6b-merged.mount: Deactivated successfully.
Feb 28 10:38:59 compute-0 podman[366999]: 2026-02-28 10:38:59.076705641 +0000 UTC m=+0.511082293 container remove d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:38:59 compute-0 systemd[1]: libpod-conmon-d057e94191b57b4bb13b11e26bbaed38154f446f5cfb0dcc25d248a415e91fe6.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.095 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.096 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.097 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.099 243456 INFO nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Terminating instance
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.099 243456 DEBUG nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:38:59 compute-0 sudo[366924]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:59 compute-0 kernel: tapfc0b52d5-35 (unregistering): left promiscuous mode
Feb 28 10:38:59 compute-0 NetworkManager[49805]: <info>  [1772275139.1711] device (tapfc0b52d5-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 ovn_controller[146846]: 2026-02-28T10:38:59Z|01433|binding|INFO|Releasing lport fc0b52d5-3577-4d09-bac1-192cb6c80057 from this chassis (sb_readonly=0)
Feb 28 10:38:59 compute-0 ovn_controller[146846]: 2026-02-28T10:38:59Z|01434|binding|INFO|Setting lport fc0b52d5-3577-4d09-bac1-192cb6c80057 down in Southbound
Feb 28 10:38:59 compute-0 ovn_controller[146846]: 2026-02-28T10:38:59Z|01435|binding|INFO|Removing iface tapfc0b52d5-35 ovn-installed in OVS
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 sudo[367036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:38:59 compute-0 sudo[367036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:59 compute-0 sudo[367036]: pam_unix(sudo:session): session closed for user root
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.209 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:09:7e 10.100.0.7'], port_security=['fa:16:3e:68:09:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1546a5a7-9aea-450c-acbe-689f2b660359', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d6bc14-7625-4769-ad1d-6c202dc94953', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-600658055', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '9', 'neutron:security_group_ids': '26d464f3-c9fa-4347-96b9-fda1593d34a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61ce24be-7860-4d1f-956c-5f6de3fe0ffe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fc0b52d5-3577-4d09-bac1-192cb6c80057) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.212 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fc0b52d5-3577-4d09-bac1-192cb6c80057 in datapath 48d6bc14-7625-4769-ad1d-6c202dc94953 unbound from our chassis
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.215 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d6bc14-7625-4769-ad1d-6c202dc94953, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.216 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[14c9d6b6-950b-4c32-ab3c-d21d9fd8d8cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.217 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 namespace which is not needed anymore
Feb 28 10:38:59 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008a.scope: Consumed 3.693s CPU time.
Feb 28 10:38:59 compute-0 systemd-machined[209480]: Machine qemu-171-instance-0000008a terminated.
Feb 28 10:38:59 compute-0 sudo[367063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:38:59 compute-0 sudo[367063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : haproxy version is 2.8.14-c23fe91
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [NOTICE]   (366587) : path to executable is /usr/sbin/haproxy
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : Exiting Master process...
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : Exiting Master process...
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.337 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [ALERT]    (366587) : Current worker (366589) exited with code 143 (Terminated)
Feb 28 10:38:59 compute-0 neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953[366583]: [WARNING]  (366587) : All workers exited. Exiting... (0)
Feb 28 10:38:59 compute-0 systemd[1]: libpod-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.343 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 podman[367110]: 2026-02-28 10:38:59.348550969 +0000 UTC m=+0.044124341 container died bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.354 243456 INFO nova.virt.libvirt.driver [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Instance destroyed successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.355 243456 DEBUG nova.objects.instance [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 1546a5a7-9aea-450c-acbe-689f2b660359 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704-userdata-shm.mount: Deactivated successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.377 243456 DEBUG nova.virt.libvirt.vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734126459',display_name='tempest-TestNetworkBasicOps-server-734126459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734126459',id=138,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTD6voq7CUXfm7WlI8qSl7cssyv4IdTFz+Rehs0Uzk9fFDHOvsC3uyrogbmbPToHSRv6RvbfGhcR/wGG6aXvaGAP4w+wIGU1b4ql+5cXN1wXdt7cbhMO8QZYhAJcKUF+w==',key_name='tempest-TestNetworkBasicOps-1975385676',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-ai5i2y9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:55Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=1546a5a7-9aea-450c-acbe-689f2b660359,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.378 243456 DEBUG nova.network.os_vif_util [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "address": "fa:16:3e:68:09:7e", "network": {"id": "48d6bc14-7625-4769-ad1d-6c202dc94953", "bridge": "br-int", "label": "tempest-network-smoke--432003905", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc0b52d5-35", "ovs_interfaceid": "fc0b52d5-3577-4d09-bac1-192cb6c80057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.379 243456 DEBUG nova.network.os_vif_util [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.379 243456 DEBUG os_vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.382 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc0b52d5-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.384 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.388 243456 INFO os_vif [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:09:7e,bridge_name='br-int',has_traffic_filtering=True,id=fc0b52d5-3577-4d09-bac1-192cb6c80057,network=Network(48d6bc14-7625-4769-ad1d-6c202dc94953),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfc0b52d5-35')
Feb 28 10:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-98216c588452612fd8d98a1b43b07a762efeac7094a299d039a9b7dd839c2a5b-merged.mount: Deactivated successfully.
Feb 28 10:38:59 compute-0 podman[367110]: 2026-02-28 10:38:59.402100115 +0000 UTC m=+0.097673507 container cleanup bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:38:59 compute-0 systemd[1]: libpod-conmon-bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 podman[367163]: 2026-02-28 10:38:59.470613075 +0000 UTC m=+0.047423404 container remove bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.477 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2872113d-341d-431a-8bd0-1877ab36f664]: (4, ('Sat Feb 28 10:38:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704)\nbd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704\nSat Feb 28 10:38:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 (bd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704)\nbd2db4da0b483325dc610c290b68c92a10533d43e984d5091cae889f686f8704\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.480 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[81d98239-0198-435e-9d8d-34c55e98e2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.481 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d6bc14-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.483 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 kernel: tap48d6bc14-70: left promiscuous mode
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.491 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[077e92c1-b0dc-46de-ba02-deb9a1b4afcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d98160-fc76-43be-b4ae-66ca1bcd4d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.510 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1280daef-f251-4dc6-b53f-41dca3b4cb90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.528 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d557efc5-1618-421e-bdf3-b0f298a8ff1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664179, 'reachable_time': 44909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367194, 'error': None, 'target': 'ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d48d6bc14\x2d7625\x2d4769\x2dad1d\x2d6c202dc94953.mount: Deactivated successfully.
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.535 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d6bc14-7625-4769-ad1d-6c202dc94953 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:38:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:38:59.535 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1aa000-5724-4b40-8951-391d93d78290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.564579646 +0000 UTC m=+0.041793504 container create e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:38:59 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:38:59 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:38:59 compute-0 systemd[1]: Started libpod-conmon-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope.
Feb 28 10:38:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.545692681 +0000 UTC m=+0.022906339 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.656014875 +0000 UTC m=+0.133228523 container init e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.663799536 +0000 UTC m=+0.141013164 container start e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.667632814 +0000 UTC m=+0.144846482 container attach e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:38:59 compute-0 vigilant_heyrovsky[367208]: 167 167
Feb 28 10:38:59 compute-0 systemd[1]: libpod-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 conmon[367208]: conmon e5ad60a99096d10d1cf0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope/container/memory.events
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.672548003 +0000 UTC m=+0.149761631 container died e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-97cfdd2ce4842ceb5a41ebf8e455f8e9c609025d5db3eaaedd9a66d5cad9c5e9-merged.mount: Deactivated successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.714 243456 INFO nova.virt.libvirt.driver [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deleting instance files /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359_del
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.716 243456 INFO nova.virt.libvirt.driver [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deletion of /var/lib/nova/instances/1546a5a7-9aea-450c-acbe-689f2b660359_del complete
Feb 28 10:38:59 compute-0 podman[367192]: 2026-02-28 10:38:59.719824422 +0000 UTC m=+0.197038060 container remove e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:38:59 compute-0 systemd[1]: libpod-conmon-e5ad60a99096d10d1cf06a22418206564def6724d822a4abb15c66d17e89e177.scope: Deactivated successfully.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.856 243456 INFO nova.compute.manager [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 0.76 seconds to destroy the instance on the hypervisor.
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.858 243456 DEBUG oslo.service.loopingcall [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.859 243456 DEBUG nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:38:59 compute-0 nova_compute[243452]: 2026-02-28 10:38:59.859 243456 DEBUG nova.network.neutron [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:38:59 compute-0 podman[367234]: 2026-02-28 10:38:59.884921437 +0000 UTC m=+0.045908571 container create 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:38:59 compute-0 systemd[1]: Started libpod-conmon-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope.
Feb 28 10:38:59 compute-0 podman[367234]: 2026-02-28 10:38:59.864052086 +0000 UTC m=+0.025039240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:38:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:38:59 compute-0 podman[367234]: 2026-02-28 10:38:59.997586717 +0000 UTC m=+0.158573871 container init 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:39:00 compute-0 podman[367234]: 2026-02-28 10:39:00.010792421 +0000 UTC m=+0.171779565 container start 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.010 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.010 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.011 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG oslo_concurrency.lockutils [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.012 243456 DEBUG nova.compute.manager [req-7494d190-ec1c-4774-8e62-252ab267f007 req-750e5d08-df42-4efb-b86c-4675a09b4264 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-unplugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:39:00 compute-0 podman[367234]: 2026-02-28 10:39:00.016481152 +0000 UTC m=+0.177468296 container attach 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.401 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.401 243456 DEBUG nova.network.neutron [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:00 compute-0 nova_compute[243452]: 2026-02-28 10:39:00.608 243456 DEBUG oslo_concurrency.lockutils [req-4d2bd8d9-6f27-4bb8-8eb8-268f8af3cb78 req-051a0615-f4db-4447-abbf-b7a7ec4430f3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:00 compute-0 ceph-mon[76304]: pgmap v2239: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Feb 28 10:39:00 compute-0 lvm[367326]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:39:00 compute-0 lvm[367326]: VG ceph_vg0 finished
Feb 28 10:39:00 compute-0 lvm[367328]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:39:00 compute-0 lvm[367328]: VG ceph_vg1 finished
Feb 28 10:39:00 compute-0 lvm[367329]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:39:00 compute-0 lvm[367329]: VG ceph_vg2 finished
Feb 28 10:39:00 compute-0 infallible_liskov[367251]: {}
Feb 28 10:39:00 compute-0 systemd[1]: libpod-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Deactivated successfully.
Feb 28 10:39:00 compute-0 systemd[1]: libpod-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Consumed 1.301s CPU time.
Feb 28 10:39:00 compute-0 podman[367332]: 2026-02-28 10:39:00.921797346 +0000 UTC m=+0.030177015 container died 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccdb2e1973d7c77c8b190c9ce13144d7f6f7439b3fbc3d492805ecbcec52e98-merged.mount: Deactivated successfully.
Feb 28 10:39:00 compute-0 podman[367332]: 2026-02-28 10:39:00.982825084 +0000 UTC m=+0.091204713 container remove 5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:39:00 compute-0 systemd[1]: libpod-conmon-5872de76d109d9133442f330f53a13bfa30c53d1b132317c48a31e2358f496a0.scope: Deactivated successfully.
Feb 28 10:39:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 235 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 28 10:39:01 compute-0 sudo[367063]: pam_unix(sudo:session): session closed for user root
Feb 28 10:39:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:39:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:39:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:39:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:39:01 compute-0 sudo[367347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:39:01 compute-0 sudo[367347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:39:01 compute-0 sudo[367347]: pam_unix(sudo:session): session closed for user root
Feb 28 10:39:01 compute-0 nova_compute[243452]: 2026-02-28 10:39:01.767 243456 DEBUG nova.network.neutron [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:01 compute-0 nova_compute[243452]: 2026-02-28 10:39:01.882 243456 INFO nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Took 2.02 seconds to deallocate network for instance.
Feb 28 10:39:01 compute-0 nova_compute[243452]: 2026-02-28 10:39:01.982 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:01 compute-0 nova_compute[243452]: 2026-02-28 10:39:01.982 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.068 243456 DEBUG oslo_concurrency.processutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.109 243456 DEBUG nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.109 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG oslo_concurrency.lockutils [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 DEBUG nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] No waiting events found dispatching network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.110 243456 WARNING nova.compute.manager [req-41e501e8-cb70-4580-a367-4a728d708e53 req-b377c67e-eec1-4156-ad26-68c1f1a4df30 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Received unexpected event network-vif-plugged-fc0b52d5-3577-4d09-bac1-192cb6c80057 for instance with vm_state deleted and task_state None.
Feb 28 10:39:02 compute-0 ceph-mon[76304]: pgmap v2240: 305 pgs: 305 active+clean; 235 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 28 10:39:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:39:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:39:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216275258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.582 243456 DEBUG oslo_concurrency.processutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.589 243456 DEBUG nova.compute.provider_tree [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.623 243456 DEBUG nova.scheduler.client.report [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.810 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:02 compute-0 nova_compute[243452]: 2026-02-28 10:39:02.847 243456 INFO nova.scheduler.client.report [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 1546a5a7-9aea-450c-acbe-689f2b660359
Feb 28 10:39:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 185 op/s
Feb 28 10:39:03 compute-0 nova_compute[243452]: 2026-02-28 10:39:03.101 243456 DEBUG oslo_concurrency.lockutils [None req-99fae916-22e7-4f30-a900-a5737b5b5da2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "1546a5a7-9aea-450c-acbe-689f2b660359" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/216275258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:04 compute-0 ceph-mon[76304]: pgmap v2241: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 185 op/s
Feb 28 10:39:04 compute-0 nova_compute[243452]: 2026-02-28 10:39:04.386 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:04 compute-0 nova_compute[243452]: 2026-02-28 10:39:04.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:04 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 10:39:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 850 KiB/s wr, 182 op/s
Feb 28 10:39:05 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 10:39:06 compute-0 ceph-mon[76304]: pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 850 KiB/s wr, 182 op/s
Feb 28 10:39:06 compute-0 ovn_controller[146846]: 2026-02-28T10:39:06Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:80:6e 10.100.0.9
Feb 28 10:39:06 compute-0 ovn_controller[146846]: 2026-02-28T10:39:06Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:80:6e 10.100.0.9
Feb 28 10:39:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 204 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 389 KiB/s wr, 153 op/s
Feb 28 10:39:08 compute-0 ceph-mon[76304]: pgmap v2243: 305 pgs: 305 active+clean; 204 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 389 KiB/s wr, 153 op/s
Feb 28 10:39:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 144 op/s
Feb 28 10:39:09 compute-0 nova_compute[243452]: 2026-02-28 10:39:09.389 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:09 compute-0 nova_compute[243452]: 2026-02-28 10:39:09.615 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:10 compute-0 ceph-mon[76304]: pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 144 op/s
Feb 28 10:39:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Feb 28 10:39:11 compute-0 ovn_controller[146846]: 2026-02-28T10:39:11Z|01436|binding|INFO|Releasing lport 1c423105-d23e-4da9-afeb-9405c7fd0060 from this chassis (sb_readonly=0)
Feb 28 10:39:11 compute-0 ovn_controller[146846]: 2026-02-28T10:39:11Z|01437|binding|INFO|Releasing lport 86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a from this chassis (sb_readonly=0)
Feb 28 10:39:11 compute-0 nova_compute[243452]: 2026-02-28 10:39:11.538 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:12 compute-0 ceph-mon[76304]: pgmap v2245: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Feb 28 10:39:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Feb 28 10:39:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:14 compute-0 podman[367398]: 2026-02-28 10:39:14.148215915 +0000 UTC m=+0.074561412 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 10:39:14 compute-0 podman[367397]: 2026-02-28 10:39:14.186423857 +0000 UTC m=+0.113564616 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, 
io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:39:14 compute-0 sshd-session[367395]: Invalid user sol from 45.148.10.240 port 43728
Feb 28 10:39:14 compute-0 nova_compute[243452]: 2026-02-28 10:39:14.352 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275139.3489072, 1546a5a7-9aea-450c-acbe-689f2b660359 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:14 compute-0 nova_compute[243452]: 2026-02-28 10:39:14.352 243456 INFO nova.compute.manager [-] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] VM Stopped (Lifecycle Event)
Feb 28 10:39:14 compute-0 nova_compute[243452]: 2026-02-28 10:39:14.387 243456 DEBUG nova.compute.manager [None req-6312bf48-fee0-4fbc-84ee-fd1c9231f0c0 - - - - - -] [instance: 1546a5a7-9aea-450c-acbe-689f2b660359] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:14 compute-0 nova_compute[243452]: 2026-02-28 10:39:14.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:14 compute-0 sshd-session[367395]: Connection closed by invalid user sol 45.148.10.240 port 43728 [preauth]
Feb 28 10:39:14 compute-0 nova_compute[243452]: 2026-02-28 10:39:14.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:14 compute-0 ceph-mon[76304]: pgmap v2246: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Feb 28 10:39:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:39:16 compute-0 nova_compute[243452]: 2026-02-28 10:39:16.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:16 compute-0 ceph-mon[76304]: pgmap v2247: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:39:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.338 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.362 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.471 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.472 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.481 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.481 243456 INFO nova.compute.claims [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:39:17 compute-0 nova_compute[243452]: 2026-02-28 10:39:17.617 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947877545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.165 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.172 243456 DEBUG nova.compute.provider_tree [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.189 243456 DEBUG nova.scheduler.client.report [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.210 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.211 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.267 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.268 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.292 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.311 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.418 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.419 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.420 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating image(s)
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.450 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.483 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.514 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.519 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.553 243456 DEBUG nova.policy [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.590 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.591 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.591 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.592 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.615 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:18 compute-0 nova_compute[243452]: 2026-02-28 10:39:18.620 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 69899d22-e5ee-410a-8280-57cc79ffa188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:18 compute-0 ceph-mon[76304]: pgmap v2248: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:39:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2947877545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.057 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 69899d22-e5ee-410a-8280-57cc79ffa188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.131 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.261 243456 DEBUG nova.objects.instance [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.281 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.282 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Ensure instance console log exists: /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.283 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.283 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.284 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.312 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully created port: 05baa6e8-a67d-4e5d-84d7-2d27c726335a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:19 compute-0 ceph-mon[76304]: pgmap v2249: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Feb 28 10:39:19 compute-0 nova_compute[243452]: 2026-02-28 10:39:19.858 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully created port: 05cd9b0a-030d-404b-b77a-570992915fae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:39:20 compute-0 sshd-session[367557]: Received disconnect from 103.67.78.202 port 47002:11: Bye Bye [preauth]
Feb 28 10:39:20 compute-0 sshd-session[367557]: Disconnected from authenticating user root 103.67.78.202 port 47002 [preauth]
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.030 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully updated port: 05baa6e8-a67d-4e5d-84d7-2d27c726335a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:39:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.175 243456 DEBUG nova.compute.manager [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.177 243456 DEBUG nova.compute.manager [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.178 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.373 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.730 243456 DEBUG nova.network.neutron [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.749 243456 DEBUG oslo_concurrency.lockutils [req-be48a697-b51c-4eb5-9924-69acc3fe9c64 req-0cf63253-9da5-4e7a-a11f-ea14b9f9f396 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.776 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Successfully updated port: 05cd9b0a-030d-404b-b77a-570992915fae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.792 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.793 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.793 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:39:21 compute-0 nova_compute[243452]: 2026-02-28 10:39:21.980 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:39:22 compute-0 ceph-mon[76304]: pgmap v2250: 305 pgs: 305 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Feb 28 10:39:22 compute-0 sshd-session[367632]: Received disconnect from 103.67.78.202 port 47012:11: Bye Bye [preauth]
Feb 28 10:39:22 compute-0 sshd-session[367632]: Disconnected from authenticating user root 103.67.78.202 port 47012 [preauth]
Feb 28 10:39:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:23 compute-0 nova_compute[243452]: 2026-02-28 10:39:23.534 243456 DEBUG nova.compute.manager [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:23 compute-0 nova_compute[243452]: 2026-02-28 10:39:23.535 243456 DEBUG nova.compute.manager [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05cd9b0a-030d-404b-b77a-570992915fae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:23 compute-0 nova_compute[243452]: 2026-02-28 10:39:23.535 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.054 243456 DEBUG nova.network.neutron [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.077 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.078 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance network_info: |[{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.079 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.079 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05cd9b0a-030d-404b-b77a-570992915fae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.086 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start _get_guest_xml network_info=[{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.095 243456 WARNING nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.113 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.114 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:39:24 compute-0 ceph-mon[76304]: pgmap v2251: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.119 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.119 243456 DEBUG nova.virt.libvirt.host [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.120 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.121 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.122 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.122 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.123 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.123 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.124 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.124 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.125 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.125 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.126 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.126 243456 DEBUG nova.virt.hardware [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.132 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:39:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366693204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.744 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.780 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:24 compute-0 nova_compute[243452]: 2026-02-28 10:39:24.786 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/366693204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.306 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.307 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.327 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:39:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:39:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620690958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.382 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.384 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.385 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.386 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.387 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.387 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.388 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.389 243456 DEBUG nova.objects.instance [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.411 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <uuid>69899d22-e5ee-410a-8280-57cc79ffa188</uuid>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <name>instance-0000008b</name>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1192420529</nova:name>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:39:24</nova:creationTime>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:port uuid="05baa6e8-a67d-4e5d-84d7-2d27c726335a">
Feb 28 10:39:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <nova:port uuid="05cd9b0a-030d-404b-b77a-570992915fae">
Feb 28 10:39:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe0b:321c" ipVersion="6"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0b:321c" ipVersion="6"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="serial">69899d22-e5ee-410a-8280-57cc79ffa188</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="uuid">69899d22-e5ee-410a-8280-57cc79ffa188</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/69899d22-e5ee-410a-8280-57cc79ffa188_disk">
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/69899d22-e5ee-410a-8280-57cc79ffa188_disk.config">
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:39:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b7:6f:db"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <target dev="tap05baa6e8-a6"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:0b:32:1c"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <target dev="tap05cd9b0a-03"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/console.log" append="off"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:39:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:39:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:39:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:39:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:39:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Preparing to wait for external event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.413 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Preparing to wait for external event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.414 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.415 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.415 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.416 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.416 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.417 243456 DEBUG os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.418 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.419 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.420 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.421 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05baa6e8-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.426 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05baa6e8-a6, col_values=(('external_ids', {'iface-id': '05baa6e8-a67d-4e5d-84d7-2d27c726335a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:6f:db', 'vm-uuid': '69899d22-e5ee-410a-8280-57cc79ffa188'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 NetworkManager[49805]: <info>  [1772275165.4292] manager: (tap05baa6e8-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.437 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.438 243456 INFO nova.compute.claims [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.446 243456 INFO os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6')
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.448 243456 DEBUG nova.virt.libvirt.vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:18Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.448 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.450 243456 DEBUG nova.network.os_vif_util [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.451 243456 DEBUG os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.452 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.453 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cd9b0a-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.457 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cd9b0a-03, col_values=(('external_ids', {'iface-id': '05cd9b0a-030d-404b-b77a-570992915fae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:32:1c', 'vm-uuid': '69899d22-e5ee-410a-8280-57cc79ffa188'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 NetworkManager[49805]: <info>  [1772275165.4617] manager: (tap05cd9b0a-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.467 243456 INFO os_vif [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03')
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.581 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05cd9b0a-030d-404b-b77a-570992915fae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.583 243456 DEBUG nova.network.neutron [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.592 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.592 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.593 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:b7:6f:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.593 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:0b:32:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.594 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Using config drive
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.625 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.633 243456 DEBUG oslo_concurrency.lockutils [req-2a9bdf59-ba2d-4568-953c-40e13ca8e99c req-48716d26-ed7e-45e4-87b6-4eeb6d58586a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:25 compute-0 nova_compute[243452]: 2026-02-28 10:39:25.680 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.020 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Creating config drive at /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.030 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjqx6p9zh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:26 compute-0 ceph-mon[76304]: pgmap v2252: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/620690958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.173 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjqx6p9zh" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.204 243456 DEBUG nova.storage.rbd_utils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.208 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371558887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.252 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.259 243456 DEBUG nova.compute.provider_tree [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.276 243456 DEBUG nova.scheduler.client.report [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.326 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.327 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.483 243456 DEBUG oslo_concurrency.processutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config 69899d22-e5ee-410a-8280-57cc79ffa188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.484 243456 INFO nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deleting local config drive /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188/disk.config because it was imported into RBD.
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5415] manager: (tap05baa6e8-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Feb 28 10:39:26 compute-0 kernel: tap05baa6e8-a6: entered promiscuous mode
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01438|binding|INFO|Claiming lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a for this chassis.
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01439|binding|INFO|05baa6e8-a67d-4e5d-84d7-2d27c726335a: Claiming fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 10:39:26 compute-0 kernel: tap05cd9b0a-03: entered promiscuous mode
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5595] manager: (tap05cd9b0a-03): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.561 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.561 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01440|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a ovn-installed in OVS
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01441|if_status|INFO|Dropped 1 log messages in last 35 seconds (most recently, 35 seconds ago) due to excessive rate
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01442|if_status|INFO|Not updating pb chassis for 05cd9b0a-030d-404b-b77a-570992915fae now as sb is readonly
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 systemd-udevd[367794]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:39:26 compute-0 systemd-udevd[367795]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5799] device (tap05cd9b0a-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5831] device (tap05cd9b0a-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5838] device (tap05baa6e8-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:39:26 compute-0 NetworkManager[49805]: <info>  [1772275166.5844] device (tap05baa6e8-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01443|binding|INFO|Claiming lport 05cd9b0a-030d-404b-b77a-570992915fae for this chassis.
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01444|binding|INFO|05cd9b0a-030d-404b-b77a-570992915fae: Claiming fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01445|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a up in Southbound
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.610 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6f:db 10.100.0.13'], port_security=['fa:16:3e:b7:6f:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05baa6e8-a67d-4e5d-84d7-2d27c726335a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01446|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae ovn-installed in OVS
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.611 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05baa6e8-a67d-4e5d-84d7-2d27c726335a in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee bound to our chassis
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.612 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 systemd-machined[209480]: New machine qemu-172-instance-0000008b.
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.630 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e187b48-c520-42db-b8b7-11cbc560778a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008b.
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.649 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], port_security=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0b:321c/64 2001:db8::f816:3eff:fe0b:321c/64', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05cd9b0a-030d-404b-b77a-570992915fae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:26 compute-0 ovn_controller[146846]: 2026-02-28T10:39:26Z|01447|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae up in Southbound
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.650 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.659 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[77ece079-88ed-41eb-a7e6-8402f9e07445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.662 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b52427-09b8-4d18-b2b6-7cee077bb519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.694 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cd04589a-e7ed-4e14-900c-bc3b21b6176e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.715 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[852d7c2d-6600-4519-ae92-8e6e01f12b19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367813, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.733 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19b5bae6-9762-4fc9-b748-eb9c95503072]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663882, 'tstamp': 663882}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367815, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663886, 'tstamp': 663886}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367815, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.735 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.741 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.786 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.789 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.789 243456 DEBUG nova.policy [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.789 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.791 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05cd9b0a-030d-404b-b77a-570992915fae in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0892e985-baca-46cd-af7d-a663ea25830c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.836 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[80d6b0be-c621-4e73-b8c7-3be7194387ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.840 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e830fa-64f7-4967-ba70-02c1422d0e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.862 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9663b3ec-a655-470b-bbdb-ad37cd492114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.880 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2521eae7-ef7b-45e2-999f-cbe4b84f0764]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367821, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.894 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[48049375-0f20-42c0-adc1-5f8af4c88726]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap685f3a92-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663976, 'tstamp': 663976}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367822, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.896 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.899 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:26.900 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.931 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.932 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.933 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating image(s)
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.954 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:26 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.978 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:26.999 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.004 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.045 243456 DEBUG nova.compute.manager [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.046 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.046 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.047 243456 DEBUG oslo_concurrency.lockutils [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.047 243456 DEBUG nova.compute.manager [req-10ee384b-e006-4b89-928d-652fdee25965 req-81c99645-22ef-4bf0-9946-d18a9fc3001d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Processing event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.070 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.070 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.071 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.071 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.092 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.096 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3371558887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.302 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275167.3015208, 69899d22-e5ee-410a-8280-57cc79ffa188 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.303 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Started (Lifecycle Event)
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.344 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275167.3038695, 69899d22-e5ee-410a-8280-57cc79ffa188 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.350 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Paused (Lifecycle Event)
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.380 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.385 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.389 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.465 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.497 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.551 243456 DEBUG nova.objects.instance [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.568 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.568 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Ensure instance console log exists: /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.569 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.569 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:27 compute-0 nova_compute[243452]: 2026-02-28 10:39:27.570 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:28 compute-0 nova_compute[243452]: 2026-02-28 10:39:28.089 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Successfully created port: 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:39:28 compute-0 ceph-mon[76304]: pgmap v2253: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:39:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:39:29
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta']
Feb 28 10:39:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.627 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.708 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.709 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.710 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.710 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.711 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No event matching network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a in dict_keys([('network-vif-plugged', '05cd9b0a-030d-404b-b77a-570992915fae')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.712 243456 WARNING nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with vm_state building and task_state spawning.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.712 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.713 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.714 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.714 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.715 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Processing event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.716 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.716 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.717 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.718 243456 DEBUG oslo_concurrency.lockutils [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.718 243456 DEBUG nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.719 243456 WARNING nova.compute.manager [req-532ca3ef-dfbf-42f1-9fe2-850a0caf69ce req-227ccd6e-936c-4dfd-b1fe-7a1508c0e117 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with vm_state building and task_state spawning.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.721 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.726 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275169.72594, 69899d22-e5ee-410a-8280-57cc79ffa188 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.727 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Resumed (Lifecycle Event)
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.731 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.736 243456 INFO nova.virt.libvirt.driver [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance spawned successfully.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.737 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.748 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.753 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.808 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.811 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Successfully updated port: 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.815 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.816 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.816 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.817 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.818 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.818 243456 DEBUG nova.virt.libvirt.driver [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.824 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.884 243456 INFO nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 11.47 seconds to spawn the instance on the hypervisor.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.885 243456 DEBUG nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.953 243456 INFO nova.compute.manager [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 12.53 seconds to build instance.
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.974 243456 DEBUG oslo_concurrency.lockutils [None req-446a4013-f592-4753-9851-044328ea3bde be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:29 compute-0 nova_compute[243452]: 2026-02-28 10:39:29.986 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:39:30 compute-0 ceph-mon[76304]: pgmap v2254: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:39:30 compute-0 nova_compute[243452]: 2026-02-28 10:39:30.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:39:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:39:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Feb 28 10:39:31 compute-0 nova_compute[243452]: 2026-02-28 10:39:31.880 243456 DEBUG nova.compute.manager [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:31 compute-0 nova_compute[243452]: 2026-02-28 10:39:31.881 243456 DEBUG nova.compute.manager [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:31 compute-0 nova_compute[243452]: 2026-02-28 10:39:31.882 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:31 compute-0 nova_compute[243452]: 2026-02-28 10:39:31.994 243456 DEBUG nova.network.neutron [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.243 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.244 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance network_info: |[{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.246 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.246 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.249 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start _get_guest_xml network_info=[{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.255 243456 WARNING nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.260 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.261 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.264 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.265 243456 DEBUG nova.virt.libvirt.host [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.266 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.266 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.267 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.267 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.268 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.268 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.269 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.270 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.270 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.271 243456 DEBUG nova.virt.hardware [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:39:32 compute-0 ceph-mon[76304]: pgmap v2255: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.275 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:39:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252349400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.823 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.869 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:32 compute-0 nova_compute[243452]: 2026-02-28 10:39:32.876 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 10:39:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/252349400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:39:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397351184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.482 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.484 243456 DEBUG nova.virt.libvirt.vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.485 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.486 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.488 243456 DEBUG nova.objects.instance [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.533 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <uuid>023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</uuid>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <name>instance-0000008c</name>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-823103484</nova:name>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:39:32</nova:creationTime>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <nova:port uuid="247ab5af-a0a0-4d02-93ea-d7dbed82f571">
Feb 28 10:39:33 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <system>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="serial">023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="uuid">023e788b-dd3e-4a85-a2e5-e97ad24d6ff1</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </system>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <os>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </os>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <features>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </features>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk">
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </source>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config">
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </source>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:39:33 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7f:58:95"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <target dev="tap247ab5af-a0"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/console.log" append="off"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <video>
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </video>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:39:33 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:39:33 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:39:33 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:39:33 compute-0 nova_compute[243452]: </domain>
Feb 28 10:39:33 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.537 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Preparing to wait for external event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.538 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.540 243456 DEBUG nova.virt.libvirt.vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:39:26Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.540 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.541 243456 DEBUG nova.network.os_vif_util [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.541 243456 DEBUG os_vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.542 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.542 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.543 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.545 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.545 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap247ab5af-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.546 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap247ab5af-a0, col_values=(('external_ids', {'iface-id': '247ab5af-a0a0-4d02-93ea-d7dbed82f571', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:58:95', 'vm-uuid': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:33 compute-0 NetworkManager[49805]: <info>  [1772275173.5492] manager: (tap247ab5af-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.552 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.554 243456 INFO os_vif [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0')
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.654 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.655 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.655 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:7f:58:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.656 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Using config drive
Feb 28 10:39:33 compute-0 nova_compute[243452]: 2026-02-28 10:39:33.678 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.004 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.004 243456 DEBUG nova.network.neutron [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.088 243456 DEBUG oslo_concurrency.lockutils [req-3295f048-b555-4250-99bc-e430b3b9625e req-4d3d1e3a-7b3c-46ae-bbc4-21227e10d049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.118 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Creating config drive at /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.123 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wuyk6i3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.262 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5wuyk6i3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:34 compute-0 ceph-mon[76304]: pgmap v2256: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 95 op/s
Feb 28 10:39:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3397351184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.327 243456 DEBUG nova.storage.rbd_utils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.332 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.486 243456 DEBUG oslo_concurrency.processutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.488 243456 INFO nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deleting local config drive /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1/disk.config because it was imported into RBD.
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.5656] manager: (tap247ab5af-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Feb 28 10:39:34 compute-0 kernel: tap247ab5af-a0: entered promiscuous mode
Feb 28 10:39:34 compute-0 ovn_controller[146846]: 2026-02-28T10:39:34Z|01448|binding|INFO|Claiming lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 for this chassis.
Feb 28 10:39:34 compute-0 ovn_controller[146846]: 2026-02-28T10:39:34Z|01449|binding|INFO|247ab5af-a0a0-4d02-93ea-d7dbed82f571: Claiming fa:16:3e:7f:58:95 10.100.0.4
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.575 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:34 compute-0 ovn_controller[146846]: 2026-02-28T10:39:34Z|01450|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 ovn-installed in OVS
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:34 compute-0 systemd-udevd[368165]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.598 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:34 compute-0 ovn_controller[146846]: 2026-02-28T10:39:34Z|01451|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 up in Southbound
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.600 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 bound to our chassis
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.601 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.6045] device (tap247ab5af-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.6054] device (tap247ab5af-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:39:34 compute-0 systemd-machined[209480]: New machine qemu-173-instance-0000008c.
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.623 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f49b9ae2-e4c2-46c4-97ac-9c8ba695bd12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.624 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72eb3f47-a1 in ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.626 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72eb3f47-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.626 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cec91d5e-2ed7-4db5-9e6c-f63985397d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.628 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[832a625e-2862-463c-b272-b75aa6283886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:34 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008c.
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.645 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b9d514-664e-46fb-80fe-842dfc5b578e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.670 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9741acbc-3486-4717-b39f-631953ddac28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.703 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f05ad-4c4a-4e4b-a516-7fc4443b3a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.7108] manager: (tap72eb3f47-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Feb 28 10:39:34 compute-0 systemd-udevd[368169]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.714 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b587b8f0-e071-4fc9-9cd7-0d3ec893df2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.754 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7241af1e-e032-4e14-aae6-15fe1b9c6ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.759 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b7295ee6-2716-46aa-9729-28e5e89eb186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.7911] device (tap72eb3f47-a0): carrier: link connected
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.798 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bf26987d-3209-4eb5-a778-4c1df0a52ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.820 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdd45b9-beaa-417e-ae5a-b07ae835bbbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72eb3f47-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:eb:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 430], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668207, 'reachable_time': 24149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368201, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.840 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0149f7-200e-49d4-9e6a-2994ca2cc848]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3f:eb92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668207, 'tstamp': 668207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368202, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bde6892e-7ac0-4a8b-a2c2-afc639f1c9a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72eb3f47-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3f:eb:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 430], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668207, 'reachable_time': 24149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368203, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.896 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8bb65d-da4b-49da-b3e5-2c7a3f8af9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.971 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6544009d-88ae-4ea1-b9c6-68275d45828b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.973 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72eb3f47-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.974 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.975 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72eb3f47-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:34 compute-0 NetworkManager[49805]: <info>  [1772275174.9780] manager: (tap72eb3f47-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Feb 28 10:39:34 compute-0 kernel: tap72eb3f47-a0: entered promiscuous mode
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:34 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:34.990 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72eb3f47-a0, col_values=(('external_ids', {'iface-id': 'a55b6aa3-f8a4-4c13-b5f3-0d6d78f0bfc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:34 compute-0 ovn_controller[146846]: 2026-02-28T10:39:34Z|01452|binding|INFO|Releasing lport a55b6aa3-f8a4-4c13-b5f3-0d6d78f0bfc1 from this chassis (sb_readonly=0)
Feb 28 10:39:34 compute-0 nova_compute[243452]: 2026-02-28 10:39:34.994 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.003 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.004 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbe62b6-3fc0-484d-a96e-3311ad64cd4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.006 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-72eb3f47-a6c4-4982-9972-2a47fea8b4a8
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.pid.haproxy
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 72eb3f47-a6c4-4982-9972-2a47fea8b4a8
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:39:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:35.007 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'env', 'PROCESS_TAG=haproxy-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72eb3f47-a6c4-4982-9972-2a47fea8b4a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:39:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.316 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275175.3156798, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.318 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Started (Lifecycle Event)
Feb 28 10:39:35 compute-0 podman[368276]: 2026-02-28 10:39:35.410025254 +0000 UTC m=+0.055329458 container create 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:39:35 compute-0 systemd[1]: Started libpod-conmon-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope.
Feb 28 10:39:35 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:39:35 compute-0 podman[368276]: 2026-02-28 10:39:35.377832792 +0000 UTC m=+0.023136996 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b41b330d4269ed8fc4838007e0e8ccc448467ce7b729453762d243b302d938/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.482 243456 DEBUG nova.compute.manager [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.483 243456 DEBUG nova.compute.manager [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.484 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:35 compute-0 podman[368276]: 2026-02-28 10:39:35.488857296 +0000 UTC m=+0.134161500 container init 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:39:35 compute-0 podman[368276]: 2026-02-28 10:39:35.493902599 +0000 UTC m=+0.139206803 container start 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.500 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.506 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275175.3160083, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.507 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Paused (Lifecycle Event)
Feb 28 10:39:35 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : New worker (368297) forked
Feb 28 10:39:35 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : Loading success.
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.575 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.582 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:39:35 compute-0 nova_compute[243452]: 2026-02-28 10:39:35.620 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.150 243456 DEBUG nova.compute.manager [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.151 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.152 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.152 243456 DEBUG oslo_concurrency.lockutils [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.153 243456 DEBUG nova.compute.manager [req-52c37f4e-3599-4255-a1fc-19263529f2a7 req-fd1dd2fa-5812-4c16-98c9-f8032bfcd155 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Processing event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.154 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.160 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275176.16023, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.161 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Resumed (Lifecycle Event)
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.163 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.173 243456 INFO nova.virt.libvirt.driver [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance spawned successfully.
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.174 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.187 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.196 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.205 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.206 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.207 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.208 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.209 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.210 243456 DEBUG nova.virt.libvirt.driver [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.280 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.302 243456 INFO nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 9.37 seconds to spawn the instance on the hypervisor.
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.303 243456 DEBUG nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:39:36 compute-0 ceph-mon[76304]: pgmap v2257: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.380 243456 INFO nova.compute.manager [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 11.01 seconds to build instance.
Feb 28 10:39:36 compute-0 nova_compute[243452]: 2026-02-28 10:39:36.410 243456 DEBUG oslo_concurrency.lockutils [None req-4be82386-edc1-47c2-a1b8-002dae9c63b7 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.240 243456 DEBUG nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.242 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.242 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.243 243456 DEBUG oslo_concurrency.lockutils [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.244 243456 DEBUG nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.244 243456 WARNING nova.compute.manager [req-e168949d-cefa-4120-a6d8-607d9ac813a3 req-625a0702-3107-4c81-a8a3-4847bf55a593 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state active and task_state None.
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.287 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.288 243456 DEBUG nova.network.neutron [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.319 243456 DEBUG oslo_concurrency.lockutils [req-a828c0ef-7aa3-4c12-9172-cf6a2a840cee req-3fbd039b-319a-46a9-b609-47b3ac80fa65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:38 compute-0 ceph-mon[76304]: pgmap v2258: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 28 10:39:38 compute-0 nova_compute[243452]: 2026-02-28 10:39:38.549 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 28 10:39:39 compute-0 nova_compute[243452]: 2026-02-28 10:39:39.207 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:39 compute-0 nova_compute[243452]: 2026-02-28 10:39:39.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:40 compute-0 ceph-mon[76304]: pgmap v2259: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.335 243456 DEBUG nova.compute.manager [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.336 243456 DEBUG nova.compute.manager [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.336 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.337 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:40 compute-0 nova_compute[243452]: 2026-02-28 10:39:40.337 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:40 compute-0 ovn_controller[146846]: 2026-02-28T10:39:40Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 10:39:40 compute-0 ovn_controller[146846]: 2026-02-28T10:39:40Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:6f:db 10.100.0.13
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 331 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001559519042740221 of space, bias 1.0, pg target 0.4678557128220663 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939441600177813 of space, bias 1.0, pg target 0.7481832480053344 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.299579834731566e-07 of space, bias 4.0, pg target 0.0008759495801677878 quantized to 16 (current 16)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:39:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:39:41 compute-0 nova_compute[243452]: 2026-02-28 10:39:41.717 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:41 compute-0 nova_compute[243452]: 2026-02-28 10:39:41.719 243456 DEBUG nova.network.neutron [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:41 compute-0 nova_compute[243452]: 2026-02-28 10:39:41.744 243456 DEBUG oslo_concurrency.lockutils [req-feb07667-5d44-456a-ab12-1ddb25208c25 req-ac1a7ae1-774f-4d44-9cc7-eada70518aac 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:42 compute-0 nova_compute[243452]: 2026-02-28 10:39:42.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:42 compute-0 nova_compute[243452]: 2026-02-28 10:39:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:42 compute-0 nova_compute[243452]: 2026-02-28 10:39:42.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:39:42 compute-0 ceph-mon[76304]: pgmap v2260: 305 pgs: 305 active+clean; 331 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Feb 28 10:39:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 347 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 164 op/s
Feb 28 10:39:43 compute-0 nova_compute[243452]: 2026-02-28 10:39:43.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:43 compute-0 nova_compute[243452]: 2026-02-28 10:39:43.552 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:44 compute-0 ceph-mon[76304]: pgmap v2261: 305 pgs: 305 active+clean; 347 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 164 op/s
Feb 28 10:39:44 compute-0 nova_compute[243452]: 2026-02-28 10:39:44.635 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Feb 28 10:39:45 compute-0 podman[368308]: 2026-02-28 10:39:45.145035749 +0000 UTC m=+0.070985981 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:39:45 compute-0 podman[368307]: 2026-02-28 10:39:45.18851947 +0000 UTC m=+0.116652364 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 10:39:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:39:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:39:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:39:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:39:46 compute-0 nova_compute[243452]: 2026-02-28 10:39:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:46 compute-0 nova_compute[243452]: 2026-02-28 10:39:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:46 compute-0 ceph-mon[76304]: pgmap v2262: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Feb 28 10:39:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:39:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/82606068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:39:46 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 10:39:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Feb 28 10:39:47 compute-0 ovn_controller[146846]: 2026-02-28T10:39:47Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:58:95 10.100.0.4
Feb 28 10:39:47 compute-0 ovn_controller[146846]: 2026-02-28T10:39:47Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:58:95 10.100.0.4
Feb 28 10:39:48 compute-0 nova_compute[243452]: 2026-02-28 10:39:48.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:48 compute-0 ceph-mon[76304]: pgmap v2263: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Feb 28 10:39:48 compute-0 nova_compute[243452]: 2026-02-28 10:39:48.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.596 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.597 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:49 compute-0 nova_compute[243452]: 2026-02-28 10:39:49.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:50 compute-0 ceph-mon[76304]: pgmap v2264: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Feb 28 10:39:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 389 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.293 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.320 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.321 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.322 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:52 compute-0 ceph-mon[76304]: pgmap v2265: 305 pgs: 305 active+clean; 389 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Feb 28 10:39:52 compute-0 sshd-session[368348]: Received disconnect from 103.67.78.132 port 39192:11: Bye Bye [preauth]
Feb 28 10:39:52 compute-0 sshd-session[368348]: Disconnected from authenticating user root 103.67.78.132 port 39192 [preauth]
Feb 28 10:39:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104048542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:52 compute-0 nova_compute[243452]: 2026-02-28 10:39:52.980 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.066 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.066 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.070 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.074 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.263 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2976MB free_disk=59.851134022697806GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.265 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance f6deb920-f186-43f5-9ea0-642f4a6e830e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 69899d22-e5ee-410a-8280-57cc79ffa188 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.360 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.375 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.376 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.388 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:39:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4104048542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.407 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.468 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:53 compute-0 nova_compute[243452]: 2026-02-28 10:39:53.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400991154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.095 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.102 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.120 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.328 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.329 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:39:54 compute-0 ceph-mon[76304]: pgmap v2266: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Feb 28 10:39:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1400991154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.574 243456 DEBUG nova.compute.manager [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.575 243456 DEBUG nova.compute.manager [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing instance network info cache due to event network-changed-05baa6e8-a67d-4e5d-84d7-2d27c726335a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.578 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.579 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.579 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Refreshing network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.673 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.673 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.674 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.674 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.675 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.678 243456 INFO nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Terminating instance
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.680 243456 DEBUG nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:39:54 compute-0 kernel: tap05baa6e8-a6 (unregistering): left promiscuous mode
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.732 243456 INFO nova.compute.manager [None req-a6c02d16-1f2d-465c-987a-532b6922013b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Get console output
Feb 28 10:39:54 compute-0 NetworkManager[49805]: <info>  [1772275194.7347] device (tap05baa6e8-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.744 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:39:54 compute-0 kernel: tap05cd9b0a-03 (unregistering): left promiscuous mode
Feb 28 10:39:54 compute-0 NetworkManager[49805]: <info>  [1772275194.7563] device (tap05cd9b0a-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01453|binding|INFO|Releasing lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a from this chassis (sb_readonly=0)
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01454|binding|INFO|Setting lport 05baa6e8-a67d-4e5d-84d7-2d27c726335a down in Southbound
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01455|binding|INFO|Removing iface tap05baa6e8-a6 ovn-installed in OVS
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.814 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6f:db 10.100.0.13'], port_security=['fa:16:3e:b7:6f:db 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05baa6e8-a67d-4e5d-84d7-2d27c726335a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.815 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05baa6e8-a67d-4e5d-84d7-2d27c726335a in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee unbound from our chassis
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.817 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 386ad93c-8128-414d-bc97-7c3f009f2aee
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01456|binding|INFO|Releasing lport 05cd9b0a-030d-404b-b77a-570992915fae from this chassis (sb_readonly=0)
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01457|binding|INFO|Setting lport 05cd9b0a-030d-404b-b77a-570992915fae down in Southbound
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_controller[146846]: 2026-02-28T10:39:54Z|01458|binding|INFO|Removing iface tap05cd9b0a-03 ovn-installed in OVS
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.825 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.827 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], port_security=['fa:16:3e:0b:32:1c 2001:db8:0:1:f816:3eff:fe0b:321c 2001:db8::f816:3eff:fe0b:321c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0b:321c/64 2001:db8::f816:3eff:fe0b:321c/64', 'neutron:device_id': '69899d22-e5ee-410a-8280-57cc79ffa188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=05cd9b0a-030d-404b-b77a-570992915fae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.838 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ea04e9d5-7bba-4340-bdba-4a46eafd8365]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Feb 28 10:39:54 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008b.scope: Consumed 12.605s CPU time.
Feb 28 10:39:54 compute-0 systemd-machined[209480]: Machine qemu-172-instance-0000008b terminated.
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.864 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf328d7-413f-4d51-8783-3abdb9ec9457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.868 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6079e890-7675-4f7b-a2ee-07624a6c107c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.890 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[61b0da5d-d28a-430f-b85b-56f48cc9ce0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.908 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f86e5fef-5acd-44e9-b2c1-001bf1621713]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap386ad93c-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:f3:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663869, 'reachable_time': 19915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368411, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.928 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4923e28f-858a-4e0e-8b85-81147a673019]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663882, 'tstamp': 663882}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368421, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap386ad93c-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663886, 'tstamp': 663886}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368421, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.930 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.932 243456 INFO nova.virt.libvirt.driver [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Instance destroyed successfully.
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.933 243456 DEBUG nova.objects.instance [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 69899d22-e5ee-410a-8280-57cc79ffa188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.940 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386ad93c-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.942 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.943 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap386ad93c-80, col_values=(('external_ids', {'iface-id': '1c423105-d23e-4da9-afeb-9405c7fd0060'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.943 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.945 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 05cd9b0a-030d-404b-b77a-570992915fae in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.946 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 685f3a92-853c-417a-a00b-ba5c70b02f2d
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.951 243456 DEBUG nova.virt.libvirt.vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:29Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.953 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.954 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.955 243456 DEBUG os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.958 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.958 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05baa6e8-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.962 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9549e1-2f66-4121-b99f-2e33c1f89a49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.971 243456 INFO os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:6f:db,bridge_name='br-int',has_traffic_filtering=True,id=05baa6e8-a67d-4e5d-84d7-2d27c726335a,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05baa6e8-a6')
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.972 243456 DEBUG nova.virt.libvirt.vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1192420529',display_name='tempest-TestGettingAddress-server-1192420529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1192420529',id=139,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-2ripkvj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:29Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=69899d22-e5ee-410a-8280-57cc79ffa188,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.972 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.973 243456 DEBUG nova.network.os_vif_util [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.974 243456 DEBUG os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.975 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cd9b0a-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.977 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:39:54 compute-0 nova_compute[243452]: 2026-02-28 10:39:54.982 243456 INFO os_vif [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:32:1c,bridge_name='br-int',has_traffic_filtering=True,id=05cd9b0a-030d-404b-b77a-570992915fae,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cd9b0a-03')
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.984 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[82f9c21a-33a1-4ffe-a384-381a2b4052db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:54.987 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8f62c852-4940-46fd-92fe-93411a81cbc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.012 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05cce998-b841-4280-ae1e-eb8f56640d4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8b8ac8-3aee-47e3-b94f-9dc95c17ecb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap685f3a92-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:b7:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 423], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663963, 'reachable_time': 33771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368461, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.043 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[493253ce-816a-4080-848a-f810ee7502a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap685f3a92-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663976, 'tstamp': 663976}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368462, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.046 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685f3a92-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.049 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap685f3a92-80, col_values=(('external_ids', {'iface-id': '86a5934a-bf0f-4d7e-ba7a-fdb27e99df6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.050 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:39:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.260 243456 INFO nova.virt.libvirt.driver [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deleting instance files /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188_del
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.261 243456 INFO nova.virt.libvirt.driver [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deletion of /var/lib/nova/instances/69899d22-e5ee-410a-8280-57cc79ffa188_del complete
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.321 243456 INFO nova.compute.manager [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.322 243456 DEBUG oslo.service.loopingcall [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.323 243456 DEBUG nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:39:55 compute-0 nova_compute[243452]: 2026-02-28 10:39:55.323 243456 DEBUG nova.network.neutron [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:39:55 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:55.332 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:56 compute-0 ovn_controller[146846]: 2026-02-28T10:39:56Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:58:95 10.100.0.4
Feb 28 10:39:56 compute-0 ceph-mon[76304]: pgmap v2267: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 571 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.415 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updated VIF entry in instance network info cache for port 05baa6e8-a67d-4e5d-84d7-2d27c726335a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.416 243456 DEBUG nova.network.neutron [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "address": "fa:16:3e:b7:6f:db", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05baa6e8-a6", "ovs_interfaceid": "05baa6e8-a67d-4e5d-84d7-2d27c726335a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.441 243456 DEBUG oslo_concurrency.lockutils [req-067175d0-1714-4db9-a1dd-1e2e5dd27c42 req-9c8f1508-2605-4760-9073-c2b583290022 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-69899d22-e5ee-410a-8280-57cc79ffa188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.785 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.786 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 WARNING nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05baa6e8-a67d-4e5d-84d7-2d27c726335a for instance with vm_state active and task_state deleting.
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.787 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-unplugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG oslo_concurrency.lockutils [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.788 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] No waiting events found dispatching network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 WARNING nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received unexpected event network-vif-plugged-05cd9b0a-030d-404b-b77a-570992915fae for instance with vm_state active and task_state deleting.
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-deleted-05baa6e8-a67d-4e5d-84d7-2d27c726335a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 INFO nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Neutron deleted interface 05baa6e8-a67d-4e5d-84d7-2d27c726335a; detaching it from the instance and deleting it from the info cache
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.789 243456 DEBUG nova.network.neutron [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [{"id": "05cd9b0a-030d-404b-b77a-570992915fae", "address": "fa:16:3e:0b:32:1c", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0b:321c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cd9b0a-03", "ovs_interfaceid": "05cd9b0a-030d-404b-b77a-570992915fae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.790 243456 DEBUG nova.network.neutron [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.813 243456 DEBUG nova.compute.manager [req-f20cd7e0-71d3-4c45-adfa-1eb2ce6d868e req-a78ccb7a-de9e-416f-8512-a4cbfbabcb62 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Detach interface failed, port_id=05baa6e8-a67d-4e5d-84d7-2d27c726335a, reason: Instance 69899d22-e5ee-410a-8280-57cc79ffa188 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.822 243456 INFO nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Took 1.50 seconds to deallocate network for instance.
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.885 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.886 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.891 243456 DEBUG nova.compute.manager [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.892 243456 DEBUG nova.compute.manager [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing instance network info cache due to event network-changed-247ab5af-a0a0-4d02-93ea-d7dbed82f571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.893 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.893 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.894 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Refreshing network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.948 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.948 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.949 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.950 243456 INFO nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Terminating instance
Feb 28 10:39:56 compute-0 nova_compute[243452]: 2026-02-28 10:39:56.951 243456 DEBUG nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.003 243456 DEBUG oslo_concurrency.processutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:57 compute-0 kernel: tap247ab5af-a0 (unregistering): left promiscuous mode
Feb 28 10:39:57 compute-0 NetworkManager[49805]: <info>  [1772275197.0223] device (tap247ab5af-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:39:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01459|binding|INFO|Releasing lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 from this chassis (sb_readonly=0)
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01460|binding|INFO|Setting lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 down in Southbound
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01461|binding|INFO|Removing iface tap247ab5af-a0 ovn-installed in OVS
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.065 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.066 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.068 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02c3cf29-e484-4421-ae95-66764a2339f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.069 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 namespace which is not needed anymore
Feb 28 10:39:57 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Feb 28 10:39:57 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008c.scope: Consumed 12.696s CPU time.
Feb 28 10:39:57 compute-0 systemd-machined[209480]: Machine qemu-173-instance-0000008c terminated.
Feb 28 10:39:57 compute-0 kernel: tap247ab5af-a0: entered promiscuous mode
Feb 28 10:39:57 compute-0 NetworkManager[49805]: <info>  [1772275197.1746] manager: (tap247ab5af-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Feb 28 10:39:57 compute-0 systemd-udevd[368400]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01462|binding|INFO|Claiming lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 for this chassis.
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01463|binding|INFO|247ab5af-a0a0-4d02-93ea-d7dbed82f571: Claiming fa:16:3e:7f:58:95 10.100.0.4
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 kernel: tap247ab5af-a0 (unregistering): left promiscuous mode
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.188 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.195 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 ovn_controller[146846]: 2026-02-28T10:39:57Z|01464|binding|INFO|Releasing lport 247ab5af-a0a0-4d02-93ea-d7dbed82f571 from this chassis (sb_readonly=0)
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.199 243456 INFO nova.virt.libvirt.driver [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Instance destroyed successfully.
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.200 243456 DEBUG nova.objects.instance [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.205 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:58:95 10.100.0.4'], port_security=['fa:16:3e:7f:58:95 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '023e788b-dd3e-4a85-a2e5-e97ad24d6ff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4240f7d5-bbec-4937-b4a6-68a1a0e467cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d75922a7-401f-4a05-a541-e32618728569, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=247ab5af-a0a0-4d02-93ea-d7dbed82f571) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.218 243456 DEBUG nova.virt.libvirt.vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-823103484',display_name='tempest-TestNetworkBasicOps-server-823103484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-823103484',id=140,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlRPDBZ/UdFU5xhRLg2YsbBBqME/rNpkkZFa7kTxtUL7bLINgILV0oznkIgnASqfKbT8PhHg4HY5yFJH+rTHkD1QXlc7NRmyoVKdQxyQnQqn2gOVqEPju9kBsy9tmaBqQ==',key_name='tempest-TestNetworkBasicOps-1428223995',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:39:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-d46lhxih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:39:36Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=023e788b-dd3e-4a85-a2e5-e97ad24d6ff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.218 243456 DEBUG nova.network.os_vif_util [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.219 243456 DEBUG nova.network.os_vif_util [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.219 243456 DEBUG os_vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.221 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : haproxy version is 2.8.14-c23fe91
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [NOTICE]   (368295) : path to executable is /usr/sbin/haproxy
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : Exiting Master process...
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : Exiting Master process...
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.221 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap247ab5af-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.223 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [ALERT]    (368295) : Current worker (368297) exited with code 143 (Terminated)
Feb 28 10:39:57 compute-0 neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8[368291]: [WARNING]  (368295) : All workers exited. Exiting... (0)
Feb 28 10:39:57 compute-0 systemd[1]: libpod-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope: Deactivated successfully.
Feb 28 10:39:57 compute-0 conmon[368291]: conmon 983d5540a162e6b997ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope/container/memory.events
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.226 243456 INFO os_vif [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:58:95,bridge_name='br-int',has_traffic_filtering=True,id=247ab5af-a0a0-4d02-93ea-d7dbed82f571,network=Network(72eb3f47-a6c4-4982-9972-2a47fea8b4a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap247ab5af-a0')
Feb 28 10:39:57 compute-0 podman[368503]: 2026-02-28 10:39:57.233813132 +0000 UTC m=+0.070319902 container died 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375-userdata-shm.mount: Deactivated successfully.
Feb 28 10:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b41b330d4269ed8fc4838007e0e8ccc448467ce7b729453762d243b302d938-merged.mount: Deactivated successfully.
Feb 28 10:39:57 compute-0 podman[368503]: 2026-02-28 10:39:57.273263989 +0000 UTC m=+0.109770759 container cleanup 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:39:57 compute-0 systemd[1]: libpod-conmon-983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375.scope: Deactivated successfully.
Feb 28 10:39:57 compute-0 podman[368554]: 2026-02-28 10:39:57.3439183 +0000 UTC m=+0.051800178 container remove 983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.349 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a34ccbd7-23f5-4b93-aa72-227316196366]: (4, ('Sat Feb 28 10:39:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 (983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375)\n983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375\nSat Feb 28 10:39:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 (983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375)\n983d5540a162e6b997ec97d3ea8e8bd546cd489f1584fdb60853d829aed0e375\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0b171db2-f4ab-4f16-af96-593b9b4e841a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.353 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72eb3f47-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.356 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 kernel: tap72eb3f47-a0: left promiscuous mode
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.368 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f99350-1799-4b6c-9bdf-524d0acf36da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.394 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[00324aaa-1385-426d-8388-3c87eb95e7aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbbae86-44de-46c1-b1af-7894c2e6a8af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.413 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[747c4707-a50e-4fee-afec-dabaa8334023]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668198, 'reachable_time': 31790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368569, 'error': None, 'target': 'ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d72eb3f47\x2da6c4\x2d4982\x2d9972\x2d2a47fea8b4a8.mount: Deactivated successfully.
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.418 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72eb3f47-a6c4-4982-9972-2a47fea8b4a8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.419 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[e5204d31-53df-4599-b046-e30a1b0d391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.420 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.421 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a212b197-37ac-45bc-879f-484cd8af8092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.422 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 247ab5af-a0a0-4d02-93ea-d7dbed82f571 in datapath 72eb3f47-a6c4-4982-9972-2a47fea8b4a8 unbound from our chassis
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.423 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72eb3f47-a6c4-4982-9972-2a47fea8b4a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.424 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e333786-bd4c-47c2-99b5-0c5d5ffa0cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293911241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.562 243456 DEBUG oslo_concurrency.processutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.570 243456 DEBUG nova.compute.provider_tree [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.579 243456 INFO nova.virt.libvirt.driver [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deleting instance files /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_del
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.581 243456 INFO nova.virt.libvirt.driver [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deletion of /var/lib/nova/instances/023e788b-dd3e-4a85-a2e5-e97ad24d6ff1_del complete
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.607 243456 DEBUG nova.scheduler.client.report [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.636 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.640 243456 INFO nova.compute.manager [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 0.69 seconds to destroy the instance on the hypervisor.
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.640 243456 DEBUG oslo.service.loopingcall [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.641 243456 DEBUG nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.641 243456 DEBUG nova.network.neutron [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.663 243456 INFO nova.scheduler.client.report [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 69899d22-e5ee-410a-8280-57cc79ffa188
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.878 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:57 compute-0 nova_compute[243452]: 2026-02-28 10:39:57.938 243456 DEBUG oslo_concurrency.lockutils [None req-8c2449db-77e2-4854-b232-104a8c59fd31 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "69899d22-e5ee-410a-8280-57cc79ffa188" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.273 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updated VIF entry in instance network info cache for port 247ab5af-a0a0-4d02-93ea-d7dbed82f571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.274 243456 DEBUG nova.network.neutron [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [{"id": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "address": "fa:16:3e:7f:58:95", "network": {"id": "72eb3f47-a6c4-4982-9972-2a47fea8b4a8", "bridge": "br-int", "label": "tempest-network-smoke--2052995257", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap247ab5af-a0", "ovs_interfaceid": "247ab5af-a0a0-4d02-93ea-d7dbed82f571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.303 243456 DEBUG oslo_concurrency.lockutils [req-a46d8eb7-ff43-48f8-acf0-3791b1579470 req-1f7f6ba6-eca8-4a91-8aab-30c1cde294e6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.353 243456 DEBUG nova.network.neutron [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.376 243456 INFO nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Took 0.74 seconds to deallocate network for instance.
Feb 28 10:39:58 compute-0 ceph-mon[76304]: pgmap v2268: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Feb 28 10:39:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1293911241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.429 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.431 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.510 243456 DEBUG oslo_concurrency.processutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.701 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.702 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.703 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.704 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.704 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.707 243456 INFO nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Terminating instance
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.709 243456 DEBUG nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:39:58 compute-0 kernel: tap5bc174e9-3e (unregistering): left promiscuous mode
Feb 28 10:39:58 compute-0 NetworkManager[49805]: <info>  [1772275198.7684] device (tap5bc174e9-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01465|binding|INFO|Releasing lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 from this chassis (sb_readonly=0)
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01466|binding|INFO|Setting lport 5bc174e9-3e24-499b-8f52-0bcff974bf56 down in Southbound
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.779 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01467|binding|INFO|Removing iface tap5bc174e9-3e ovn-installed in OVS
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.790 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:80:6e 10.100.0.9'], port_security=['fa:16:3e:40:80:6e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-386ad93c-8128-414d-bc97-7c3f009f2aee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12c8c4dc-c3d0-48e2-a2ff-065b46b0bb43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5bc174e9-3e24-499b-8f52-0bcff974bf56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5bc174e9-3e24-499b-8f52-0bcff974bf56 in datapath 386ad93c-8128-414d-bc97-7c3f009f2aee unbound from our chassis
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.794 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 386ad93c-8128-414d-bc97-7c3f009f2aee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.795 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9d98c5-d7e1-40bd-87f8-b202ea9b68fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.796 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee namespace which is not needed anymore
Feb 28 10:39:58 compute-0 kernel: tap2fcd845f-86 (unregistering): left promiscuous mode
Feb 28 10:39:58 compute-0 NetworkManager[49805]: <info>  [1772275198.8033] device (tap2fcd845f-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01468|binding|INFO|Releasing lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 from this chassis (sb_readonly=0)
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01469|binding|INFO|Setting lport 2fcd845f-8646-4afc-923a-1b7570fbdc36 down in Southbound
Feb 28 10:39:58 compute-0 ovn_controller[146846]: 2026-02-28T10:39:58Z|01470|binding|INFO|Removing iface tap2fcd845f-86 ovn-installed in OVS
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:58 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:58.829 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], port_security=['fa:16:3e:b3:ee:08 2001:db8:0:1:f816:3eff:feb3:ee08 2001:db8::f816:3eff:feb3:ee08'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb3:ee08/64 2001:db8::f816:3eff:feb3:ee08/64', 'neutron:device_id': 'f6deb920-f186-43f5-9ea0-642f4a6e830e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77637734-a2d3-42b4-baf6-bf1326483326', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df8aa31f-e638-4e49-ac27-bf5e1988a64a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=2fcd845f-8646-4afc-923a-1b7570fbdc36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:58 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000089.scope: Deactivated successfully.
Feb 28 10:39:58 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000089.scope: Consumed 15.126s CPU time.
Feb 28 10:39:58 compute-0 systemd-machined[209480]: Machine qemu-170-instance-00000089 terminated.
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : haproxy version is 2.8.14-c23fe91
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [NOTICE]   (366223) : path to executable is /usr/sbin/haproxy
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : Exiting Master process...
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : Exiting Master process...
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [ALERT]    (366223) : Current worker (366225) exited with code 143 (Terminated)
Feb 28 10:39:58 compute-0 neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee[366218]: [WARNING]  (366223) : All workers exited. Exiting... (0)
Feb 28 10:39:58 compute-0 systemd[1]: libpod-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope: Deactivated successfully.
Feb 28 10:39:58 compute-0 NetworkManager[49805]: <info>  [1772275198.9434] manager: (tap2fcd845f-86): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Feb 28 10:39:58 compute-0 podman[368621]: 2026-02-28 10:39:58.949601915 +0000 UTC m=+0.050023967 container died 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:39:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.982 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Received event network-vif-deleted-05cd9b0a-030d-404b-b77a-570992915fae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.983 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.984 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.987 243456 WARNING nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-unplugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state deleted and task_state None.
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG oslo_concurrency.lockutils [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] No waiting events found dispatching network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.988 243456 WARNING nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received unexpected event network-vif-plugged-247ab5af-a0a0-4d02-93ea-d7dbed82f571 for instance with vm_state deleted and task_state None.
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.989 243456 DEBUG nova.compute.manager [req-82f6b693-462c-467b-9932-c88ad0bfc0cc req-7785314d-c234-4b83-8763-8c05edde2c3a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Received event network-vif-deleted-247ab5af-a0a0-4d02-93ea-d7dbed82f571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.995 243456 INFO nova.virt.libvirt.driver [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Instance destroyed successfully.
Feb 28 10:39:58 compute-0 nova_compute[243452]: 2026-02-28 10:39:58.996 243456 DEBUG nova.objects.instance [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid f6deb920-f186-43f5-9ea0-642f4a6e830e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d-userdata-shm.mount: Deactivated successfully.
Feb 28 10:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-353ad3c61fc41f7332854dd7161b413a55568a38d52cee91efbf7a09e713c9f0-merged.mount: Deactivated successfully.
Feb 28 10:39:59 compute-0 podman[368621]: 2026-02-28 10:39:59.018958419 +0000 UTC m=+0.119380451 container cleanup 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.021 243456 DEBUG nova.virt.libvirt.vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.022 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.022 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.023 243456 DEBUG os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.027 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bc174e9-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:59 compute-0 systemd[1]: libpod-conmon-19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d.scope: Deactivated successfully.
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.036 243456 INFO os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:80:6e,bridge_name='br-int',has_traffic_filtering=True,id=5bc174e9-3e24-499b-8f52-0bcff974bf56,network=Network(386ad93c-8128-414d-bc97-7c3f009f2aee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bc174e9-3e')
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.037 243456 DEBUG nova.virt.libvirt.vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1245781750',display_name='tempest-TestGettingAddress-server-1245781750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1245781750',id=137,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHDH+5DUTx5iBPqC0EDa1S6dogceA/+8wHwWd1BSaRvWHn3D8AtYXHbWFXwRPGXZg46P7b16sTNaYzFdl+C2i24U7XPS1mXh/yVjcDyoOTxx5RFD1Jt0UBrR4R+2oVkYmg==',key_name='tempest-TestGettingAddress-830625014',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-34sdq54p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:38:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=f6deb920-f186-43f5-9ea0-642f4a6e830e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG nova.network.os_vif_util [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.038 243456 DEBUG os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.039 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.039 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fcd845f-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.040 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.041 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.042 243456 INFO os_vif [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:ee:08,bridge_name='br-int',has_traffic_filtering=True,id=2fcd845f-8646-4afc-923a-1b7570fbdc36,network=Network(685f3a92-853c-417a-a00b-ba5c70b02f2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fcd845f-86')
Feb 28 10:39:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Feb 28 10:39:59 compute-0 podman[368671]: 2026-02-28 10:39:59.086127081 +0000 UTC m=+0.048820093 container remove 19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.093 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5da477d-9a15-46ac-b72b-15de52169454]: (4, ('Sat Feb 28 10:39:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee (19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d)\n19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d\nSat Feb 28 10:39:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee (19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d)\n19a1fbbe2e2d896f48f0f30852e803704211aa1cfc20300acbf35235512aec8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff21df3-3f75-4f38-9a47-5f3362396704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.096 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386ad93c-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 kernel: tap386ad93c-80: left promiscuous mode
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.109 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7cb165-4ec0-4222-8160-5488df5bb13c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.126 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e4f5df-d53c-4665-a019-8b8cc7aaf9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.127 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f0dea760-a8e9-4878-8765-2f7dbb73e879]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:39:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070273112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.144 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cac4dedd-331b-466c-a3f8-455e61a39985]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663862, 'reachable_time': 25554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368705, 'error': None, 'target': 'ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d386ad93c\x2d8128\x2d414d\x2dbc97\x2d7c3f009f2aee.mount: Deactivated successfully.
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.149 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-386ad93c-8128-414d-bc97-7c3f009f2aee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.149 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[92c07af0-f69d-4f9f-894c-2481ec714d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.150 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 2fcd845f-8646-4afc-923a-1b7570fbdc36 in datapath 685f3a92-853c-417a-a00b-ba5c70b02f2d unbound from our chassis
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.152 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 685f3a92-853c-417a-a00b-ba5c70b02f2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.153 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ca700-398d-42bb-b6fc-d4ad3d335324]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.154 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d namespace which is not needed anymore
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.174 243456 DEBUG oslo_concurrency.processutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.182 243456 DEBUG nova.compute.provider_tree [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:39:59 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : haproxy version is 2.8.14-c23fe91
Feb 28 10:39:59 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [NOTICE]   (366373) : path to executable is /usr/sbin/haproxy
Feb 28 10:39:59 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [WARNING]  (366373) : Exiting Master process...
Feb 28 10:39:59 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [ALERT]    (366373) : Current worker (366376) exited with code 143 (Terminated)
Feb 28 10:39:59 compute-0 neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d[366368]: [WARNING]  (366373) : All workers exited. Exiting... (0)
Feb 28 10:39:59 compute-0 systemd[1]: libpod-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope: Deactivated successfully.
Feb 28 10:39:59 compute-0 podman[368727]: 2026-02-28 10:39:59.295753777 +0000 UTC m=+0.051629893 container died 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003-userdata-shm.mount: Deactivated successfully.
Feb 28 10:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca0bd47b535c15e2228c3436fbfee03368e983c319a518449256dd29eafc04f-merged.mount: Deactivated successfully.
Feb 28 10:39:59 compute-0 podman[368727]: 2026-02-28 10:39:59.333791024 +0000 UTC m=+0.089667140 container cleanup 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:39:59 compute-0 systemd[1]: libpod-conmon-7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003.scope: Deactivated successfully.
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.348 243456 INFO nova.virt.libvirt.driver [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deleting instance files /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e_del
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.349 243456 INFO nova.virt.libvirt.driver [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deletion of /var/lib/nova/instances/f6deb920-f186-43f5-9ea0-642f4a6e830e_del complete
Feb 28 10:39:59 compute-0 podman[368758]: 2026-02-28 10:39:59.395649096 +0000 UTC m=+0.040613682 container remove 7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.399 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5534c7-4a81-4e91-95cf-ec6c210573fc]: (4, ('Sat Feb 28 10:39:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d (7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003)\n7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003\nSat Feb 28 10:39:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d (7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003)\n7306c60b59d58abab76c109c9286f8c764728593a16ad7c6eff1c2c907ead003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.401 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f581f36f-f06e-4dbd-8ae5-860a5507b90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.402 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685f3a92-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 kernel: tap685f3a92-80: left promiscuous mode
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.410 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e14b21d-582d-4bbd-89c3-7b5bc5c3a55d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2070273112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.422 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[08aa3a8b-f3dc-4325-81bd-c1260f8ae36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.423 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb398a7-5145-4452-a011-73914c640000]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.436 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a30459f-7a0a-4d45-bc6a-dc1d640b3e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663956, 'reachable_time': 44608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368773, 'error': None, 'target': 'ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.438 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-685f3a92-853c-417a-a00b-ba5c70b02f2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:39:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:39:59.438 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[a65761f2-833d-48ff-b68e-99ada1d41e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG nova.compute.manager [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG nova.compute.manager [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing instance network info cache due to event network-changed-5bc174e9-3e24-499b-8f52-0bcff974bf56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.473 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.474 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Refreshing network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.495 243456 DEBUG nova.scheduler.client.report [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.526 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.533 243456 INFO nova.compute.manager [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 0.82 seconds to destroy the instance on the hypervisor.
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG oslo.service.loopingcall [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.534 243456 DEBUG nova.network.neutron [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.568 243456 INFO nova.scheduler.client.report [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.633 243456 DEBUG oslo_concurrency.lockutils [None req-a0d8f8b3-fddb-4ba3-8208-b58ccec8c553 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "023e788b-dd3e-4a85-a2e5-e97ad24d6ff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:39:59 compute-0 nova_compute[243452]: 2026-02-28 10:39:59.642 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d685f3a92\x2d853c\x2d417a\x2da00b\x2dba5c70b02f2d.mount: Deactivated successfully.
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:00 compute-0 ceph-mon[76304]: pgmap v2269: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Feb 28 10:40:00 compute-0 nova_compute[243452]: 2026-02-28 10:40:00.808 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updated VIF entry in instance network info cache for port 5bc174e9-3e24-499b-8f52-0bcff974bf56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:00 compute-0 nova_compute[243452]: 2026-02-28 10:40:00.809 243456 DEBUG nova.network.neutron [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "address": "fa:16:3e:b3:ee:08", "network": {"id": "685f3a92-853c-417a-a00b-ba5c70b02f2d", "bridge": "br-int", "label": "tempest-network-smoke--1683323948", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb3:ee08", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fcd845f-86", "ovs_interfaceid": "2fcd845f-8646-4afc-923a-1b7570fbdc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:00 compute-0 nova_compute[243452]: 2026-02-28 10:40:00.830 243456 DEBUG oslo_concurrency.lockutils [req-5dcc5254-7555-435d-b61e-46f119d6602e req-eb2aecec-9b5c-40da-a28d-45b50de26049 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-f6deb920-f186-43f5-9ea0-642f4a6e830e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 293 KiB/s rd, 1.5 MiB/s wr, 127 op/s
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.081 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.082 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.082 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.083 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.084 243456 DEBUG oslo_concurrency.lockutils [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 WARNING nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-5bc174e9-3e24-499b-8f52-0bcff974bf56 for instance with vm_state active and task_state deleting.
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.085 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-deleted-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.086 243456 INFO nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Neutron deleted interface 2fcd845f-8646-4afc-923a-1b7570fbdc36; detaching it from the instance and deleting it from the info cache
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.086 243456 DEBUG nova.network.neutron [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [{"id": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "address": "fa:16:3e:40:80:6e", "network": {"id": "386ad93c-8128-414d-bc97-7c3f009f2aee", "bridge": "br-int", "label": "tempest-network-smoke--622868633", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bc174e9-3e", "ovs_interfaceid": "5bc174e9-3e24-499b-8f52-0bcff974bf56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.111 243456 DEBUG nova.compute.manager [req-ffc8b4bf-e23c-4901-a414-8d7984b9da3c req-6fa9dbb3-de64-4a39-95a5-48a99a35ce2a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Detach interface failed, port_id=2fcd845f-8646-4afc-923a-1b7570fbdc36, reason: Instance f6deb920-f186-43f5-9ea0-642f4a6e830e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.157 243456 DEBUG nova.network.neutron [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.172 243456 INFO nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Took 1.64 seconds to deallocate network for instance.
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.220 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.221 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:01 compute-0 sudo[368774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:40:01 compute-0 sudo[368774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:01 compute-0 sudo[368774]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.267 243456 DEBUG oslo_concurrency.processutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:01 compute-0 sudo[368799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:40:01 compute-0 sudo[368799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.572 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 WARNING nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-unplugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state deleted and task_state None.
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG oslo_concurrency.lockutils [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.573 243456 DEBUG nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] No waiting events found dispatching network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.574 243456 WARNING nova.compute.manager [req-4d9cb7b8-c076-4568-a7f6-e44bcfa2ceb8 req-9eda3b91-e542-4ee8-9013-d5c675d9708f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received unexpected event network-vif-plugged-2fcd845f-8646-4afc-923a-1b7570fbdc36 for instance with vm_state deleted and task_state None.
Feb 28 10:40:01 compute-0 sudo[368799]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:40:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987712077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:01 compute-0 sudo[368874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:40:01 compute-0 sudo[368874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:01 compute-0 sudo[368874]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.853 243456 DEBUG oslo_concurrency.processutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.860 243456 DEBUG nova.compute.provider_tree [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.875 243456 DEBUG nova.scheduler.client.report [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.894 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:01 compute-0 sudo[368901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:40:01 compute-0 sudo[368901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:01 compute-0 nova_compute[243452]: 2026-02-28 10:40:01.928 243456 INFO nova.scheduler.client.report [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance f6deb920-f186-43f5-9ea0-642f4a6e830e
Feb 28 10:40:02 compute-0 nova_compute[243452]: 2026-02-28 10:40:02.018 243456 DEBUG oslo_concurrency.lockutils [None req-955e518d-3be0-47be-9b1e-f8819924b3a1 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "f6deb920-f186-43f5-9ea0-642f4a6e830e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.203009678 +0000 UTC m=+0.063191000 container create fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:40:02 compute-0 systemd[1]: Started libpod-conmon-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope.
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.177198187 +0000 UTC m=+0.037379579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.296507615 +0000 UTC m=+0.156688937 container init fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.30444371 +0000 UTC m=+0.164625002 container start fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.309332429 +0000 UTC m=+0.169513721 container attach fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 10:40:02 compute-0 dreamy_colden[368955]: 167 167
Feb 28 10:40:02 compute-0 systemd[1]: libpod-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope: Deactivated successfully.
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.314096243 +0000 UTC m=+0.174277545 container died fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1276add8fcafdffaa64631a2efa3a41707c252dd2da49707ed6364ddaef0d7a7-merged.mount: Deactivated successfully.
Feb 28 10:40:02 compute-0 podman[368938]: 2026-02-28 10:40:02.358691856 +0000 UTC m=+0.218873188 container remove fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:40:02 compute-0 systemd[1]: libpod-conmon-fda13903744767e27f600e63b468ac7709b06715c066e88cf6096312cecddf98.scope: Deactivated successfully.
Feb 28 10:40:02 compute-0 ceph-mon[76304]: pgmap v2270: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 293 KiB/s rd, 1.5 MiB/s wr, 127 op/s
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:40:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/987712077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:02 compute-0 podman[368978]: 2026-02-28 10:40:02.485425295 +0000 UTC m=+0.038978735 container create fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:40:02 compute-0 systemd[1]: Started libpod-conmon-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope.
Feb 28 10:40:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:02 compute-0 podman[368978]: 2026-02-28 10:40:02.469550595 +0000 UTC m=+0.023104045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:02 compute-0 podman[368978]: 2026-02-28 10:40:02.592041484 +0000 UTC m=+0.145595014 container init fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:40:02 compute-0 podman[368978]: 2026-02-28 10:40:02.599964188 +0000 UTC m=+0.153517628 container start fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:40:02 compute-0 podman[368978]: 2026-02-28 10:40:02.604317091 +0000 UTC m=+0.157870531 container attach fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:40:03 compute-0 happy_khayyam[368994]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:40:03 compute-0 happy_khayyam[368994]: --> All data devices are unavailable
Feb 28 10:40:03 compute-0 systemd[1]: libpod-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope: Deactivated successfully.
Feb 28 10:40:03 compute-0 podman[368978]: 2026-02-28 10:40:03.040136622 +0000 UTC m=+0.593690052 container died fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:40:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 72 KiB/s wr, 91 op/s
Feb 28 10:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4d47b26eec4190c25cad8e1c053cbfc2b6dad5eee24b93242add40126d2b117-merged.mount: Deactivated successfully.
Feb 28 10:40:03 compute-0 podman[368978]: 2026-02-28 10:40:03.156981331 +0000 UTC m=+0.710534761 container remove fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:40:03 compute-0 nova_compute[243452]: 2026-02-28 10:40:03.154 243456 DEBUG nova.compute.manager [req-a2b7722a-143e-4e7a-954a-b4884d2a9304 req-916c271b-eda5-4e70-bb48-9326ea7f0aee 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Received event network-vif-deleted-5bc174e9-3e24-499b-8f52-0bcff974bf56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:03 compute-0 systemd[1]: libpod-conmon-fb1fe6fcb999346f07c38761793f5ed24802191013bfccf329045215d18ac0c1.scope: Deactivated successfully.
Feb 28 10:40:03 compute-0 sudo[368901]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:03 compute-0 sudo[369027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:40:03 compute-0 sudo[369027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:03 compute-0 sudo[369027]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:03 compute-0 sudo[369052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:40:03 compute-0 sudo[369052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.589290632 +0000 UTC m=+0.043057520 container create 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:40:03 compute-0 systemd[1]: Started libpod-conmon-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope.
Feb 28 10:40:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.660783716 +0000 UTC m=+0.114550574 container init 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.57122208 +0000 UTC m=+0.024988938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.670254245 +0000 UTC m=+0.124021093 container start 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.673713163 +0000 UTC m=+0.127480001 container attach 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:40:03 compute-0 pedantic_galois[369106]: 167 167
Feb 28 10:40:03 compute-0 systemd[1]: libpod-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope: Deactivated successfully.
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.675882404 +0000 UTC m=+0.129649252 container died 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-26292ed989e625c93f0e7aa5d8b937c11db22b4d063a8b78edb54e8ea35ba78b-merged.mount: Deactivated successfully.
Feb 28 10:40:03 compute-0 podman[369090]: 2026-02-28 10:40:03.710554876 +0000 UTC m=+0.164321714 container remove 9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:40:03 compute-0 systemd[1]: libpod-conmon-9fea45b1262f2162897c1dce23140873e2ad400bcf1d6b144ca303da0fb8114a.scope: Deactivated successfully.
Feb 28 10:40:03 compute-0 podman[369131]: 2026-02-28 10:40:03.861891761 +0000 UTC m=+0.043397190 container create 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:40:03 compute-0 systemd[1]: Started libpod-conmon-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope.
Feb 28 10:40:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:03 compute-0 podman[369131]: 2026-02-28 10:40:03.922622231 +0000 UTC m=+0.104127670 container init 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:40:03 compute-0 podman[369131]: 2026-02-28 10:40:03.927307513 +0000 UTC m=+0.108812902 container start 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:40:03 compute-0 podman[369131]: 2026-02-28 10:40:03.930453142 +0000 UTC m=+0.111958561 container attach 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:40:03 compute-0 podman[369131]: 2026-02-28 10:40:03.842709418 +0000 UTC m=+0.024214807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:04 compute-0 nova_compute[243452]: 2026-02-28 10:40:04.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:04 compute-0 gracious_buck[369147]: {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     "0": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "devices": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "/dev/loop3"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             ],
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_name": "ceph_lv0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_size": "21470642176",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "name": "ceph_lv0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "tags": {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_name": "ceph",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.crush_device_class": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.encrypted": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.objectstore": "bluestore",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_id": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.vdo": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.with_tpm": "0"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             },
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "vg_name": "ceph_vg0"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         }
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     ],
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     "1": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "devices": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "/dev/loop4"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             ],
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_name": "ceph_lv1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_size": "21470642176",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "name": "ceph_lv1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "tags": {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_name": "ceph",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.crush_device_class": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.encrypted": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.objectstore": "bluestore",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_id": "1",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.vdo": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.with_tpm": "0"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             },
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "vg_name": "ceph_vg1"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         }
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     ],
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     "2": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "devices": [
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "/dev/loop5"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             ],
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_name": "ceph_lv2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_size": "21470642176",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "name": "ceph_lv2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "tags": {
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.cluster_name": "ceph",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.crush_device_class": "",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.encrypted": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.objectstore": "bluestore",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osd_id": "2",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.vdo": "0",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:                 "ceph.with_tpm": "0"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             },
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "type": "block",
Feb 28 10:40:04 compute-0 gracious_buck[369147]:             "vg_name": "ceph_vg2"
Feb 28 10:40:04 compute-0 gracious_buck[369147]:         }
Feb 28 10:40:04 compute-0 gracious_buck[369147]:     ]
Feb 28 10:40:04 compute-0 gracious_buck[369147]: }
Feb 28 10:40:04 compute-0 systemd[1]: libpod-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope: Deactivated successfully.
Feb 28 10:40:04 compute-0 podman[369131]: 2026-02-28 10:40:04.19451475 +0000 UTC m=+0.376020159 container died 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e179c9e25e327aabbd523f1f6161c35cb2fc27402b7e73467729448d7bf0072a-merged.mount: Deactivated successfully.
Feb 28 10:40:04 compute-0 podman[369131]: 2026-02-28 10:40:04.252998996 +0000 UTC m=+0.434504425 container remove 36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:40:04 compute-0 systemd[1]: libpod-conmon-36fa44d458abcf61decbe9b9b5b59ae685c9802ccac2e3a3aed0629e35c1a8df.scope: Deactivated successfully.
Feb 28 10:40:04 compute-0 sudo[369052]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:04 compute-0 sudo[369168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:40:04 compute-0 sudo[369168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:04 compute-0 sudo[369168]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:04 compute-0 sudo[369193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:40:04 compute-0 sudo[369193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:04 compute-0 ceph-mon[76304]: pgmap v2271: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 72 KiB/s wr, 91 op/s
Feb 28 10:40:04 compute-0 nova_compute[243452]: 2026-02-28 10:40:04.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.780512583 +0000 UTC m=+0.057388906 container create accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:40:04 compute-0 systemd[1]: Started libpod-conmon-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope.
Feb 28 10:40:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.759721834 +0000 UTC m=+0.036598127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.856440323 +0000 UTC m=+0.133316636 container init accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.866149608 +0000 UTC m=+0.143025891 container start accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.87012147 +0000 UTC m=+0.146997853 container attach accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:40:04 compute-0 xenodochial_chaplygin[369248]: 167 167
Feb 28 10:40:04 compute-0 systemd[1]: libpod-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope: Deactivated successfully.
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.872910529 +0000 UTC m=+0.149786852 container died accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75f804841c73e142a4a23f3425e931fdc1d9e738785159166ddeb316c53b245-merged.mount: Deactivated successfully.
Feb 28 10:40:04 compute-0 podman[369232]: 2026-02-28 10:40:04.911342377 +0000 UTC m=+0.188218690 container remove accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaplygin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:40:04 compute-0 systemd[1]: libpod-conmon-accf3cfa935b1c3d2bb0dd6c700c758f679a5f0a1579bb465c8e4155dcc73477.scope: Deactivated successfully.
Feb 28 10:40:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 31 KiB/s wr, 87 op/s
Feb 28 10:40:05 compute-0 podman[369272]: 2026-02-28 10:40:05.121294552 +0000 UTC m=+0.066012971 container create cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:40:05 compute-0 systemd[1]: Started libpod-conmon-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope.
Feb 28 10:40:05 compute-0 podman[369272]: 2026-02-28 10:40:05.094369359 +0000 UTC m=+0.039087858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:40:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:05 compute-0 podman[369272]: 2026-02-28 10:40:05.237902963 +0000 UTC m=+0.182621482 container init cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:40:05 compute-0 podman[369272]: 2026-02-28 10:40:05.245705984 +0000 UTC m=+0.190424433 container start cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:40:05 compute-0 podman[369272]: 2026-02-28 10:40:05.253612178 +0000 UTC m=+0.198330637 container attach cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:40:05 compute-0 nova_compute[243452]: 2026-02-28 10:40:05.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:05 compute-0 nova_compute[243452]: 2026-02-28 10:40:05.694 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:05 compute-0 lvm[369366]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:40:05 compute-0 lvm[369368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:40:05 compute-0 lvm[369368]: VG ceph_vg1 finished
Feb 28 10:40:05 compute-0 lvm[369366]: VG ceph_vg0 finished
Feb 28 10:40:05 compute-0 lvm[369370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:40:05 compute-0 lvm[369370]: VG ceph_vg2 finished
Feb 28 10:40:06 compute-0 great_feynman[369288]: {}
Feb 28 10:40:06 compute-0 systemd[1]: libpod-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Deactivated successfully.
Feb 28 10:40:06 compute-0 podman[369272]: 2026-02-28 10:40:06.03380093 +0000 UTC m=+0.978519339 container died cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:40:06 compute-0 systemd[1]: libpod-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Consumed 1.219s CPU time.
Feb 28 10:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-073a1c4da57fab067b583b837cbbb6361f094172c492a7bbebea5b3e7960852f-merged.mount: Deactivated successfully.
Feb 28 10:40:06 compute-0 podman[369272]: 2026-02-28 10:40:06.072005382 +0000 UTC m=+1.016723781 container remove cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feynman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:40:06 compute-0 systemd[1]: libpod-conmon-cba106451efaf8ffac354c8868bcb708a8eca602617b92328b16e3d589168081.scope: Deactivated successfully.
Feb 28 10:40:06 compute-0 sudo[369193]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:40:06 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:40:06 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:06 compute-0 sudo[369385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:40:06 compute-0 sudo[369385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:40:06 compute-0 sudo[369385]: pam_unix(sudo:session): session closed for user root
Feb 28 10:40:06 compute-0 ceph-mon[76304]: pgmap v2272: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 31 KiB/s wr, 87 op/s
Feb 28 10:40:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:40:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 25 KiB/s wr, 85 op/s
Feb 28 10:40:08 compute-0 ceph-mon[76304]: pgmap v2273: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 25 KiB/s wr, 85 op/s
Feb 28 10:40:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:09 compute-0 nova_compute[243452]: 2026-02-28 10:40:09.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 10 KiB/s wr, 83 op/s
Feb 28 10:40:09 compute-0 nova_compute[243452]: 2026-02-28 10:40:09.646 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:09 compute-0 nova_compute[243452]: 2026-02-28 10:40:09.930 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275194.9295805, 69899d22-e5ee-410a-8280-57cc79ffa188 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:09 compute-0 nova_compute[243452]: 2026-02-28 10:40:09.931 243456 INFO nova.compute.manager [-] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] VM Stopped (Lifecycle Event)
Feb 28 10:40:09 compute-0 nova_compute[243452]: 2026-02-28 10:40:09.970 243456 DEBUG nova.compute.manager [None req-056af5f4-c19e-4fc7-bb36-936df1a5ae18 - - - - - -] [instance: 69899d22-e5ee-410a-8280-57cc79ffa188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:10 compute-0 ceph-mon[76304]: pgmap v2274: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 10 KiB/s wr, 83 op/s
Feb 28 10:40:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 KiB/s wr, 55 op/s
Feb 28 10:40:12 compute-0 nova_compute[243452]: 2026-02-28 10:40:12.193 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275197.1926315, 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:12 compute-0 nova_compute[243452]: 2026-02-28 10:40:12.194 243456 INFO nova.compute.manager [-] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] VM Stopped (Lifecycle Event)
Feb 28 10:40:12 compute-0 nova_compute[243452]: 2026-02-28 10:40:12.213 243456 DEBUG nova.compute.manager [None req-3465c9e4-376c-4f56-ba90-5aa71a01ae72 - - - - - -] [instance: 023e788b-dd3e-4a85-a2e5-e97ad24d6ff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:12 compute-0 ceph-mon[76304]: pgmap v2275: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 KiB/s wr, 55 op/s
Feb 28 10:40:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 10:40:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:13 compute-0 nova_compute[243452]: 2026-02-28 10:40:13.990 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275198.9838357, f6deb920-f186-43f5-9ea0-642f4a6e830e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:13 compute-0 nova_compute[243452]: 2026-02-28 10:40:13.991 243456 INFO nova.compute.manager [-] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] VM Stopped (Lifecycle Event)
Feb 28 10:40:14 compute-0 nova_compute[243452]: 2026-02-28 10:40:14.027 243456 DEBUG nova.compute.manager [None req-e893f997-e533-4ec5-b280-c26b22d4d9da - - - - - -] [instance: f6deb920-f186-43f5-9ea0-642f4a6e830e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:14 compute-0 nova_compute[243452]: 2026-02-28 10:40:14.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:14 compute-0 ceph-mon[76304]: pgmap v2276: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 10:40:14 compute-0 nova_compute[243452]: 2026-02-28 10:40:14.650 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:16 compute-0 podman[369411]: 2026-02-28 10:40:16.157467859 +0000 UTC m=+0.085091660 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:40:16 compute-0 podman[369410]: 2026-02-28 10:40:16.213034332 +0000 UTC m=+0.144768030 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 28 10:40:16 compute-0 ceph-mon[76304]: pgmap v2277: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:18 compute-0 ceph-mon[76304]: pgmap v2278: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.477 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.478 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.500 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.594 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.595 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.607 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.607 243456 INFO nova.compute.claims [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:19 compute-0 nova_compute[243452]: 2026-02-28 10:40:19.722 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453011184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.315 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.325 243456 DEBUG nova.compute.provider_tree [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.346 243456 DEBUG nova.scheduler.client.report [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.379 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.381 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.438 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.439 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.468 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.492 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.579 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.581 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.582 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating image(s)
Feb 28 10:40:20 compute-0 ceph-mon[76304]: pgmap v2279: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3453011184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.621 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.655 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.689 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.695 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.728 243456 DEBUG nova.policy [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.780 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.781 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.782 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.782 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.812 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:20 compute-0 nova_compute[243452]: 2026-02-28 10:40:20.816 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.073 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.160 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.253 243456 DEBUG nova.objects.instance [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.269 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.270 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Ensure instance console log exists: /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.270 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.271 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:21 compute-0 nova_compute[243452]: 2026-02-28 10:40:21.271 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:22 compute-0 nova_compute[243452]: 2026-02-28 10:40:22.085 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Successfully created port: d3feb971-63a7-4d54-8310-9c6d40c29637 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:40:22 compute-0 ceph-mon[76304]: pgmap v2280: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:40:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 717 KiB/s wr, 1 op/s
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.135 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Successfully updated port: d3feb971-63a7-4d54-8310-9c6d40c29637 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.157 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.273 243456 DEBUG nova.compute.manager [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.273 243456 DEBUG nova.compute.manager [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.274 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:23 compute-0 nova_compute[243452]: 2026-02-28 10:40:23.350 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:40:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.025 243456 DEBUG nova.network.neutron [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.048 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.049 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance network_info: |[{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.049 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.050 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.059 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start _get_guest_xml network_info=[{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.077 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.089 243456 WARNING nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.096 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.097 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.101 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.102 243456 DEBUG nova.virt.libvirt.host [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.103 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.103 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.104 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.105 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.105 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.106 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.106 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.107 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.107 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.108 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.108 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.109 243456 DEBUG nova.virt.hardware [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.114 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:24 compute-0 ceph-mon[76304]: pgmap v2281: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 717 KiB/s wr, 1 op/s
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.655 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363183282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.716 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.742 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:24 compute-0 nova_compute[243452]: 2026-02-28 10:40:24.747 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 10:40:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1618747359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.306 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.309 243456 DEBUG nova.virt.libvirt.vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:20Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.310 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.311 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.313 243456 DEBUG nova.objects.instance [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.332 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.333 243456 DEBUG nova.network.neutron [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.341 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <uuid>5e1a8d62-9ac1-417d-8194-58901bb4018e</uuid>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <name>instance-0000008d</name>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-806666479</nova:name>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:40:24</nova:creationTime>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <nova:port uuid="d3feb971-63a7-4d54-8310-9c6d40c29637">
Feb 28 10:40:25 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <system>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="serial">5e1a8d62-9ac1-417d-8194-58901bb4018e</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="uuid">5e1a8d62-9ac1-417d-8194-58901bb4018e</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </system>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <os>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </os>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <features>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </features>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5e1a8d62-9ac1-417d-8194-58901bb4018e_disk">
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config">
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:25 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7a:70:52"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <target dev="tapd3feb971-63"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/console.log" append="off"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <video>
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </video>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:40:25 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:40:25 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:40:25 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:40:25 compute-0 nova_compute[243452]: </domain>
Feb 28 10:40:25 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Preparing to wait for external event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.344 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.345 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.345 243456 DEBUG nova.virt.libvirt.vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:20Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.346 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.346 243456 DEBUG nova.network.os_vif_util [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.347 243456 DEBUG os_vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.348 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.348 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.349 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.352 243456 DEBUG oslo_concurrency.lockutils [req-756931c4-57af-4f80-b6b8-5537ce4120c2 req-ab9500cf-4fe8-4b02-8d96-728c80de1268 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.355 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3feb971-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.356 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3feb971-63, col_values=(('external_ids', {'iface-id': 'd3feb971-63a7-4d54-8310-9c6d40c29637', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:70:52', 'vm-uuid': '5e1a8d62-9ac1-417d-8194-58901bb4018e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:25 compute-0 NetworkManager[49805]: <info>  [1772275225.3599] manager: (tapd3feb971-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.363 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.369 243456 INFO os_vif [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63')
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.434 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.435 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.435 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:7a:70:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.437 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Using config drive
Feb 28 10:40:25 compute-0 nova_compute[243452]: 2026-02-28 10:40:25.477 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/363183282' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1618747359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:25 compute-0 sshd-session[369641]: Received disconnect from 103.217.144.161 port 40232:11: Bye Bye [preauth]
Feb 28 10:40:25 compute-0 sshd-session[369641]: Disconnected from authenticating user root 103.217.144.161 port 40232 [preauth]
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.063 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Creating config drive at /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.070 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_uic1oqr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.217 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_uic1oqr" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.258 243456 DEBUG nova.storage.rbd_utils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.265 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.424 243456 DEBUG oslo_concurrency.processutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config 5e1a8d62-9ac1-417d-8194-58901bb4018e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.426 243456 INFO nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deleting local config drive /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e/disk.config because it was imported into RBD.
Feb 28 10:40:26 compute-0 kernel: tapd3feb971-63: entered promiscuous mode
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.4923] manager: (tapd3feb971-63): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Feb 28 10:40:26 compute-0 ovn_controller[146846]: 2026-02-28T10:40:26Z|01471|binding|INFO|Claiming lport d3feb971-63a7-4d54-8310-9c6d40c29637 for this chassis.
Feb 28 10:40:26 compute-0 ovn_controller[146846]: 2026-02-28T10:40:26Z|01472|binding|INFO|d3feb971-63a7-4d54-8310-9c6d40c29637: Claiming fa:16:3e:7a:70:52 10.100.0.7
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.495 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.498 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.500 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.504 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.518 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:70:52 10.100.0.7'], port_security=['fa:16:3e:7a:70:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5e1a8d62-9ac1-417d-8194-58901bb4018e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44c3724e-fd4e-435a-91b1-2ee7cbaa561d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d3feb971-63a7-4d54-8310-9c6d40c29637) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.520 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d3feb971-63a7-4d54-8310-9c6d40c29637 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 bound to our chassis
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.522 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 10:40:26 compute-0 systemd-udevd[369779]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:40:26 compute-0 systemd-machined[209480]: New machine qemu-174-instance-0000008d.
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.534 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d06516be-ae27-4079-988e-a19bc25b62a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.536 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap11e06da5-b1 in ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:40:26 compute-0 ovn_controller[146846]: 2026-02-28T10:40:26Z|01473|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 ovn-installed in OVS
Feb 28 10:40:26 compute-0 ovn_controller[146846]: 2026-02-28T10:40:26Z|01474|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 up in Southbound
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.538 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap11e06da5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.538 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6dff3e6c-1abf-46cb-bf5d-98963833ef01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.540 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe245ca-c146-461d-b877-878746647f0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008d.
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.5515] device (tapd3feb971-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.5521] device (tapd3feb971-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.560 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[c74c47f2-76d9-4dd6-9005-0658b62f0711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.586 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[501bf585-f5fc-4bd9-be68-c3c8d2c0ea3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.624 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62737ff3-3581-4be4-8943-89bac49ca8d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ceph-mon[76304]: pgmap v2282: 305 pgs: 305 active+clean; 190 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e98e91ac-733f-4f7b-8b51-462693f2cb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.6328] manager: (tap11e06da5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.668 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[05d810b1-1e46-44f8-a2ff-239b4c08bfd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.672 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[945386ba-7224-4c85-9e2a-54d4fb604654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.7073] device (tap11e06da5-b0): carrier: link connected
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.713 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[afa2cbfd-618b-4a94-aa3d-d9fd3417a5f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.736 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[41e90ce2-9284-4858-890d-4e8d8618f389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369812, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.752 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0327682f-71cc-4516-9f0e-764a3ccbcec6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:1773'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673399, 'tstamp': 673399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369813, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.770 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3410bb-2542-4dea-9471-1fa64673026a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369814, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f394638b-45da-4b4b-b08c-6c795ec782e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.876 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2677d4-1829-4703-9c97-b41abc21bd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.879 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.881 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 NetworkManager[49805]: <info>  [1772275226.8825] manager: (tap11e06da5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Feb 28 10:40:26 compute-0 kernel: tap11e06da5-b0: entered promiscuous mode
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.884 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 ovn_controller[146846]: 2026-02-28T10:40:26Z|01475|binding|INFO|Releasing lport ea6114a2-28c6-4510-bbd6-16e4d9cb4f71 from this chassis (sb_readonly=0)
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.886 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b32851-e9ea-4422-be81-921ea8d8b94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.888 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/11e06da5-bfc5-4a1a-9148-ff3afccf9569.pid.haproxy
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:40:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:26.889 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'env', 'PROCESS_TAG=haproxy-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/11e06da5-bfc5-4a1a-9148-ff3afccf9569.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.977 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275226.9766095, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.977 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Started (Lifecycle Event)
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.995 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.997 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275226.9776947, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:26 compute-0 nova_compute[243452]: 2026-02-28 10:40:26.998 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Paused (Lifecycle Event)
Feb 28 10:40:27 compute-0 nova_compute[243452]: 2026-02-28 10:40:27.015 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:27 compute-0 nova_compute[243452]: 2026-02-28 10:40:27.018 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:40:27 compute-0 nova_compute[243452]: 2026-02-28 10:40:27.033 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:40:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:40:27 compute-0 podman[369888]: 2026-02-28 10:40:27.223110844 +0000 UTC m=+0.058061495 container create 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:40:27 compute-0 systemd[1]: Started libpod-conmon-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope.
Feb 28 10:40:27 compute-0 podman[369888]: 2026-02-28 10:40:27.188134324 +0000 UTC m=+0.023084835 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:40:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ac42405f247e62e492745590376a4dfe553da32d8a80cb4173cd7e7d5ce14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:27 compute-0 podman[369888]: 2026-02-28 10:40:27.31584062 +0000 UTC m=+0.150791051 container init 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:40:27 compute-0 podman[369888]: 2026-02-28 10:40:27.32008561 +0000 UTC m=+0.155036051 container start 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 28 10:40:27 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : New worker (369909) forked
Feb 28 10:40:27 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : Loading success.
Feb 28 10:40:28 compute-0 ceph-mon[76304]: pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:40:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:40:29
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Feb 28 10:40:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:40:29 compute-0 nova_compute[243452]: 2026-02-28 10:40:29.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:30 compute-0 nova_compute[243452]: 2026-02-28 10:40:30.359 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:40:30 compute-0 ceph-mon[76304]: pgmap v2284: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:40:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:40:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.855 243456 DEBUG nova.compute.manager [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.856 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.856 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.857 243456 DEBUG oslo_concurrency.lockutils [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.857 243456 DEBUG nova.compute.manager [req-599d0bcd-7516-4fa3-ac1e-06f00b6caca1 req-b46f6f54-9010-4494-92f2-eb9a6e5dfa03 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Processing event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.859 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.864 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275231.8640964, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.865 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Resumed (Lifecycle Event)
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.869 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.876 243456 INFO nova.virt.libvirt.driver [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance spawned successfully.
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.877 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.897 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.908 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.913 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.913 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.914 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.914 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.915 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.915 243456 DEBUG nova.virt.libvirt.driver [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.941 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.976 243456 INFO nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 11.40 seconds to spawn the instance on the hypervisor.
Feb 28 10:40:31 compute-0 nova_compute[243452]: 2026-02-28 10:40:31.977 243456 DEBUG nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:32 compute-0 nova_compute[243452]: 2026-02-28 10:40:32.067 243456 INFO nova.compute.manager [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 12.51 seconds to build instance.
Feb 28 10:40:32 compute-0 nova_compute[243452]: 2026-02-28 10:40:32.094 243456 DEBUG oslo_concurrency.lockutils [None req-13a2f316-9792-41f4-9355-ab5768eb16da ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:32 compute-0 ceph-mon[76304]: pgmap v2285: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 28 10:40:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 10:40:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.177 243456 DEBUG nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG oslo_concurrency.lockutils [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.178 243456 DEBUG nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.179 243456 WARNING nova.compute.manager [req-efd542c9-4902-44dd-92c9-baa691cfd563 req-0abb1550-76bc-47e3-b0dc-71ffcbdc49cb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.
Feb 28 10:40:34 compute-0 ceph-mon[76304]: pgmap v2286: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Feb 28 10:40:34 compute-0 nova_compute[243452]: 2026-02-28 10:40:34.701 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 967 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.499 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.499 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.517 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.662 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.663 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.674 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.674 243456 INFO nova.compute.claims [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:40:35 compute-0 nova_compute[243452]: 2026-02-28 10:40:35.825 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922049186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.388 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:36 compute-0 NetworkManager[49805]: <info>  [1772275236.4083] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Feb 28 10:40:36 compute-0 NetworkManager[49805]: <info>  [1772275236.4089] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.413 243456 DEBUG nova.compute.provider_tree [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.432 243456 DEBUG nova.scheduler.client.report [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.461 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.463 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:36 compute-0 ovn_controller[146846]: 2026-02-28T10:40:36Z|01476|binding|INFO|Releasing lport ea6114a2-28c6-4510-bbd6-16e4d9cb4f71 from this chassis (sb_readonly=0)
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.491 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.522 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.523 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.546 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.572 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:40:36 compute-0 ceph-mon[76304]: pgmap v2287: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 967 KiB/s rd, 1.1 MiB/s wr, 66 op/s
Feb 28 10:40:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3922049186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.688 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.690 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.691 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating image(s)
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.719 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.752 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.785 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.790 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.829 243456 DEBUG nova.policy [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG nova.compute.manager [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG nova.compute.manager [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.836 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.837 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:36 compute-0 nova_compute[243452]: 2026-02-28 10:40:36.837 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.050 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.051 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.051 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.052 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 434 KiB/s wr, 66 op/s
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.075 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.080 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b081668-1653-448a-957e-da1ead7ecd21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.351 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9b081668-1653-448a-957e-da1ead7ecd21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.439 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.538 243456 DEBUG nova.objects.instance [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.554 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.554 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Ensure instance console log exists: /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.555 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.555 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:37 compute-0 nova_compute[243452]: 2026-02-28 10:40:37.556 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:38 compute-0 nova_compute[243452]: 2026-02-28 10:40:38.249 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully created port: 5952cc57-b25c-40a2-b208-47e2104b88ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:40:38 compute-0 ceph-mon[76304]: pgmap v2288: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 434 KiB/s wr, 66 op/s
Feb 28 10:40:38 compute-0 nova_compute[243452]: 2026-02-28 10:40:38.726 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:38 compute-0 nova_compute[243452]: 2026-02-28 10:40:38.726 243456 DEBUG nova.network.neutron [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:38 compute-0 nova_compute[243452]: 2026-02-28 10:40:38.746 243456 DEBUG oslo_concurrency.lockutils [req-9c218cc3-c63b-47a7-986d-55c77670ff30 req-d3ebe85d-5322-4a39-9bf1-6df3085a95b3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 218 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Feb 28 10:40:39 compute-0 nova_compute[243452]: 2026-02-28 10:40:39.104 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully created port: 6914940c-920a-4dc2-982a-9ae63584aee2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:40:39 compute-0 nova_compute[243452]: 2026-02-28 10:40:39.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.340 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully updated port: 5952cc57-b25c-40a2-b208-47e2104b88ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.363 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG nova.compute.manager [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG nova.compute.manager [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.594 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.595 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:40 compute-0 ceph-mon[76304]: pgmap v2289: 305 pgs: 305 active+clean; 218 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Feb 28 10:40:40 compute-0 nova_compute[243452]: 2026-02-28 10:40:40.797 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.130 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Successfully updated port: 6914940c-920a-4dc2-982a-9ae63584aee2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.138 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.145 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.174 243456 DEBUG nova.network.neutron [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.190 243456 DEBUG oslo_concurrency.lockutils [req-502c62c2-6cdf-4348-9dec-0a8d940d9ab2 req-0dbb999b-4e78-47c0-bb83-fbed0fd44f65 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.191 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.191 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007084495598833457 of space, bias 1.0, pg target 0.21253486796500373 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939279828894733 of space, bias 1.0, pg target 0.748178394866842 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.298182577967931e-07 of space, bias 4.0, pg target 0.0008757819093561517 quantized to 16 (current 16)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:40:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.377 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.954 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.955 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:41 compute-0 nova_compute[243452]: 2026-02-28 10:40:41.981 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.060 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.060 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.069 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.070 243456 INFO nova.compute.claims [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.240 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.668 243456 DEBUG nova.compute.manager [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.669 243456 DEBUG nova.compute.manager [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-6914940c-920a-4dc2-982a-9ae63584aee2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.669 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:42 compute-0 ceph-mon[76304]: pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:40:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/269186968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.866 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.871 243456 DEBUG nova.compute.provider_tree [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.886 243456 DEBUG nova.scheduler.client.report [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.909 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.910 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.959 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.960 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:40:42 compute-0 nova_compute[243452]: 2026-02-28 10:40:42.985 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.003 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:40:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 99 op/s
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.100 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.101 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.102 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating image(s)
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.123 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.147 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.182 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.189 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.227 243456 DEBUG nova.network.neutron [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.231 243456 DEBUG nova.policy [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.260 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.261 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.262 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.263 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.292 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.296 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.325 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.326 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.327 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance network_info: |[{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.328 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.329 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 6914940c-920a-4dc2-982a-9ae63584aee2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.336 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start _get_guest_xml network_info=[{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.343 243456 WARNING nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.351 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.352 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.366 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.367 243456 DEBUG nova.virt.libvirt.host [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.368 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.368 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.369 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.369 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.370 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.371 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.371 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.372 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.372 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.373 243456 DEBUG nova.virt.hardware [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.377 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.513 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.610 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:40:43 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/269186968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.739 243456 DEBUG nova.objects.instance [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.759 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.760 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Ensure instance console log exists: /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.762 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.762 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:43 compute-0 nova_compute[243452]: 2026-02-28 10:40:43.763 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.990678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275243990756, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 250, "total_data_size": 2590305, "memory_usage": 2622600, "flush_reason": "Manual Compaction"}
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275243999144, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1504286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47402, "largest_seqno": 49016, "table_properties": {"data_size": 1498814, "index_size": 2612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14541, "raw_average_key_size": 20, "raw_value_size": 1486757, "raw_average_value_size": 2126, "num_data_blocks": 119, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275077, "oldest_key_time": 1772275077, "file_creation_time": 1772275243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 8545 microseconds, and 4369 cpu microseconds.
Feb 28 10:40:43 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.999240) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1504286 bytes OK
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:43.999282) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000616) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000635) EVENT_LOG_v1 {"time_micros": 1772275244000628, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.000661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2583286, prev total WAL file size 2583286, number of live WAL files 2.
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.001861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1469KB)], [110(9852KB)]
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244001929, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11593316, "oldest_snapshot_seqno": -1}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362230952' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7091 keys, 9281075 bytes, temperature: kUnknown
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244037967, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9281075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9235407, "index_size": 26868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183688, "raw_average_key_size": 25, "raw_value_size": 9110521, "raw_average_value_size": 1284, "num_data_blocks": 1053, "num_entries": 7091, "num_filter_entries": 7091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.038313) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9281075 bytes
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.040269) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 320.5 rd, 256.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(13.9) write-amplify(6.2) OK, records in: 7525, records dropped: 434 output_compression: NoCompression
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.040309) EVENT_LOG_v1 {"time_micros": 1772275244040297, "job": 66, "event": "compaction_finished", "compaction_time_micros": 36170, "compaction_time_cpu_micros": 18744, "output_level": 6, "num_output_files": 1, "total_output_size": 9281075, "num_input_records": 7525, "num_output_records": 7091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244040622, "job": 66, "event": "table_file_deletion", "file_number": 112}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275244042189, "job": 66, "event": "table_file_deletion", "file_number": 110}
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.001784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:40:44.042302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.051 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.072 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.076 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:44 compute-0 ovn_controller[146846]: 2026-02-28T10:40:44Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:70:52 10.100.0.7
Feb 28 10:40:44 compute-0 ovn_controller[146846]: 2026-02-28T10:40:44Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:70:52 10.100.0.7
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.252 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Successfully created port: fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:40:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:44 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647071877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.618 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.621 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.622 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.623 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.623 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.624 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.624 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.625 243456 DEBUG nova.objects.instance [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.705 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 ceph-mon[76304]: pgmap v2291: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 99 op/s
Feb 28 10:40:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3362230952' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1647071877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.727 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <uuid>9b081668-1653-448a-957e-da1ead7ecd21</uuid>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <name>instance-0000008e</name>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1426595857</nova:name>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:40:43</nova:creationTime>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:port uuid="5952cc57-b25c-40a2-b208-47e2104b88ad">
Feb 28 10:40:44 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <nova:port uuid="6914940c-920a-4dc2-982a-9ae63584aee2">
Feb 28 10:40:44 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fefb:c897" ipVersion="6"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <system>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="serial">9b081668-1653-448a-957e-da1ead7ecd21</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="uuid">9b081668-1653-448a-957e-da1ead7ecd21</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </system>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <os>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </os>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <features>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </features>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9b081668-1653-448a-957e-da1ead7ecd21_disk">
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9b081668-1653-448a-957e-da1ead7ecd21_disk.config">
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:44 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:49:9b:bb"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <target dev="tap5952cc57-b2"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:fb:c8:97"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <target dev="tap6914940c-92"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/console.log" append="off"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <video>
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </video>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:40:44 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:40:44 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:40:44 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:40:44 compute-0 nova_compute[243452]: </domain>
Feb 28 10:40:44 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Preparing to wait for external event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.728 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Preparing to wait for external event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.729 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.730 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.730 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.731 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.732 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.735 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.735 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5952cc57-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.736 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5952cc57-b2, col_values=(('external_ids', {'iface-id': '5952cc57-b25c-40a2-b208-47e2104b88ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:9b:bb', 'vm-uuid': '9b081668-1653-448a-957e-da1ead7ecd21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.737 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 NetworkManager[49805]: <info>  [1772275244.7383] manager: (tap5952cc57-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.747 243456 INFO os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2')
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.748 243456 DEBUG nova.virt.libvirt.vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:36Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.748 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.749 243456 DEBUG nova.network.os_vif_util [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.749 243456 DEBUG os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.750 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.753 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.754 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6914940c-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.754 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6914940c-92, col_values=(('external_ids', {'iface-id': '6914940c-920a-4dc2-982a-9ae63584aee2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:c8:97', 'vm-uuid': '9b081668-1653-448a-957e-da1ead7ecd21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:44 compute-0 NetworkManager[49805]: <info>  [1772275244.7567] manager: (tap6914940c-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.762 243456 INFO os_vif [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92')
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.832 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated VIF entry in instance network info cache for port 6914940c-920a-4dc2-982a-9ae63584aee2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.833 243456 DEBUG nova.network.neutron [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.838 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.838 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.839 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:49:9b:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.839 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:fb:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.840 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Using config drive
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.874 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:44 compute-0 nova_compute[243452]: 2026-02-28 10:40:44.886 243456 DEBUG oslo_concurrency.lockutils [req-26f75e43-46a5-4027-ae7c-d96730a9c1e8 req-455a3b70-3c8d-4a6c-9cdd-c1778ac2ee63 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 137 op/s
Feb 28 10:40:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:40:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:40:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:40:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:40:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:40:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3869735481' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.531 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Creating config drive at /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.535 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8wl40i0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.569 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Successfully updated port: fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.676 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8wl40i0o" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.705 243456 DEBUG nova.storage.rbd_utils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9b081668-1653-448a-957e-da1ead7ecd21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.709 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config 9b081668-1653-448a-957e-da1ead7ecd21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:46 compute-0 ceph-mon[76304]: pgmap v2292: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 137 op/s
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.746 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.747 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.747 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.858 243456 DEBUG oslo_concurrency.processutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config 9b081668-1653-448a-957e-da1ead7ecd21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.859 243456 INFO nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deleting local config drive /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21/disk.config because it was imported into RBD.
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.902 243456 DEBUG nova.compute.manager [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.903 243456 DEBUG nova.compute.manager [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.903 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:46 compute-0 kernel: tap5952cc57-b2: entered promiscuous mode
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9183] manager: (tap5952cc57-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01477|binding|INFO|Claiming lport 5952cc57-b25c-40a2-b208-47e2104b88ad for this chassis.
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01478|binding|INFO|5952cc57-b25c-40a2-b208-47e2104b88ad: Claiming fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01479|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad ovn-installed in OVS
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9388] manager: (tap6914940c-92): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.938 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:46 compute-0 kernel: tap6914940c-92: entered promiscuous mode
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01480|if_status|INFO|Dropped 6 log messages in last 81 seconds (most recently, 81 seconds ago) due to excessive rate
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01481|if_status|INFO|Not updating pb chassis for 6914940c-920a-4dc2-982a-9ae63584aee2 now as sb is readonly
Feb 28 10:40:46 compute-0 systemd-machined[209480]: New machine qemu-175-instance-0000008e.
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.969 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9b:bb 10.100.0.13'], port_security=['fa:16:3e:49:9b:bb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5952cc57-b25c-40a2-b208-47e2104b88ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:40:46 compute-0 systemd-udevd[370451]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.970 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5952cc57-b25c-40a2-b208-47e2104b88ad in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 bound to our chassis
Feb 28 10:40:46 compute-0 systemd-udevd[370452]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.972 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01482|binding|INFO|Claiming lport 6914940c-920a-4dc2-982a-9ae63584aee2 for this chassis.
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01483|binding|INFO|6914940c-920a-4dc2-982a-9ae63584aee2: Claiming fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01484|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad up in Southbound
Feb 28 10:40:46 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01485|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 ovn-installed in OVS
Feb 28 10:40:46 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Feb 28 10:40:46 compute-0 nova_compute[243452]: 2026-02-28 10:40:46.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9897] device (tap6914940c-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9902] device (tap5952cc57-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9905] device (tap6914940c-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:40:46 compute-0 NetworkManager[49805]: <info>  [1772275246.9909] device (tap5952cc57-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.990 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5f84d3-7b5c-4d37-adfb-11ddc6388f3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.991 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0ad6cbe-81 in ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.993 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0ad6cbe-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.993 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94311a69-63bf-4ed3-99f3-183abc0d5cd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:46 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.994 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf2e56d-a1d2-497d-809c-4a9c84d0132e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:46.999 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], port_security=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefb:c897/64', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6914940c-920a-4dc2-982a-9ae63584aee2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:40:47 compute-0 ovn_controller[146846]: 2026-02-28T10:40:46Z|01486|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 up in Southbound
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.008 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[822cf5d8-800c-4acc-8746-e2688f41077a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.023 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93eba847-1922-4558-bcbe-182ca82a48e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.030 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:40:47 compute-0 podman[370435]: 2026-02-28 10:40:47.046532863 +0000 UTC m=+0.089247538 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 28 10:40:47 compute-0 podman[370431]: 2026-02-28 10:40:47.073775404 +0000 UTC m=+0.118530117 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:40:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.182 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[143707ee-285a-40c0-b039-b6f3a148c404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.188 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2bad9b90-9e7a-42dd-b0c2-e2cb066f4874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 NetworkManager[49805]: <info>  [1772275247.1908] manager: (tapb0ad6cbe-80): new Veth device (/org/freedesktop/NetworkManager/Devices/619)
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.237 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0abf67ea-96c2-4372-bf68-5d80cf706466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.241 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bbc4f7-ad8e-466f-be0f-51efb83a392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 NetworkManager[49805]: <info>  [1772275247.2679] device (tapb0ad6cbe-80): carrier: link connected
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.278 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[26c93401-feb3-4137-99b0-50ed24ffb51d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.298 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69ce7c17-90b4-4418-8a0e-6ce17e312877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370514, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.314 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc350d06-caca-4b4c-87f5-9fb76cdaa9db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f67e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675455, 'tstamp': 675455}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370515, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.337 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b1306692-c8ce-4737-9ae3-afd86b248fb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370516, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.372 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f505da8d-1a07-4a7f-8955-a5527c470a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.449 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[02e45937-a405-4524-b626-1e026fa46e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.450 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.451 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.453 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:47 compute-0 NetworkManager[49805]: <info>  [1772275247.4936] manager: (tapb0ad6cbe-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Feb 28 10:40:47 compute-0 kernel: tapb0ad6cbe-80: entered promiscuous mode
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.499 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:47 compute-0 ovn_controller[146846]: 2026-02-28T10:40:47Z|01487|binding|INFO|Releasing lport 7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb from this chassis (sb_readonly=0)
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.507 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4c27ad-939d-4f74-976a-5da28027ac99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.510 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.pid.haproxy
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:40:47 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:47.510 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'env', 'PROCESS_TAG=haproxy-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.580 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275247.5801299, 9b081668-1653-448a-957e-da1ead7ecd21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.581 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Started (Lifecycle Event)
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.744 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.752 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275247.5808573, 9b081668-1653-448a-957e-da1ead7ecd21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.753 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Paused (Lifecycle Event)
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.801 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.806 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:40:47 compute-0 nova_compute[243452]: 2026-02-28 10:40:47.883 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:40:47 compute-0 podman[370591]: 2026-02-28 10:40:47.887472795 +0000 UTC m=+0.064674612 container create 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:40:47 compute-0 systemd[1]: Started libpod-conmon-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope.
Feb 28 10:40:47 compute-0 podman[370591]: 2026-02-28 10:40:47.859186934 +0000 UTC m=+0.036388811 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:40:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b49f6fee2a5624b0785481ede3d870b1d9e7cdf89c50a35e547fbbaa11922ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:47 compute-0 podman[370591]: 2026-02-28 10:40:47.988285809 +0000 UTC m=+0.165487686 container init 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 28 10:40:47 compute-0 podman[370591]: 2026-02-28 10:40:47.993143096 +0000 UTC m=+0.170344933 container start 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:40:48 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : New worker (370613) forked
Feb 28 10:40:48 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : Loading success.
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.042 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6914940c-920a-4dc2-982a-9ae63584aee2 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.044 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.051 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2220c127-ab23-4e04-bfee-d400c537a2be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.052 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcaa8646e-51 in ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.054 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcaa8646e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[61106858-6846-44ec-8c21-b1bc6d3f0745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.055 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[60ccf8a0-d8e3-4f67-b0ea-b0d9ac9f9fc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.065 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[959a8aa8-3acc-46f4-977b-d45749a07f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.088 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca92b2f8-7ee0-4953-9fb4-fd5a04eef275]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.115 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[072db84a-74b7-4bdd-b2f5-a7145c9e79e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 NetworkManager[49805]: <info>  [1772275248.1242] manager: (tapcaa8646e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebba587-31f4-473a-b218-a77ff43c84bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 systemd-udevd[370499]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.161 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[442d269b-4166-4a9b-ba0a-5cd1d3b445bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.165 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e441089c-44a2-4458-bd53-fa58e3d8a0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 NetworkManager[49805]: <info>  [1772275248.1873] device (tapcaa8646e-50): carrier: link connected
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.194 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3df221-1d52-4796-9b58-823f425c5c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.214 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bc79356c-d892-4c33-ba2c-f347b8c00990]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370632, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.236 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99c8e9c5-d927-49b8-8ca2-f75d20019d10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:7bce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675547, 'tstamp': 675547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370633, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.257 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c18a53ee-3af9-44ab-9b5c-7b657b3ebfa9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370634, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.297 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba729ab-76c7-429b-886b-b7f7cb1fdba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.345 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc8de9e-403c-4cbc-884b-bd088b5d2c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.348 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.349 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.350 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:48 compute-0 NetworkManager[49805]: <info>  [1772275248.3540] manager: (tapcaa8646e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Feb 28 10:40:48 compute-0 kernel: tapcaa8646e-50: entered promiscuous mode
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.357 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.359 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.361 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:48 compute-0 ovn_controller[146846]: 2026-02-28T10:40:48Z|01488|binding|INFO|Releasing lport e3227d18-ed73-459d-b1a1-aecd179beb21 from this chassis (sb_readonly=0)
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.373 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.375 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99aabbd0-d62b-4fd6-8e48-3561df0869f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.376 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/caa8646e-5c97-4eb8-add7-69ea9ee54379.pid.haproxy
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:40:48 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:48.377 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'env', 'PROCESS_TAG=haproxy-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/caa8646e-5c97-4eb8-add7-69ea9ee54379.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.741 243456 DEBUG nova.compute.manager [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.742 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.744 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.745 243456 DEBUG oslo_concurrency.lockutils [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:48 compute-0 nova_compute[243452]: 2026-02-28 10:40:48.746 243456 DEBUG nova.compute.manager [req-d045dcfb-f0ce-4dd9-8476-5f0b01d37ef0 req-15de751c-61c8-4cd1-982d-3464e141f5b2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Processing event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:40:48 compute-0 ceph-mon[76304]: pgmap v2293: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Feb 28 10:40:48 compute-0 podman[370664]: 2026-02-28 10:40:48.781605683 +0000 UTC m=+0.064454116 container create c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 28 10:40:48 compute-0 systemd[1]: Started libpod-conmon-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope.
Feb 28 10:40:48 compute-0 podman[370664]: 2026-02-28 10:40:48.747526578 +0000 UTC m=+0.030375071 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:40:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:40:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb10f203ca6c1c169e1b23413d123cb3b37254372fe81642560a63587f0c639/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:40:48 compute-0 podman[370664]: 2026-02-28 10:40:48.87192564 +0000 UTC m=+0.154774073 container init c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 10:40:48 compute-0 podman[370664]: 2026-02-28 10:40:48.880103082 +0000 UTC m=+0.162951505 container start c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:40:48 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : New worker (370686) forked
Feb 28 10:40:48 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : Loading success.
Feb 28 10:40:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.034 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.035 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Processing event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.036 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 DEBUG oslo_concurrency.lockutils [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 DEBUG nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.037 243456 WARNING nova.compute.manager [req-6a4e2798-f521-4e6a-9ef5-fca82b956291 req-d9d2b273-682e-4020-b84b-7673bc60bb01 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with vm_state building and task_state spawning.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.038 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.041 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275249.0412693, 9b081668-1653-448a-957e-da1ead7ecd21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.042 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Resumed (Lifecycle Event)
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.044 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.047 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance spawned successfully.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.048 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:40:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 143 op/s
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.083 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.089 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.094 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.094 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.095 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.095 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.096 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.096 243456 DEBUG nova.virt.libvirt.driver [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.108 243456 DEBUG nova.network.neutron [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.126 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.192 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.193 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance network_info: |[{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.193 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.194 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.198 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start _get_guest_xml network_info=[{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.203 243456 WARNING nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.210 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.210 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.215 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.libvirt.host [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.216 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.217 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.218 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.219 243456 DEBUG nova.virt.hardware [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.222 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.269 243456 INFO nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 12.58 seconds to spawn the instance on the hypervisor.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.270 243456 DEBUG nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.333 243456 INFO nova.compute.manager [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 13.70 seconds to build instance.
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.352 243456 DEBUG oslo_concurrency.lockutils [None req-ffbe21a2-91d7-455c-9735-08a9c3ea0c0e be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.755 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767841553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.812 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.850 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:49 compute-0 nova_compute[243452]: 2026-02-28 10:40:49.856 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.381 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updated VIF entry in instance network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.383 243456 DEBUG nova.network.neutron [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.413 243456 DEBUG oslo_concurrency.lockutils [req-98abd5fa-c27c-43f8-92fd-25714e905ad8 req-06bfc05c-5082-4263-bb20-14a80476ac97 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:40:50 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174489619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.455 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.458 243456 DEBUG nova.virt.libvirt.vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:43Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.459 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.461 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.464 243456 DEBUG nova.objects.instance [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.486 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <uuid>edd7bb04-60e7-4998-afe7-73fa36c25f5d</uuid>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <name>instance-0000008f</name>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1953289712</nova:name>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:40:49</nova:creationTime>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <nova:port uuid="fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565">
Feb 28 10:40:50 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <system>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="serial">edd7bb04-60e7-4998-afe7-73fa36c25f5d</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="uuid">edd7bb04-60e7-4998-afe7-73fa36c25f5d</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </system>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <os>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </os>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <features>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </features>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk">
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config">
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </source>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:40:50 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:11:97:49"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <target dev="tapfdcbaa6a-ae"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/console.log" append="off"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <video>
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </video>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:40:50 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:40:50 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:40:50 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:40:50 compute-0 nova_compute[243452]: </domain>
Feb 28 10:40:50 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.498 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Preparing to wait for external event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.499 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.499 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.500 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.501 243456 DEBUG nova.virt.libvirt.vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:40:43Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.503 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.504 243456 DEBUG nova.network.os_vif_util [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.505 243456 DEBUG os_vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.506 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.508 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.514 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdcbaa6a-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.516 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdcbaa6a-ae, col_values=(('external_ids', {'iface-id': 'fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:97:49', 'vm-uuid': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:50 compute-0 NetworkManager[49805]: <info>  [1772275250.5194] manager: (tapfdcbaa6a-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.526 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.528 243456 INFO os_vif [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae')
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.686 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.687 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.688 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:11:97:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.689 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Using config drive
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.726 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:50 compute-0 ceph-mon[76304]: pgmap v2294: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 143 op/s
Feb 28 10:40:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2767841553' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:50 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4174489619' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.953 243456 DEBUG nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.955 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.956 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.956 243456 DEBUG oslo_concurrency.lockutils [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.957 243456 DEBUG nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:50 compute-0 nova_compute[243452]: 2026-02-28 10:40:50.958 243456 WARNING nova.compute.manager [req-3b509401-7a2a-4404-849b-4d4f13715594 req-38727c35-b883-44db-b497-5514eca2f110 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with vm_state active and task_state None.
Feb 28 10:40:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 4.6 MiB/s wr, 119 op/s
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.223 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Creating config drive at /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.228 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp25gxqpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.377 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpp25gxqpo" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.409 243456 DEBUG nova.storage.rbd_utils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.414 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.550 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.552 243456 DEBUG oslo_concurrency.processutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config edd7bb04-60e7-4998-afe7-73fa36c25f5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.553 243456 INFO nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deleting local config drive /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d/disk.config because it was imported into RBD.
Feb 28 10:40:51 compute-0 NetworkManager[49805]: <info>  [1772275251.6080] manager: (tapfdcbaa6a-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Feb 28 10:40:51 compute-0 kernel: tapfdcbaa6a-ae: entered promiscuous mode
Feb 28 10:40:51 compute-0 ovn_controller[146846]: 2026-02-28T10:40:51Z|01489|binding|INFO|Claiming lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for this chassis.
Feb 28 10:40:51 compute-0 ovn_controller[146846]: 2026-02-28T10:40:51Z|01490|binding|INFO|fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565: Claiming fa:16:3e:11:97:49 10.100.0.4
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.619 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:51 compute-0 ovn_controller[146846]: 2026-02-28T10:40:51Z|01491|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 ovn-installed in OVS
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.625 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.629 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:51 compute-0 systemd-udevd[370828]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:40:51 compute-0 NetworkManager[49805]: <info>  [1772275251.6487] device (tapfdcbaa6a-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:40:51 compute-0 NetworkManager[49805]: <info>  [1772275251.6497] device (tapfdcbaa6a-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:40:51 compute-0 systemd-machined[209480]: New machine qemu-176-instance-0000008f.
Feb 28 10:40:51 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Feb 28 10:40:51 compute-0 ovn_controller[146846]: 2026-02-28T10:40:51Z|01492|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 up in Southbound
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.676 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:97:49 10.100.0.4'], port_security=['fa:16:3e:11:97:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9965c3d-ca45-4c8c-963d-46eedcf9cfa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.679 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 bound to our chassis
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.681 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.702 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[54a43c0c-ebda-4ff3-8f27-fad022d91308]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.732 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[86dc68ed-b919-420a-9b3a-89639eae9726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[34253bd2-b2d9-47f4-9168-41498efe7dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.772 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[62754743-3066-4fd6-be9d-74dd6bcefc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.796 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f85ff6-9ad6-4da3-bec2-7ffb813aa235]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370845, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.813 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1637f329-54a8-4c9f-8a45-f8bf5f56cca1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673413, 'tstamp': 673413}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370846, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673415, 'tstamp': 673415}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370846, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.819 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.820 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:51.821 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.986 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275251.9852517, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:51 compute-0 nova_compute[243452]: 2026-02-28 10:40:51.987 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Started (Lifecycle Event)
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.044 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.050 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275251.9866974, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.051 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Paused (Lifecycle Event)
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.072 243456 DEBUG nova.compute.manager [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.073 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.074 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.074 243456 DEBUG oslo_concurrency.lockutils [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.075 243456 DEBUG nova.compute.manager [req-12ddcfe9-2e70-4b3e-8acb-2867d38b1e34 req-7cce9769-4d8d-4c3d-9351-94f80f61e89d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Processing event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.076 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.084 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.090 243456 INFO nova.virt.libvirt.driver [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance spawned successfully.
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.090 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.096 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.100 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275252.0794888, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.101 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Resumed (Lifecycle Event)
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.285 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.294 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.301 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.301 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.302 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.303 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.303 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.304 243456 DEBUG nova.virt.libvirt.driver [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.637 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:40:52 compute-0 ceph-mon[76304]: pgmap v2295: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 4.6 MiB/s wr, 119 op/s
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.782 243456 INFO nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 9.68 seconds to spawn the instance on the hypervisor.
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.783 243456 DEBUG nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:40:52 compute-0 nova_compute[243452]: 2026-02-28 10:40:52.915 243456 INFO nova.compute.manager [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 10.88 seconds to build instance.
Feb 28 10:40:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.098 243456 DEBUG oslo_concurrency.lockutils [None req-0e31de9e-fe5d-4dbd-a9a6-ba598e19dbda ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.464 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.464 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.465 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.724 243456 DEBUG nova.compute.manager [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.724 243456 DEBUG nova.compute.manager [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:53 compute-0 nova_compute[243452]: 2026-02-28 10:40:53.725 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670201687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.002 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.116 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.117 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.121 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.121 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.125 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.126 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.152 243456 DEBUG nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG oslo_concurrency.lockutils [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.153 243456 DEBUG nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.154 243456 WARNING nova.compute.manager [req-c7201c40-80ff-460e-bbc7-08b3a131c527 req-2111e595-bbb4-4570-af6c-af83d10b32e2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state active and task_state None.
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.315 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.316 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3140MB free_disk=59.90025749709457GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 5e1a8d62-9ac1-417d-8194-58901bb4018e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9b081668-1653-448a-957e-da1ead7ecd21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.401 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance edd7bb04-60e7-4998-afe7-73fa36c25f5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.402 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.491 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:40:54 compute-0 nova_compute[243452]: 2026-02-28 10:40:54.712 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:54 compute-0 ceph-mon[76304]: pgmap v2296: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Feb 28 10:40:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/670201687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:40:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374508833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.040 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.051 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.072 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:40:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.100 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.101 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.281 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated VIF entry in instance network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.283 243456 DEBUG nova.network.neutron [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.318 243456 DEBUG oslo_concurrency.lockutils [req-7408730a-ff05-4bc3-a262-1032457d528b req-57acfc0b-df9c-4af4-9d7a-17b0c73555d2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:40:55 compute-0 nova_compute[243452]: 2026-02-28 10:40:55.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:55 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/374508833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:40:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:56.263 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:40:56 compute-0 nova_compute[243452]: 2026-02-28 10:40:56.293 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:40:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:56.294 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:40:56 compute-0 ceph-mon[76304]: pgmap v2297: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 28 10:40:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 163 op/s
Feb 28 10:40:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.879 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:40:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:40:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:40:58 compute-0 nova_compute[243452]: 2026-02-28 10:40:58.699 243456 DEBUG nova.compute.manager [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:40:58 compute-0 nova_compute[243452]: 2026-02-28 10:40:58.701 243456 DEBUG nova.compute.manager [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:40:58 compute-0 nova_compute[243452]: 2026-02-28 10:40:58.701 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:40:58 compute-0 nova_compute[243452]: 2026-02-28 10:40:58.702 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:40:58 compute-0 nova_compute[243452]: 2026-02-28 10:40:58.702 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:40:58 compute-0 ceph-mon[76304]: pgmap v2298: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 163 op/s
Feb 28 10:40:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:40:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 155 op/s
Feb 28 10:40:59 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:40:59.298 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:40:59 compute-0 nova_compute[243452]: 2026-02-28 10:40:59.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:00 compute-0 nova_compute[243452]: 2026-02-28 10:41:00.048 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updated VIF entry in instance network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:00 compute-0 nova_compute[243452]: 2026-02-28 10:41:00.048 243456 DEBUG nova.network.neutron [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [{"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:00 compute-0 nova_compute[243452]: 2026-02-28 10:41:00.068 243456 DEBUG oslo_concurrency.lockutils [req-23b99f19-2bef-43ed-b9b4-4c56496f8690 req-f947901f-88d1-4947-868d-c5681f0c0039 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:00 compute-0 nova_compute[243452]: 2026-02-28 10:41:00.521 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:00 compute-0 ceph-mon[76304]: pgmap v2299: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 155 op/s
Feb 28 10:41:00 compute-0 ovn_controller[146846]: 2026-02-28T10:41:00Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 10:41:00 compute-0 ovn_controller[146846]: 2026-02-28T10:41:00Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:9b:bb 10.100.0.13
Feb 28 10:41:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 257 KiB/s wr, 147 op/s
Feb 28 10:41:01 compute-0 ceph-mon[76304]: pgmap v2300: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 257 KiB/s wr, 147 op/s
Feb 28 10:41:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Feb 28 10:41:03 compute-0 ovn_controller[146846]: 2026-02-28T10:41:03Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:97:49 10.100.0.4
Feb 28 10:41:03 compute-0 ovn_controller[146846]: 2026-02-28T10:41:03Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:97:49 10.100.0.4
Feb 28 10:41:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:04 compute-0 ceph-mon[76304]: pgmap v2301: 305 pgs: 305 active+clean; 345 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Feb 28 10:41:04 compute-0 nova_compute[243452]: 2026-02-28 10:41:04.719 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 10:41:05 compute-0 nova_compute[243452]: 2026-02-28 10:41:05.523 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:06 compute-0 ceph-mon[76304]: pgmap v2302: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 199 op/s
Feb 28 10:41:06 compute-0 sudo[370935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:41:06 compute-0 sudo[370935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:06 compute-0 sudo[370935]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:06 compute-0 sudo[370960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 10:41:06 compute-0 sudo[370960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:06 compute-0 podman[371028]: 2026-02-28 10:41:06.865198925 +0000 UTC m=+0.071643190 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:41:07 compute-0 podman[371028]: 2026-02-28 10:41:07.027225693 +0000 UTC m=+0.233669878 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:41:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 390 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Feb 28 10:41:07 compute-0 sudo[370960]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:41:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:41:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:07 compute-0 sudo[371217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:41:07 compute-0 sudo[371217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:07 compute-0 sudo[371217]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:08 compute-0 sudo[371242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:41:08 compute-0 sudo[371242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: pgmap v2303: 305 pgs: 305 active+clean; 390 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Feb 28 10:41:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:08 compute-0 sudo[371242]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:41:08 compute-0 sudo[371299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:41:08 compute-0 sudo[371299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:08 compute-0 sudo[371299]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:08 compute-0 sudo[371324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:41:08 compute-0 sudo[371324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.088060457 +0000 UTC m=+0.066651568 container create d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:41:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.052643044 +0000 UTC m=+0.031234225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:09 compute-0 systemd[1]: Started libpod-conmon-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope.
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:41:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:41:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.201506379 +0000 UTC m=+0.180097500 container init d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.208834937 +0000 UTC m=+0.187426058 container start d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.212927093 +0000 UTC m=+0.191518184 container attach d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:41:09 compute-0 systemd[1]: libpod-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope: Deactivated successfully.
Feb 28 10:41:09 compute-0 jolly_jones[371378]: 167 167
Feb 28 10:41:09 compute-0 conmon[371378]: conmon d3954b8d1bae0910f412 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope/container/memory.events
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.217441551 +0000 UTC m=+0.196032672 container died d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b1fb3273cd653a81ea424462aa73b0a2aa3505ede94f3ce2b98b09957552b2-merged.mount: Deactivated successfully.
Feb 28 10:41:09 compute-0 podman[371362]: 2026-02-28 10:41:09.266306714 +0000 UTC m=+0.244897825 container remove d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:41:09 compute-0 systemd[1]: libpod-conmon-d3954b8d1bae0910f412e626b1dc43a58754114fdb180687b48aa694ccbae0e0.scope: Deactivated successfully.
Feb 28 10:41:09 compute-0 podman[371401]: 2026-02-28 10:41:09.47525565 +0000 UTC m=+0.061231684 container create 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:41:09 compute-0 systemd[1]: Started libpod-conmon-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope.
Feb 28 10:41:09 compute-0 podman[371401]: 2026-02-28 10:41:09.448995526 +0000 UTC m=+0.034971560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:09 compute-0 podman[371401]: 2026-02-28 10:41:09.588207098 +0000 UTC m=+0.174183172 container init 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:41:09 compute-0 podman[371401]: 2026-02-28 10:41:09.598210502 +0000 UTC m=+0.184186506 container start 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:41:09 compute-0 podman[371401]: 2026-02-28 10:41:09.602249316 +0000 UTC m=+0.188225390 container attach 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:41:09 compute-0 nova_compute[243452]: 2026-02-28 10:41:09.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:10 compute-0 interesting_hypatia[371418]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:41:10 compute-0 interesting_hypatia[371418]: --> All data devices are unavailable
Feb 28 10:41:10 compute-0 systemd[1]: libpod-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope: Deactivated successfully.
Feb 28 10:41:10 compute-0 podman[371401]: 2026-02-28 10:41:10.129616499 +0000 UTC m=+0.715592543 container died 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2762a65e31663c609c771b2d3a49328d1ba61b8e80c584b228fe0bd0b0496f26-merged.mount: Deactivated successfully.
Feb 28 10:41:10 compute-0 podman[371401]: 2026-02-28 10:41:10.177839355 +0000 UTC m=+0.763815339 container remove 135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:41:10 compute-0 systemd[1]: libpod-conmon-135ba6235386fce0bc5564b3850b758e5a11caf22a3b3562dbb87e1e6fecd786.scope: Deactivated successfully.
Feb 28 10:41:10 compute-0 sudo[371324]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:10 compute-0 sudo[371450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:41:10 compute-0 sudo[371450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:10 compute-0 sudo[371450]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:10 compute-0 sudo[371475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:41:10 compute-0 sudo[371475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:10 compute-0 nova_compute[243452]: 2026-02-28 10:41:10.525 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:10 compute-0 ceph-mon[76304]: pgmap v2304: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.671455362 +0000 UTC m=+0.048754962 container create 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:41:10 compute-0 systemd[1]: Started libpod-conmon-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope.
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.645710093 +0000 UTC m=+0.023009743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:10 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.774670115 +0000 UTC m=+0.151969745 container init 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.784455262 +0000 UTC m=+0.161754892 container start 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.789017201 +0000 UTC m=+0.166316831 container attach 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:41:10 compute-0 modest_zhukovsky[371529]: 167 167
Feb 28 10:41:10 compute-0 systemd[1]: libpod-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope: Deactivated successfully.
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.790593715 +0000 UTC m=+0.167893345 container died 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b532511749929baa8358acc45e4428316161154651ac61cad504c1725f5b29a7-merged.mount: Deactivated successfully.
Feb 28 10:41:10 compute-0 podman[371512]: 2026-02-28 10:41:10.838293936 +0000 UTC m=+0.215593566 container remove 9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:41:10 compute-0 systemd[1]: libpod-conmon-9623db0b9105f546074939178d208554af08c9c473b8a0449f380cf633ea8136.scope: Deactivated successfully.
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.029382787 +0000 UTC m=+0.042343530 container create dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 28 10:41:11 compute-0 systemd[1]: Started libpod-conmon-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope.
Feb 28 10:41:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Feb 28 10:41:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.010457351 +0000 UTC m=+0.023418094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.136340876 +0000 UTC m=+0.149301689 container init dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.146435541 +0000 UTC m=+0.159396284 container start dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.150698392 +0000 UTC m=+0.163659165 container attach dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]: {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     "0": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "devices": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "/dev/loop3"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             ],
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_name": "ceph_lv0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_size": "21470642176",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "name": "ceph_lv0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "tags": {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_name": "ceph",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.crush_device_class": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.encrypted": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.objectstore": "bluestore",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_id": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.vdo": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.with_tpm": "0"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             },
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "vg_name": "ceph_vg0"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         }
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     ],
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     "1": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "devices": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "/dev/loop4"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             ],
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_name": "ceph_lv1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_size": "21470642176",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "name": "ceph_lv1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "tags": {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_name": "ceph",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.crush_device_class": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.encrypted": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.objectstore": "bluestore",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_id": "1",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.vdo": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.with_tpm": "0"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             },
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "vg_name": "ceph_vg1"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         }
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     ],
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     "2": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "devices": [
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "/dev/loop5"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             ],
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_name": "ceph_lv2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_size": "21470642176",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "name": "ceph_lv2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "tags": {
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.cluster_name": "ceph",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.crush_device_class": "",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.encrypted": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.objectstore": "bluestore",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osd_id": "2",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.vdo": "0",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:                 "ceph.with_tpm": "0"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             },
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "type": "block",
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:             "vg_name": "ceph_vg2"
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:         }
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]:     ]
Feb 28 10:41:11 compute-0 wizardly_goldberg[371570]: }
Feb 28 10:41:11 compute-0 systemd[1]: libpod-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope: Deactivated successfully.
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.433246953 +0000 UTC m=+0.446207746 container died dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed8f9c83e25499c1d517154ada15b3b9cf06b68c98a34786d5b6fcf8998a50c5-merged.mount: Deactivated successfully.
Feb 28 10:41:11 compute-0 podman[371553]: 2026-02-28 10:41:11.487867409 +0000 UTC m=+0.500828182 container remove dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:41:11 compute-0 systemd[1]: libpod-conmon-dfdf05cde32835f405fe6032d5c51f02c189647cb2a5d4d3ffbf9e965d0cce15.scope: Deactivated successfully.
Feb 28 10:41:11 compute-0 sudo[371475]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:11 compute-0 sudo[371591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:41:11 compute-0 sudo[371591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:11 compute-0 sudo[371591]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:11 compute-0 sudo[371616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:41:11 compute-0 sudo[371616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:11 compute-0 podman[371654]: 2026-02-28 10:41:11.93782288 +0000 UTC m=+0.041624019 container create 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:41:11 compute-0 systemd[1]: Started libpod-conmon-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope.
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:11.920978413 +0000 UTC m=+0.024779582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:12.038273385 +0000 UTC m=+0.142074544 container init 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:12.047681031 +0000 UTC m=+0.151482160 container start 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:12.051981163 +0000 UTC m=+0.155782472 container attach 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 10:41:12 compute-0 charming_hamilton[371670]: 167 167
Feb 28 10:41:12 compute-0 systemd[1]: libpod-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope: Deactivated successfully.
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:12.054954697 +0000 UTC m=+0.158755826 container died 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0153440a59ffc30239947511fc5ef40dcadad74239ff267f02cf0d67e8aae6-merged.mount: Deactivated successfully.
Feb 28 10:41:12 compute-0 podman[371654]: 2026-02-28 10:41:12.094676102 +0000 UTC m=+0.198477271 container remove 3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:41:12 compute-0 systemd[1]: libpod-conmon-3ee62b7e511552750e5c8b3d29d89c46a8da4ef56aec6b3486d4f560ca501ee5.scope: Deactivated successfully.
Feb 28 10:41:12 compute-0 podman[371694]: 2026-02-28 10:41:12.285375242 +0000 UTC m=+0.044811160 container create 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:41:12 compute-0 systemd[1]: Started libpod-conmon-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope.
Feb 28 10:41:12 compute-0 podman[371694]: 2026-02-28 10:41:12.26272867 +0000 UTC m=+0.022164348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:41:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:12 compute-0 podman[371694]: 2026-02-28 10:41:12.392199507 +0000 UTC m=+0.151635255 container init 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:41:12 compute-0 podman[371694]: 2026-02-28 10:41:12.4039761 +0000 UTC m=+0.163411788 container start 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 10:41:12 compute-0 podman[371694]: 2026-02-28 10:41:12.409773814 +0000 UTC m=+0.169209572 container attach 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.414 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.416 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.433 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.509 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.509 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.520 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.520 243456 INFO nova.compute.claims [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:41:12 compute-0 ceph-mon[76304]: pgmap v2305: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Feb 28 10:41:12 compute-0 nova_compute[243452]: 2026-02-28 10:41:12.698 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.1 MiB/s wr, 127 op/s
Feb 28 10:41:13 compute-0 lvm[371811]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:41:13 compute-0 lvm[371811]: VG ceph_vg1 finished
Feb 28 10:41:13 compute-0 lvm[371810]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:41:13 compute-0 lvm[371810]: VG ceph_vg0 finished
Feb 28 10:41:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2193481640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:13 compute-0 lvm[371813]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:41:13 compute-0 lvm[371813]: VG ceph_vg2 finished
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.251 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.262 243456 DEBUG nova.compute.provider_tree [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.285 243456 DEBUG nova.scheduler.client.report [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.309 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.310 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:41:13 compute-0 dreamy_greider[371711]: {}
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.354 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.355 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:41:13 compute-0 systemd[1]: libpod-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Deactivated successfully.
Feb 28 10:41:13 compute-0 systemd[1]: libpod-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Consumed 1.337s CPU time.
Feb 28 10:41:13 compute-0 podman[371694]: 2026-02-28 10:41:13.374887261 +0000 UTC m=+1.134322949 container died 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.375 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.390 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cea9687415f431f5e3ad234a28224019221870349d9494980308195a8f09ada-merged.mount: Deactivated successfully.
Feb 28 10:41:13 compute-0 podman[371694]: 2026-02-28 10:41:13.432262796 +0000 UTC m=+1.191698494 container remove 7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:41:13 compute-0 systemd[1]: libpod-conmon-7e3f2f9288585d28a179f7be9b04e4176f76f21cf8be6b4e5aa0fbc46df29f39.scope: Deactivated successfully.
Feb 28 10:41:13 compute-0 sudo[371616]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.479 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.480 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.481 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating image(s)
Feb 28 10:41:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:41:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:41:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.521 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.551 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.587 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:13 compute-0 sudo[371847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.595 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:13 compute-0 sudo[371847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:41:13 compute-0 sudo[371847]: pam_unix(sudo:session): session closed for user root
Feb 28 10:41:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2193481640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.649 243456 DEBUG nova.policy [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.710 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.711 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.712 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.713 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.743 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.748 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:13 compute-0 nova_compute[243452]: 2026-02-28 10:41:13.993 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.064 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.143 243456 DEBUG nova.objects.instance [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.159 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.159 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Ensure instance console log exists: /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.160 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.374 243456 INFO nova.compute.manager [None req-03318e18-c967-45dc-bf0c-ba2092f5c42b ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.384 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully created port: b06d794d-14a7-4fd8-bd85-2e37bff5e446 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.384 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:41:14 compute-0 ceph-mon[76304]: pgmap v2306: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.1 MiB/s wr, 127 op/s
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.746 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:14 compute-0 nova_compute[243452]: 2026-02-28 10:41:14.954 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully created port: f107fdf1-b771-447b-adb6-bd9bc1e35b53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:41:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 401 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 3.4 MiB/s wr, 110 op/s
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.279 243456 DEBUG nova.compute.manager [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.279 243456 DEBUG nova.compute.manager [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.280 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.527 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:15 compute-0 nova_compute[243452]: 2026-02-28 10:41:15.770 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully updated port: b06d794d-14a7-4fd8-bd85-2e37bff5e446 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.345 243456 INFO nova.compute.manager [None req-755334bc-63bd-4ce8-8fdf-63c60a5ee753 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.351 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.409 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Successfully updated port: f107fdf1-b771-447b-adb6-bd9bc1e35b53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.427 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.428 243456 DEBUG nova.network.neutron [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.431 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.448 243456 DEBUG oslo_concurrency.lockutils [req-548c264f-1694-4372-b9ab-a404a48fb9a5 req-55887fca-35a6-4201-950c-18855c9675e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:16 compute-0 nova_compute[243452]: 2026-02-28 10:41:16.615 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:41:16 compute-0 ceph-mon[76304]: pgmap v2307: 305 pgs: 305 active+clean; 401 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 3.4 MiB/s wr, 110 op/s
Feb 28 10:41:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 421 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.370 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.370 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.371 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.371 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 WARNING nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.372 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.373 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 WARNING nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.374 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:17 compute-0 nova_compute[243452]: 2026-02-28 10:41:17.375 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.075 243456 DEBUG nova.network.neutron [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.087 243456 INFO nova.compute.manager [None req-ff45accb-2c0b-40dd-a2cf-3a52bea86823 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Get console output
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.100 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.101 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance network_info: |[{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.097 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.103 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.103 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.112 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start _get_guest_xml network_info=[{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.126 243456 WARNING nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.143 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.144 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.149 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.150 243456 DEBUG nova.virt.libvirt.host [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.150 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.151 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.152 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.153 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.153 243456 DEBUG nova.virt.hardware [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.155 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:18 compute-0 podman[372024]: 2026-02-28 10:41:18.177460521 +0000 UTC m=+0.099908780 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 28 10:41:18 compute-0 podman[372023]: 2026-02-28 10:41:18.186044524 +0000 UTC m=+0.109449700 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 28 10:41:18 compute-0 ceph-mon[76304]: pgmap v2308: 305 pgs: 305 active+clean; 421 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Feb 28 10:41:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:41:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372369374' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.732 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.772 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:18 compute-0 nova_compute[243452]: 2026-02-28 10:41:18.778 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.039 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.040 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.040 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.041 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.041 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.043 243456 INFO nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Terminating instance
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.045 243456 DEBUG nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:41:19 compute-0 kernel: tapfdcbaa6a-ae (unregistering): left promiscuous mode
Feb 28 10:41:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Feb 28 10:41:19 compute-0 NetworkManager[49805]: <info>  [1772275279.1003] device (tapfdcbaa6a-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.125 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 ovn_controller[146846]: 2026-02-28T10:41:19Z|01493|binding|INFO|Releasing lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 from this chassis (sb_readonly=0)
Feb 28 10:41:19 compute-0 ovn_controller[146846]: 2026-02-28T10:41:19Z|01494|binding|INFO|Setting lport fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 down in Southbound
Feb 28 10:41:19 compute-0 ovn_controller[146846]: 2026-02-28T10:41:19Z|01495|binding|INFO|Removing iface tapfdcbaa6a-ae ovn-installed in OVS
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.128 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.138 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:97:49 10.100.0.4'], port_security=['fa:16:3e:11:97:49 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'edd7bb04-60e7-4998-afe7-73fa36c25f5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9965c3d-ca45-4c8c-963d-46eedcf9cfa8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.142 156681 INFO neutron.agent.ovn.metadata.agent [-] Port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 unbound from our chassis
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.146 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569
Feb 28 10:41:19 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Feb 28 10:41:19 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 12.512s CPU time.
Feb 28 10:41:19 compute-0 systemd-machined[209480]: Machine qemu-176-instance-0000008f terminated.
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.166 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63eeb7b9-8fff-4156-a983-c7aa74c3fa3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.199 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[e23a6084-41d6-4986-a207-ce13b58ccbd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.203 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9bb8a1-7417-4d76-9ba1-3df3c8bc3ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.236 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf2acc5-bcac-473c-b699-53c7382c362a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.264 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f11b8638-386b-43fa-a0c3-2abc85d00053]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap11e06da5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:17:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673399, 'reachable_time': 31055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372141, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.285 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[813e15dc-b943-4fb8-82b1-e8ffaed3c746]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673413, 'tstamp': 673413}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372145, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap11e06da5-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673415, 'tstamp': 673415}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372145, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.288 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.289 243456 INFO nova.virt.libvirt.driver [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance destroyed successfully.
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.291 243456 DEBUG nova.objects.instance [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid edd7bb04-60e7-4998-afe7-73fa36c25f5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.307 243456 DEBUG nova.virt.libvirt.vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1953289712',display_name='tempest-TestNetworkBasicOps-server-1953289712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1953289712',id=143,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPUh8LuUCOXWWDa6D8UF8euvZqjOZJ54pO6ZFrTQ6XYCJT4oCgqwtyXIMREsD2M2EJrCVoUUgoERLi0Re4iBqJPnVad9u2jkDO9MLRXngT9Ks7jWQAvBhSsQWV2xvZyyw==',key_name='tempest-TestNetworkBasicOps-1629892919',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-plwthb3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:52Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=edd7bb04-60e7-4998-afe7-73fa36c25f5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.308 243456 DEBUG nova.network.os_vif_util [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "address": "fa:16:3e:11:97:49", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcbaa6a-ae", "ovs_interfaceid": "fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.310 243456 DEBUG nova.network.os_vif_util [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.311 243456 DEBUG os_vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.315 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdcbaa6a-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.340 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11e06da5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.343 243456 INFO os_vif [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:97:49,bridge_name='br-int',has_traffic_filtering=True,id=fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcbaa6a-ae')
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.343 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.344 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap11e06da5-b0, col_values=(('external_ids', {'iface-id': 'ea6114a2-28c6-4510-bbd6-16e4d9cb4f71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:19.345 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:41:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1800979560' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.389 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.391 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.391 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.392 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.394 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.395 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.396 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.398 243456 DEBUG nova.objects.instance [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.416 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <uuid>47d618f6-612e-4944-8a4d-a3509d6e3d35</uuid>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <name>instance-00000090</name>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-740899339</nova:name>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:41:18</nova:creationTime>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:port uuid="b06d794d-14a7-4fd8-bd85-2e37bff5e446">
Feb 28 10:41:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <nova:port uuid="f107fdf1-b771-447b-adb6-bd9bc1e35b53">
Feb 28 10:41:19 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe01:f09a" ipVersion="6"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <system>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="serial">47d618f6-612e-4944-8a4d-a3509d6e3d35</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="uuid">47d618f6-612e-4944-8a4d-a3509d6e3d35</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </system>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <os>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </os>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <features>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </features>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/47d618f6-612e-4944-8a4d-a3509d6e3d35_disk">
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config">
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </source>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:41:19 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:02:0a:a1"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <target dev="tapb06d794d-14"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:01:f0:9a"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <target dev="tapf107fdf1-b7"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/console.log" append="off"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <video>
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </video>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:41:19 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:41:19 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:41:19 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:41:19 compute-0 nova_compute[243452]: </domain>
Feb 28 10:41:19 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.417 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Preparing to wait for external event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.417 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.418 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.418 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Preparing to wait for external event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.419 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.420 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.421 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.421 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.422 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.423 243456 DEBUG os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.424 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.425 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.429 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.430 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb06d794d-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.431 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb06d794d-14, col_values=(('external_ids', {'iface-id': 'b06d794d-14a7-4fd8-bd85-2e37bff5e446', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:0a:a1', 'vm-uuid': '47d618f6-612e-4944-8a4d-a3509d6e3d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 NetworkManager[49805]: <info>  [1772275279.4340] manager: (tapb06d794d-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.443 243456 INFO os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14')
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.444 243456 DEBUG nova.virt.libvirt.vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:13Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.445 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.446 243456 DEBUG nova.network.os_vif_util [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.446 243456 DEBUG os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.447 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.452 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf107fdf1-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.453 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf107fdf1-b7, col_values=(('external_ids', {'iface-id': 'f107fdf1-b771-447b-adb6-bd9bc1e35b53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:f0:9a', 'vm-uuid': '47d618f6-612e-4944-8a4d-a3509d6e3d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.455 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 NetworkManager[49805]: <info>  [1772275279.4561] manager: (tapf107fdf1-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.458 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.462 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.464 243456 INFO os_vif [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7')
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.468 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.469 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.530 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.530 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.531 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:02:0a:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.531 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:01:f0:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.532 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Using config drive
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.562 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.599 243456 INFO nova.virt.libvirt.driver [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deleting instance files /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d_del
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.600 243456 INFO nova.virt.libvirt.driver [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deletion of /var/lib/nova/instances/edd7bb04-60e7-4998-afe7-73fa36c25f5d_del complete
Feb 28 10:41:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1372369374' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1800979560' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.655 243456 INFO nova.compute.manager [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG oslo.service.loopingcall [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.656 243456 DEBUG nova.network.neutron [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.748 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.925 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.925 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.943 243456 DEBUG nova.compute.manager [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-f107fdf1-b771-447b-adb6-bd9bc1e35b53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:19 compute-0 nova_compute[243452]: 2026-02-28 10:41:19.944 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port f107fdf1-b771-447b-adb6-bd9bc1e35b53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.011 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Creating config drive at /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.015 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1webwknp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.158 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1webwknp" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.190 243456 DEBUG nova.storage.rbd_utils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.195 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.368 243456 DEBUG oslo_concurrency.processutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config 47d618f6-612e-4944-8a4d-a3509d6e3d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.369 243456 INFO nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deleting local config drive /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35/disk.config because it was imported into RBD.
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.372 243456 DEBUG nova.network.neutron [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.399 243456 INFO nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Took 0.74 seconds to deallocate network for instance.
Feb 28 10:41:20 compute-0 kernel: tapb06d794d-14: entered promiscuous mode
Feb 28 10:41:20 compute-0 systemd-udevd[372133]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.4171] manager: (tapb06d794d-14): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.4728] device (tapb06d794d-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.4757] device (tapb06d794d-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.475 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01496|binding|INFO|Claiming lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 for this chassis.
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01497|binding|INFO|b06d794d-14a7-4fd8-bd85-2e37bff5e446: Claiming fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.476 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.478 243456 DEBUG nova.compute.manager [req-1a0a0735-d085-4980-9686-5415fae56f95 req-23969133-0392-4d7d-9705-9e402e51e579 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-deleted-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.479 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.4855] manager: (tapf107fdf1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.485 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:0a:a1 10.100.0.5'], port_security=['fa:16:3e:02:0a:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b06d794d-14a7-4fd8-bd85-2e37bff5e446) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.487 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b06d794d-14a7-4fd8-bd85-2e37bff5e446 in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 bound to our chassis
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.488 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 10:41:20 compute-0 kernel: tapf107fdf1-b7: entered promiscuous mode
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01498|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 ovn-installed in OVS
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01499|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 up in Southbound
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.496 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01500|binding|INFO|Claiming lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 for this chassis.
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01501|binding|INFO|f107fdf1-b771-447b-adb6-bd9bc1e35b53: Claiming fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.5040] device (tapf107fdf1-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:41:20 compute-0 NetworkManager[49805]: <info>  [1772275280.5051] device (tapf107fdf1-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.511 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], port_security=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe01:f09a/64', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f107fdf1-b771-447b-adb6-bd9bc1e35b53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.508 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[dc911d56-a175-4ac0-93fd-185675954ead]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01502|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 ovn-installed in OVS
Feb 28 10:41:20 compute-0 ovn_controller[146846]: 2026-02-28T10:41:20Z|01503|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 up in Southbound
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:20 compute-0 systemd-machined[209480]: New machine qemu-177-instance-00000090.
Feb 28 10:41:20 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-00000090.
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.562 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af9afa1d-2f0e-43a2-b8e6-973658eeb028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.567 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f49ca0-6913-485f-b2b1-ee877fb52f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.594 243456 DEBUG oslo_concurrency.processutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.605 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9f160d-ddde-42cf-8aef-32b7f0b876f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7209cd86-9ecf-41b8-968a-da667e55c9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372266, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.646 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3939f68-4561-4f54-a105-4db9e276d305]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675468, 'tstamp': 675468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372269, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675472, 'tstamp': 675472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372269, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.650 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.652 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.654 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 ceph-mon[76304]: pgmap v2309: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.655 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.656 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.657 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.660 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f107fdf1-b771-447b-adb6-bd9bc1e35b53 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.663 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.679 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1a721200-d4fb-44e1-80a5-96f2deac7605]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.713 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[2e79a1e7-3ab3-4336-9d14-8504eb8d16c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.718 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[fd13343d-4e3d-4c05-8b30-a35c6c1eb539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.747 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ef361ac1-a8f1-4f45-b450-e71a4c10e365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2f871863-5fe8-4ab9-aa66-efc3d49211bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 5, 'rx_bytes': 1502, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 5, 'rx_bytes': 1502, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372309, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.765 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.766 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.785 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf68a074-4c86-4849-b5c2-eaf288261705]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcaa8646e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675562, 'tstamp': 675562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372311, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.788 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.791 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.792 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:20.793 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.795 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.795 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.796 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.797 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.798 243456 WARNING nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.798 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.799 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.799 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.800 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.801 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.802 243456 WARNING nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state active and task_state None.
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.802 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.803 243456 DEBUG nova.compute.manager [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing instance network info cache due to event network-changed-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.803 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.804 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.804 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Refreshing network info cache for port fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.926 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275280.9259956, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.927 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Started (Lifecycle Event)
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.945 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.949 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275280.9261737, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.950 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Paused (Lifecycle Event)
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.961 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.968 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.973 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:41:20 compute-0 nova_compute[243452]: 2026-02-28 10:41:20.993 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:41:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 10:41:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003126640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.195 243456 DEBUG oslo_concurrency.processutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.201 243456 DEBUG nova.compute.provider_tree [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.221 243456 DEBUG nova.scheduler.client.report [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.243 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.269 243456 INFO nova.scheduler.client.report [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance edd7bb04-60e7-4998-afe7-73fa36c25f5d
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.278 243456 DEBUG nova.network.neutron [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.304 243456 DEBUG oslo_concurrency.lockutils [req-893449e7-2fbb-436b-b92e-4f9b7195db0a req-53ecc196-d783-40cd-9cff-83036ea45b05 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-edd7bb04-60e7-4998-afe7-73fa36c25f5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.314 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port f107fdf1-b771-447b-adb6-bd9bc1e35b53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.315 243456 DEBUG nova.network.neutron [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.341 243456 DEBUG oslo_concurrency.lockutils [req-48f87dac-ac04-4cf8-b014-be49b855e63e req-115ed4cd-361b-4617-bcc7-7135d6ef21bd 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.346 243456 DEBUG oslo_concurrency.lockutils [None req-01db3413-341c-4984-af1c-25a651763a33 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.572 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.572 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-unplugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state deleted and task_state None.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.573 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "edd7bb04-60e7-4998-afe7-73fa36c25f5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] No waiting events found dispatching network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Received unexpected event network-vif-plugged-fdcbaa6a-ae18-47b2-a5c6-d5b7a9731565 for instance with vm_state deleted and task_state None.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.574 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Processing event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.575 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No event matching network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 in dict_keys([('network-vif-plugged', 'f107fdf1-b771-447b-adb6-bd9bc1e35b53')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state building and task_state spawning.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.576 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Processing event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG oslo_concurrency.lockutils [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 DEBUG nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.577 243456 WARNING nova.compute.manager [req-0e47e0ef-91d3-4ef0-b6c6-7b83028753e3 req-b755ef4a-b886-4171-9923-8f629f5e02b6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state building and task_state spawning.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.578 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.582 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275281.5822911, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.582 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Resumed (Lifecycle Event)
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.585 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.589 243456 INFO nova.virt.libvirt.driver [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance spawned successfully.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.590 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.609 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.616 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.619 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.620 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.621 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.621 243456 DEBUG nova.virt.libvirt.driver [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.654 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:41:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4003126640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.695 243456 INFO nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 8.22 seconds to spawn the instance on the hypervisor.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.695 243456 DEBUG nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.764 243456 INFO nova.compute.manager [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 9.29 seconds to build instance.
Feb 28 10:41:21 compute-0 nova_compute[243452]: 2026-02-28 10:41:21.781 243456 DEBUG oslo_concurrency.lockutils [None req-d5d91747-8237-4945-b2a6-a16f12453175 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:22 compute-0 ceph-mon[76304]: pgmap v2310: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 28 10:41:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 10:41:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.094 243456 DEBUG nova.compute.manager [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.094 243456 DEBUG nova.compute.manager [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing instance network info cache due to event network-changed-d3feb971-63a7-4d54-8310-9c6d40c29637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.095 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Refreshing network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.388 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.389 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.390 243456 INFO nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Terminating instance
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.391 243456 DEBUG nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:41:24 compute-0 kernel: tapd3feb971-63 (unregistering): left promiscuous mode
Feb 28 10:41:24 compute-0 NetworkManager[49805]: <info>  [1772275284.4448] device (tapd3feb971-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:24 compute-0 ovn_controller[146846]: 2026-02-28T10:41:24Z|01504|binding|INFO|Releasing lport d3feb971-63a7-4d54-8310-9c6d40c29637 from this chassis (sb_readonly=0)
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 ovn_controller[146846]: 2026-02-28T10:41:24Z|01505|binding|INFO|Setting lport d3feb971-63a7-4d54-8310-9c6d40c29637 down in Southbound
Feb 28 10:41:24 compute-0 ovn_controller[146846]: 2026-02-28T10:41:24Z|01506|binding|INFO|Removing iface tapd3feb971-63 ovn-installed in OVS
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.460 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.467 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:70:52 10.100.0.7'], port_security=['fa:16:3e:7a:70:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5e1a8d62-9ac1-417d-8194-58901bb4018e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '44c3724e-fd4e-435a-91b1-2ee7cbaa561d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d4da06-6dac-452d-951c-54e43b1c22a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=d3feb971-63a7-4d54-8310-9c6d40c29637) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.470 156681 INFO neutron.agent.ovn.metadata.agent [-] Port d3feb971-63a7-4d54-8310-9c6d40c29637 in datapath 11e06da5-bfc5-4a1a-9148-ff3afccf9569 unbound from our chassis
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.473 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 11e06da5-bfc5-4a1a-9148-ff3afccf9569, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.476 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e7381d-a167-43f6-adf8-a3da29223132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.477 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 namespace which is not needed anymore
Feb 28 10:41:24 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Feb 28 10:41:24 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008d.scope: Consumed 13.767s CPU time.
Feb 28 10:41:24 compute-0 systemd-machined[209480]: Machine qemu-174-instance-0000008d terminated.
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.631 243456 INFO nova.virt.libvirt.driver [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Instance destroyed successfully.
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.632 243456 DEBUG nova.objects.instance [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid 5e1a8d62-9ac1-417d-8194-58901bb4018e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:24 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : haproxy version is 2.8.14-c23fe91
Feb 28 10:41:24 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [NOTICE]   (369907) : path to executable is /usr/sbin/haproxy
Feb 28 10:41:24 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [WARNING]  (369907) : Exiting Master process...
Feb 28 10:41:24 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [ALERT]    (369907) : Current worker (369909) exited with code 143 (Terminated)
Feb 28 10:41:24 compute-0 neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569[369903]: [WARNING]  (369907) : All workers exited. Exiting... (0)
Feb 28 10:41:24 compute-0 systemd[1]: libpod-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope: Deactivated successfully.
Feb 28 10:41:24 compute-0 conmon[369903]: conmon 1ff0507443255768e763 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope/container/memory.events
Feb 28 10:41:24 compute-0 podman[372365]: 2026-02-28 10:41:24.652293931 +0000 UTC m=+0.060245497 container stop 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.653 243456 DEBUG nova.virt.libvirt.vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-806666479',display_name='tempest-TestNetworkBasicOps-server-806666479',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-806666479',id=141,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEvYU5ohMTONYZF98enrXff7H/UA6S+83Ft8Ojoxq+P+keZUL46io/3fcohxtuAI5aeVtpG6o1nJ2kDJwbKtvHAweQjJLzp2omWlOmQ8VQJrOL3ujh53baZZlNUH6C9OQ==',key_name='tempest-TestNetworkBasicOps-2075562092',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-12rzj72r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:32Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=5e1a8d62-9ac1-417d-8194-58901bb4018e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.653 243456 DEBUG nova.network.os_vif_util [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.654 243456 DEBUG nova.network.os_vif_util [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.654 243456 DEBUG os_vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.658 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.659 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3feb971-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.666 243456 INFO os_vif [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:70:52,bridge_name='br-int',has_traffic_filtering=True,id=d3feb971-63a7-4d54-8310-9c6d40c29637,network=Network(11e06da5-bfc5-4a1a-9148-ff3afccf9569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3feb971-63')
Feb 28 10:41:24 compute-0 podman[372365]: 2026-02-28 10:41:24.683300079 +0000 UTC m=+0.091251695 container died 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:41:24 compute-0 ceph-mon[76304]: pgmap v2311: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Feb 28 10:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1-userdata-shm.mount: Deactivated successfully.
Feb 28 10:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec7ac42405f247e62e492745590376a4dfe553da32d8a80cb4173cd7e7d5ce14-merged.mount: Deactivated successfully.
Feb 28 10:41:24 compute-0 podman[372365]: 2026-02-28 10:41:24.742834925 +0000 UTC m=+0.150786511 container cleanup 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 systemd[1]: libpod-conmon-1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1.scope: Deactivated successfully.
Feb 28 10:41:24 compute-0 podman[372420]: 2026-02-28 10:41:24.844128313 +0000 UTC m=+0.079922564 container remove 1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.849 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[97a35ffb-900b-4983-af49-d527fad724f1]: (4, ('Sat Feb 28 10:41:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 (1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1)\n1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1\nSat Feb 28 10:41:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 (1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1)\n1ff0507443255768e7639335a77d8b5e5e7c7055605357eced658e2fb49249c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.854 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5911f52f-08f3-405e-9a71-10170d550e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.855 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11e06da5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:24 compute-0 kernel: tap11e06da5-b0: left promiscuous mode
Feb 28 10:41:24 compute-0 nova_compute[243452]: 2026-02-28 10:41:24.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.868 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2fe6c4-660a-4483-8e80-d1e3140cf5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.885 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0c244f67-0700-4169-8b0c-263326f92ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.888 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ade82b7b-e713-4003-868f-e9a061b73a30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.909 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5970d7ec-f5db-4646-8f7d-f709f235de62]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673390, 'reachable_time': 22571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372436, 'error': None, 'target': 'ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d11e06da5\x2dbfc5\x2d4a1a\x2d9148\x2dff3afccf9569.mount: Deactivated successfully.
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.912 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-11e06da5-bfc5-4a1a-9148-ff3afccf9569 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:41:24 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:24.912 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[1df218c5-8d6c-4539-968f-1a211b98f9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.017 243456 INFO nova.virt.libvirt.driver [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deleting instance files /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e_del
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.018 243456 INFO nova.virt.libvirt.driver [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deletion of /var/lib/nova/instances/5e1a8d62-9ac1-417d-8194-58901bb4018e_del complete
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.090 243456 INFO nova.compute.manager [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 0.70 seconds to destroy the instance on the hypervisor.
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.090 243456 DEBUG oslo.service.loopingcall [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.091 243456 DEBUG nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.091 243456 DEBUG nova.network.neutron [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:41:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 957 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.656 243456 DEBUG nova.network.neutron [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.684 243456 INFO nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Took 0.59 seconds to deallocate network for instance.
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.695 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updated VIF entry in instance network info cache for port d3feb971-63a7-4d54-8310-9c6d40c29637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.695 243456 DEBUG nova.network.neutron [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [{"id": "d3feb971-63a7-4d54-8310-9c6d40c29637", "address": "fa:16:3e:7a:70:52", "network": {"id": "11e06da5-bfc5-4a1a-9148-ff3afccf9569", "bridge": "br-int", "label": "tempest-network-smoke--1024959127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3feb971-63", "ovs_interfaceid": "d3feb971-63a7-4d54-8310-9c6d40c29637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.741 243456 DEBUG oslo_concurrency.lockutils [req-febe04c7-869c-498e-829f-d534f9347016 req-8d1d79a7-eca5-4ea2-b1c9-ece23f522d60 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-5e1a8d62-9ac1-417d-8194-58901bb4018e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.747 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.748 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.782 243456 DEBUG nova.compute.manager [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG nova.compute.manager [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.783 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:25 compute-0 nova_compute[243452]: 2026-02-28 10:41:25.824 243456 DEBUG oslo_concurrency.processutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.202 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.203 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.204 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.204 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.205 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.205 243456 WARNING nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-unplugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state deleted and task_state None.
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.206 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.206 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.207 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.208 243456 DEBUG oslo_concurrency.lockutils [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.208 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] No waiting events found dispatching network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.209 243456 WARNING nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received unexpected event network-vif-plugged-d3feb971-63a7-4d54-8310-9c6d40c29637 for instance with vm_state deleted and task_state None.
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.210 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Received event network-vif-deleted-d3feb971-63a7-4d54-8310-9c6d40c29637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.210 243456 INFO nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Neutron deleted interface d3feb971-63a7-4d54-8310-9c6d40c29637; detaching it from the instance and deleting it from the info cache
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.211 243456 DEBUG nova.network.neutron [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.254 243456 DEBUG nova.compute.manager [req-51e7c80c-1ed5-45e8-94a3-fad5b4b85881 req-7141fdf1-56a2-42ba-affc-821f22caa995 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Detach interface failed, port_id=d3feb971-63a7-4d54-8310-9c6d40c29637, reason: Instance 5e1a8d62-9ac1-417d-8194-58901bb4018e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:41:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289016923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.466 243456 DEBUG oslo_concurrency.processutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.477 243456 DEBUG nova.compute.provider_tree [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.497 243456 DEBUG nova.scheduler.client.report [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.532 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.560 243456 INFO nova.scheduler.client.report [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance 5e1a8d62-9ac1-417d-8194-58901bb4018e
Feb 28 10:41:26 compute-0 nova_compute[243452]: 2026-02-28 10:41:26.658 243456 DEBUG oslo_concurrency.lockutils [None req-2a8287fd-6e5b-458a-8ddc-fbf49d7dbed9 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "5e1a8d62-9ac1-417d-8194-58901bb4018e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:26 compute-0 ceph-mon[76304]: pgmap v2312: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 957 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 28 10:41:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3289016923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 343 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Feb 28 10:41:27 compute-0 nova_compute[243452]: 2026-02-28 10:41:27.111 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:27 compute-0 nova_compute[243452]: 2026-02-28 10:41:27.112 243456 DEBUG nova.network.neutron [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:27 compute-0 nova_compute[243452]: 2026-02-28 10:41:27.139 243456 DEBUG oslo_concurrency.lockutils [req-b2d69d29-a6d5-44ab-91c5-9b96a2b280ea req-42bb6533-b9e3-4a1b-92c6-61b6a593c5a8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:28 compute-0 ceph-mon[76304]: pgmap v2313: 305 pgs: 305 active+clean; 343 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Feb 28 10:41:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 306 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 576 KiB/s wr, 121 op/s
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:41:29
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root']
Feb 28 10:41:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:41:29 compute-0 nova_compute[243452]: 2026-02-28 10:41:29.663 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:29 compute-0 nova_compute[243452]: 2026-02-28 10:41:29.752 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:41:30 compute-0 ovn_controller[146846]: 2026-02-28T10:41:30Z|01507|binding|INFO|Releasing lport e3227d18-ed73-459d-b1a1-aecd179beb21 from this chassis (sb_readonly=0)
Feb 28 10:41:30 compute-0 ovn_controller[146846]: 2026-02-28T10:41:30Z|01508|binding|INFO|Releasing lport 7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb from this chassis (sb_readonly=0)
Feb 28 10:41:30 compute-0 nova_compute[243452]: 2026-02-28 10:41:30.442 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:30 compute-0 ceph-mon[76304]: pgmap v2314: 305 pgs: 305 active+clean; 306 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 576 KiB/s wr, 121 op/s
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:41:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:41:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 129 op/s
Feb 28 10:41:32 compute-0 ceph-mon[76304]: pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 129 op/s
Feb 28 10:41:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 392 KiB/s wr, 111 op/s
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.732272) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293733028, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 840054, "memory_usage": 854136, "flush_reason": "Manual Compaction"}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293739818, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 820390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49017, "largest_seqno": 49716, "table_properties": {"data_size": 816768, "index_size": 1466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8444, "raw_average_key_size": 19, "raw_value_size": 809436, "raw_average_value_size": 1869, "num_data_blocks": 65, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275244, "oldest_key_time": 1772275244, "file_creation_time": 1772275293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 7607 microseconds, and 2900 cpu microseconds.
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.739887) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 820390 bytes OK
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.739912) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741546) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741566) EVENT_LOG_v1 {"time_micros": 1772275293741557, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.741588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 836391, prev total WAL file size 836391, number of live WAL files 2.
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.742271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(801KB)], [113(9063KB)]
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293742321, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10101465, "oldest_snapshot_seqno": -1}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7011 keys, 8369619 bytes, temperature: kUnknown
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293776512, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8369619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8325368, "index_size": 25643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182729, "raw_average_key_size": 26, "raw_value_size": 8202819, "raw_average_value_size": 1169, "num_data_blocks": 994, "num_entries": 7011, "num_filter_entries": 7011, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.776749) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8369619 bytes
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.777782) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 294.8 rd, 244.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(22.5) write-amplify(10.2) OK, records in: 7524, records dropped: 513 output_compression: NoCompression
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.777802) EVENT_LOG_v1 {"time_micros": 1772275293777792, "job": 68, "event": "compaction_finished", "compaction_time_micros": 34268, "compaction_time_cpu_micros": 17902, "output_level": 6, "num_output_files": 1, "total_output_size": 8369619, "num_input_records": 7524, "num_output_records": 7011, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293777982, "job": 68, "event": "table_file_deletion", "file_number": 115}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275293778693, "job": 68, "event": "table_file_deletion", "file_number": 113}
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.742178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:41:33.778811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:41:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:34 compute-0 nova_compute[243452]: 2026-02-28 10:41:34.286 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275279.2850125, edd7bb04-60e7-4998-afe7-73fa36c25f5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:34 compute-0 nova_compute[243452]: 2026-02-28 10:41:34.287 243456 INFO nova.compute.manager [-] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] VM Stopped (Lifecycle Event)
Feb 28 10:41:34 compute-0 nova_compute[243452]: 2026-02-28 10:41:34.312 243456 DEBUG nova.compute.manager [None req-7bac265c-80cc-4a06-894b-a35ba8569eda - - - - - -] [instance: edd7bb04-60e7-4998-afe7-73fa36c25f5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:34 compute-0 nova_compute[243452]: 2026-02-28 10:41:34.665 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:34 compute-0 ovn_controller[146846]: 2026-02-28T10:41:34Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 10:41:34 compute-0 ovn_controller[146846]: 2026-02-28T10:41:34Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:0a:a1 10.100.0.5
Feb 28 10:41:34 compute-0 ceph-mon[76304]: pgmap v2316: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 392 KiB/s wr, 111 op/s
Feb 28 10:41:34 compute-0 nova_compute[243452]: 2026-02-28 10:41:34.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 290 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 114 op/s
Feb 28 10:41:36 compute-0 nova_compute[243452]: 2026-02-28 10:41:36.286 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:36 compute-0 ceph-mon[76304]: pgmap v2317: 305 pgs: 305 active+clean; 290 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 114 op/s
Feb 28 10:41:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 305 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Feb 28 10:41:38 compute-0 ceph-mon[76304]: pgmap v2318: 305 pgs: 305 active+clean; 305 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Feb 28 10:41:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:41:39 compute-0 nova_compute[243452]: 2026-02-28 10:41:39.629 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275284.6277022, 5e1a8d62-9ac1-417d-8194-58901bb4018e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:39 compute-0 nova_compute[243452]: 2026-02-28 10:41:39.630 243456 INFO nova.compute.manager [-] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] VM Stopped (Lifecycle Event)
Feb 28 10:41:39 compute-0 nova_compute[243452]: 2026-02-28 10:41:39.654 243456 DEBUG nova.compute.manager [None req-f5135037-bfaf-46bb-9c1c-3d854d5ccd28 - - - - - -] [instance: 5e1a8d62-9ac1-417d-8194-58901bb4018e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:39 compute-0 nova_compute[243452]: 2026-02-28 10:41:39.669 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:39 compute-0 nova_compute[243452]: 2026-02-28 10:41:39.756 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:40 compute-0 nova_compute[243452]: 2026-02-28 10:41:40.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:40 compute-0 ceph-mon[76304]: pgmap v2319: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015326591723209138 of space, bias 1.0, pg target 0.4597977516962742 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939305755770236 of space, bias 1.0, pg target 0.7481791726731071 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.298182577967931e-07 of space, bias 4.0, pg target 0.0008757819093561517 quantized to 16 (current 16)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:41:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:41:42 compute-0 ceph-mon[76304]: pgmap v2320: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 28 10:41:43 compute-0 nova_compute[243452]: 2026-02-28 10:41:43.103 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:41:43 compute-0 nova_compute[243452]: 2026-02-28 10:41:43.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:44 compute-0 nova_compute[243452]: 2026-02-28 10:41:44.671 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:44 compute-0 nova_compute[243452]: 2026-02-28 10:41:44.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:44 compute-0 ceph-mon[76304]: pgmap v2321: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:41:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.357 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.358 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.358 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.359 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.359 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.361 243456 INFO nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Terminating instance
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.363 243456 DEBUG nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:41:45 compute-0 kernel: tapb06d794d-14 (unregistering): left promiscuous mode
Feb 28 10:41:45 compute-0 NetworkManager[49805]: <info>  [1772275305.4256] device (tapb06d794d-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01509|binding|INFO|Releasing lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 from this chassis (sb_readonly=0)
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01510|binding|INFO|Setting lport b06d794d-14a7-4fd8-bd85-2e37bff5e446 down in Southbound
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01511|binding|INFO|Removing iface tapb06d794d-14 ovn-installed in OVS
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 kernel: tapf107fdf1-b7 (unregistering): left promiscuous mode
Feb 28 10:41:45 compute-0 NetworkManager[49805]: <info>  [1772275305.4440] device (tapf107fdf1-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01512|binding|INFO|Releasing lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 from this chassis (sb_readonly=1)
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01513|binding|INFO|Removing iface tapf107fdf1-b7 ovn-installed in OVS
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01514|if_status|INFO|Dropped 1 log messages in last 1231 seconds (most recently, 1231 seconds ago) due to excessive rate
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01515|if_status|INFO|Not setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 down as sb is readonly
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.459 243456 DEBUG nova.compute.manager [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.460 243456 DEBUG nova.compute.manager [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing instance network info cache due to event network-changed-b06d794d-14a7-4fd8-bd85-2e37bff5e446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.460 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.461 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.461 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Refreshing network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.463 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_controller[146846]: 2026-02-28T10:41:45Z|01516|binding|INFO|Setting lport f107fdf1-b771-447b-adb6-bd9bc1e35b53 down in Southbound
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.490 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:0a:a1 10.100.0.5'], port_security=['fa:16:3e:02:0a:a1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=b06d794d-14a7-4fd8-bd85-2e37bff5e446) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.491 156681 INFO neutron.agent.ovn.metadata.agent [-] Port b06d794d-14a7-4fd8-bd85-2e37bff5e446 in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 unbound from our chassis
Feb 28 10:41:45 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Deactivated successfully.
Feb 28 10:41:45 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000090.scope: Consumed 13.581s CPU time.
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.493 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9
Feb 28 10:41:45 compute-0 systemd-machined[209480]: Machine qemu-177-instance-00000090 terminated.
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.506 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdb2244-7389-464b-ba25-544648de1f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.540 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5608660d-f281-44b3-a85d-d3c96a67b9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.543 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[004df7a6-feeb-42e9-adfb-b27960749019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.570 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[74fbbb07-23f8-4a9b-a693-4cc66f4f103a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.589 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.591 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcfd7b9-93d3-4f57-b16e-3540f42bfe66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0ad6cbe-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 440], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675455, 'reachable_time': 42136, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372476, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 NetworkManager[49805]: <info>  [1772275305.5939] manager: (tapf107fdf1-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.599 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:41:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:41:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:41:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.609 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e376bd01-84e2-4759-a674-907a2389db00]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675468, 'tstamp': 675468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372489, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb0ad6cbe-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675472, 'tstamp': 675472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372489, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.611 243456 INFO nova.virt.libvirt.driver [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Instance destroyed successfully.
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.612 243456 DEBUG nova.objects.instance [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 47d618f6-612e-4944-8a4d-a3509d6e3d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.612 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.613 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.621 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.621 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0ad6cbe-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.622 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.622 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0ad6cbe-80, col_values=(('external_ids', {'iface-id': '7d3f4d3d-8eb5-46b3-a0b2-7c10be424ebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.623 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.647 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], port_security=['fa:16:3e:01:f0:9a 2001:db8::f816:3eff:fe01:f09a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe01:f09a/64', 'neutron:device_id': '47d618f6-612e-4944-8a4d-a3509d6e3d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=f107fdf1-b771-447b-adb6-bd9bc1e35b53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.649 156681 INFO neutron.agent.ovn.metadata.agent [-] Port f107fdf1-b771-447b-adb6-bd9bc1e35b53 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.650 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network caa8646e-5c97-4eb8-add7-69ea9ee54379
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.669 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1caf53f7-a8b9-451b-aad6-b69dcb2b6116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.701 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[35d59c8a-8303-4dac-b81d-278a528ae6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.706 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4a0d3d-8f0c-4e8b-8a7e-f4d987cbfdcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.734 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c854aca3-0df3-4285-b530-6b0a0d001dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.751 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63de763c-1a7b-43f1-8f82-483e94dd1b68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcaa8646e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:7b:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 2472, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 2472, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675547, 'reachable_time': 36290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372504, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.772 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e85014e-2b1d-4609-a65f-b7a625d606e4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcaa8646e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675562, 'tstamp': 675562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372505, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.774 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcaa8646e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.784 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcaa8646e-50, col_values=(('external_ids', {'iface-id': 'e3227d18-ed73-459d-b1a1-aecd179beb21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:45.785 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:41:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3468049051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.803 243456 DEBUG nova.virt.libvirt.vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:21Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.804 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.805 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.806 243456 DEBUG os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.808 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb06d794d-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.812 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.816 243456 INFO os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:0a:a1,bridge_name='br-int',has_traffic_filtering=True,id=b06d794d-14a7-4fd8-bd85-2e37bff5e446,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06d794d-14')
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.817 243456 DEBUG nova.virt.libvirt.vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-740899339',display_name='tempest-TestGettingAddress-server-740899339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-740899339',id=144,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-uspjsb87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:21Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=47d618f6-612e-4944-8a4d-a3509d6e3d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.817 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.818 243456 DEBUG nova.network.os_vif_util [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.818 243456 DEBUG os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.820 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf107fdf1-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:45 compute-0 nova_compute[243452]: 2026-02-28 10:41:45.825 243456 INFO os_vif [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:f0:9a,bridge_name='br-int',has_traffic_filtering=True,id=f107fdf1-b771-447b-adb6-bd9bc1e35b53,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf107fdf1-b7')
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.198 243456 INFO nova.virt.libvirt.driver [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deleting instance files /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35_del
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.200 243456 INFO nova.virt.libvirt.driver [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deletion of /var/lib/nova/instances/47d618f6-612e-4944-8a4d-a3509d6e3d35_del complete
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.272 243456 INFO nova.compute.manager [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 0.91 seconds to destroy the instance on the hypervisor.
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.273 243456 DEBUG oslo.service.loopingcall [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.273 243456 DEBUG nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:41:46 compute-0 nova_compute[243452]: 2026-02-28 10:41:46.274 243456 DEBUG nova.network.neutron [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:41:46 compute-0 ceph-mon[76304]: pgmap v2322: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:41:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.149 243456 DEBUG nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-deleted-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.149 243456 INFO nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Neutron deleted interface b06d794d-14a7-4fd8-bd85-2e37bff5e446; detaching it from the instance and deleting it from the info cache
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.150 243456 DEBUG nova.network.neutron [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.180 243456 DEBUG nova.compute.manager [req-29581a3f-4346-4662-b8f6-907fef5b7c37 req-c5f4b36a-7318-406a-97f7-680405ff3902 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Detach interface failed, port_id=b06d794d-14a7-4fd8-bd85-2e37bff5e446, reason: Instance 47d618f6-612e-4944-8a4d-a3509d6e3d35 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.241 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.241 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.262 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.343 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updated VIF entry in instance network info cache for port b06d794d-14a7-4fd8-bd85-2e37bff5e446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.344 243456 DEBUG nova.network.neutron [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [{"id": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "address": "fa:16:3e:02:0a:a1", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06d794d-14", "ovs_interfaceid": "b06d794d-14a7-4fd8-bd85-2e37bff5e446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "address": "fa:16:3e:01:f0:9a", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe01:f09a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf107fdf1-b7", "ovs_interfaceid": "f107fdf1-b771-447b-adb6-bd9bc1e35b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.355 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.356 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.360 243456 DEBUG oslo_concurrency.lockutils [req-7c03dd9b-c6b2-492b-bfea-14c86cbab202 req-f3b9d133-da29-4e8e-9162-08a5a4f07ff0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-47d618f6-612e-4944-8a4d-a3509d6e3d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.365 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.366 243456 INFO nova.compute.claims [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.448 243456 DEBUG nova.network.neutron [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.472 243456 INFO nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Took 1.20 seconds to deallocate network for instance.
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.501 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.550 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.557 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.557 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.558 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.559 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-unplugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state deleted and task_state None.
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.559 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.560 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.561 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.561 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-b06d794d-14a7-4fd8-bd85-2e37bff5e446 for instance with vm_state deleted and task_state None.
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.562 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.563 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.563 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-unplugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state deleted and task_state None.
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.564 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG oslo_concurrency.lockutils [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.565 243456 DEBUG nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] No waiting events found dispatching network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:47 compute-0 nova_compute[243452]: 2026-02-28 10:41:47.566 243456 WARNING nova.compute.manager [req-12efbbbc-5615-4f2f-ab70-dcbdf06c337c req-552fcb8f-de75-4597-a0d4-6afbcfd30bb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received unexpected event network-vif-plugged-f107fdf1-b771-447b-adb6-bd9bc1e35b53 for instance with vm_state deleted and task_state None.
Feb 28 10:41:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130683762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.114 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.122 243456 DEBUG nova.compute.provider_tree [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.138 243456 DEBUG nova.scheduler.client.report [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.161 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.163 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.168 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.218 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.218 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.237 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.257 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.363 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.365 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.366 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating image(s)
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.400 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.437 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.477 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.481 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.524 243456 DEBUG nova.policy [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecebf54c6f3344489cf5e51d91d8ed5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2fef3ce64b984ecab0903a053eab58f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.553 243456 DEBUG oslo_concurrency.processutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.592 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.593 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.593 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.594 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.619 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.623 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:48 compute-0 ceph-mon[76304]: pgmap v2323: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Feb 28 10:41:48 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/130683762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.889 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:48 compute-0 nova_compute[243452]: 2026-02-28 10:41:48.969 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] resizing rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:41:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.081 243456 DEBUG nova.objects.instance [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'migration_context' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.097 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.097 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Ensure instance console log exists: /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.098 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 66 KiB/s wr, 27 op/s
Feb 28 10:41:49 compute-0 podman[372735]: 2026-02-28 10:41:49.136318829 +0000 UTC m=+0.069304263 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:41:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:49 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633658776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.162 243456 DEBUG oslo_concurrency.processutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:49 compute-0 podman[372718]: 2026-02-28 10:41:49.169255212 +0000 UTC m=+0.103993716 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.169 243456 DEBUG nova.compute.provider_tree [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.185 243456 DEBUG nova.scheduler.client.report [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.206 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.225 243456 INFO nova.scheduler.client.report [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 47d618f6-612e-4944-8a4d-a3509d6e3d35
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.237 243456 DEBUG nova.compute.manager [req-6f4fce3b-020c-4c6a-8018-e85dd6261155 req-96a7cf5d-ed00-4ad7-8389-6a6b0ae3b991 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Received event network-vif-deleted-f107fdf1-b771-447b-adb6-bd9bc1e35b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.305 243456 DEBUG oslo_concurrency.lockutils [None req-33508940-6c58-4253-acf9-1bcd1c50dd72 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "47d618f6-612e-4944-8a4d-a3509d6e3d35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.543 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Successfully created port: 85b213ac-7186-4120-8f8e-043293c9de8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:41:49 compute-0 nova_compute[243452]: 2026-02-28 10:41:49.761 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:49 compute-0 ceph-mon[76304]: pgmap v2324: 305 pgs: 305 active+clean; 287 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 66 KiB/s wr, 27 op/s
Feb 28 10:41:49 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1633658776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.380 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Successfully updated port: 85b213ac-7186-4120-8f8e-043293c9de8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.396 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.397 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.560 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:41:50 compute-0 nova_compute[243452]: 2026-02-28 10:41:50.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Feb 28 10:41:51 compute-0 sshd-session[372784]: Invalid user sol from 45.148.10.240 port 37262
Feb 28 10:41:51 compute-0 sshd-session[372784]: Connection closed by invalid user sol 45.148.10.240 port 37262 [preauth]
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.328 243456 DEBUG nova.compute.manager [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.329 243456 DEBUG nova.compute.manager [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.329 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.340 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.455 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.455 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.456 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.457 243456 INFO nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Terminating instance
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.458 243456 DEBUG nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.494 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.495 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:51 compute-0 kernel: tap5952cc57-b2 (unregistering): left promiscuous mode
Feb 28 10:41:51 compute-0 NetworkManager[49805]: <info>  [1772275311.5200] device (tap5952cc57-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.528 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01517|binding|INFO|Releasing lport 5952cc57-b25c-40a2-b208-47e2104b88ad from this chassis (sb_readonly=0)
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01518|binding|INFO|Setting lport 5952cc57-b25c-40a2-b208-47e2104b88ad down in Southbound
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01519|binding|INFO|Removing iface tap5952cc57-b2 ovn-installed in OVS
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.533 243456 DEBUG nova.network.neutron [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.538 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9b:bb 10.100.0.13'], port_security=['fa:16:3e:49:9b:bb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e740c691-959b-49dd-920a-81817c99bba9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5952cc57-b25c-40a2-b208-47e2104b88ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.539 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.540 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5952cc57-b25c-40a2-b208-47e2104b88ad in datapath b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 unbound from our chassis
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.541 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:41:51 compute-0 kernel: tap6914940c-92 (unregistering): left promiscuous mode
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.542 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0644aca6-ad8c-43d3-974d-51f9632839e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.544 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 namespace which is not needed anymore
Feb 28 10:41:51 compute-0 NetworkManager[49805]: <info>  [1772275311.5450] device (tap6914940c-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.551 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.551 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance network_info: |[{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.552 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.552 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.554 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start _get_guest_xml network_info=[{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01520|binding|INFO|Releasing lport 6914940c-920a-4dc2-982a-9ae63584aee2 from this chassis (sb_readonly=0)
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01521|binding|INFO|Setting lport 6914940c-920a-4dc2-982a-9ae63584aee2 down in Southbound
Feb 28 10:41:51 compute-0 ovn_controller[146846]: 2026-02-28T10:41:51Z|01522|binding|INFO|Removing iface tap6914940c-92 ovn-installed in OVS
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.563 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], port_security=['fa:16:3e:fb:c8:97 2001:db8::f816:3eff:fefb:c897'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fefb:c897/64', 'neutron:device_id': '9b081668-1653-448a-957e-da1ead7ecd21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67f9b9ea-1e17-4554-8bb9-a54c6cbc146f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b98334fa-e37e-4532-99f2-fef0d70a8f7d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=6914940c-920a-4dc2-982a-9ae63584aee2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.563 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.576 243456 WARNING nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.583 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.583 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.586 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.libvirt.host [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.587 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.588 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.589 243456 DEBUG nova.virt.hardware [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:41:51 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 28 10:41:51 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 14.766s CPU time.
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.593 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:51 compute-0 systemd-machined[209480]: Machine qemu-175-instance-0000008e terminated.
Feb 28 10:41:51 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : haproxy version is 2.8.14-c23fe91
Feb 28 10:41:51 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [NOTICE]   (370611) : path to executable is /usr/sbin/haproxy
Feb 28 10:41:51 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [WARNING]  (370611) : Exiting Master process...
Feb 28 10:41:51 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [ALERT]    (370611) : Current worker (370613) exited with code 143 (Terminated)
Feb 28 10:41:51 compute-0 neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9[370607]: [WARNING]  (370611) : All workers exited. Exiting... (0)
Feb 28 10:41:51 compute-0 systemd[1]: libpod-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope: Deactivated successfully.
Feb 28 10:41:51 compute-0 podman[372813]: 2026-02-28 10:41:51.686408027 +0000 UTC m=+0.044722568 container died 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:41:51 compute-0 NetworkManager[49805]: <info>  [1772275311.6927] manager: (tap6914940c-92): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.721 243456 INFO nova.virt.libvirt.driver [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Instance destroyed successfully.
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.724 243456 DEBUG nova.objects.instance [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 9b081668-1653-448a-957e-da1ead7ecd21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9-userdata-shm.mount: Deactivated successfully.
Feb 28 10:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b49f6fee2a5624b0785481ede3d870b1d9e7cdf89c50a35e547fbbaa11922ea-merged.mount: Deactivated successfully.
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.760 243456 DEBUG nova.virt.libvirt.vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:49Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.761 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.762 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.763 243456 DEBUG os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.765 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5952cc57-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.776 243456 INFO os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9b:bb,bridge_name='br-int',has_traffic_filtering=True,id=5952cc57-b25c-40a2-b208-47e2104b88ad,network=Network(b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5952cc57-b2')
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.777 243456 DEBUG nova.virt.libvirt.vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:40:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1426595857',display_name='tempest-TestGettingAddress-server-1426595857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1426595857',id=142,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIFM8yB08pIGURi8yBxt5PdCNey3s6NrP3no5rEh86EsRCqvxaXm0AljpWt6cD1IWrli2324RMsAEvYyRpYidnTv9SaSqyUCtTNlJoKd1y2+M8lrZO3SJcVKRfWdVShAUg==',key_name='tempest-TestGettingAddress-1215811874',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:40:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-nmxy1pc6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:40:49Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9b081668-1653-448a-957e-da1ead7ecd21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.777 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.778 243456 DEBUG nova.network.os_vif_util [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.778 243456 DEBUG os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.780 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.780 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6914940c-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.785 243456 INFO os_vif [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=6914940c-920a-4dc2-982a-9ae63584aee2,network=Network(caa8646e-5c97-4eb8-add7-69ea9ee54379),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6914940c-92')
Feb 28 10:41:51 compute-0 podman[372813]: 2026-02-28 10:41:51.793383116 +0000 UTC m=+0.151697647 container cleanup 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 10:41:51 compute-0 systemd[1]: libpod-conmon-3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9.scope: Deactivated successfully.
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.858 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.859 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.859 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG oslo_concurrency.lockutils [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.860 243456 DEBUG nova.compute.manager [req-a041fb39-a4dd-48eb-8459-c41cf334f420 req-99933234-8235-4709-8ab6-9b36560795e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:41:51 compute-0 podman[372892]: 2026-02-28 10:41:51.907922459 +0000 UTC m=+0.093971482 container remove 3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.915 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[610fbf0d-9ecf-4ffe-86fb-bb14912edfd2]: (4, ('Sat Feb 28 10:41:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 (3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9)\n3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9\nSat Feb 28 10:41:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 (3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9)\n3b646f4c5d77c4ee7f2122c71e56e84bd5746909b8bd5bd5d31d2e2bd13b98a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.917 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[40dea92b-a547-44e2-93b3-0faec6fbc91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.918 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0ad6cbe-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.920 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 kernel: tapb0ad6cbe-80: left promiscuous mode
Feb 28 10:41:51 compute-0 nova_compute[243452]: 2026-02-28 10:41:51.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.932 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c53ccc48-f83b-4e24-bf0a-b1411e2d80b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.953 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[26864579-f530-4598-81e3-8b3bd2e3df16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e8be0169-14d2-4597-8e52-e6ab46c69d21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.973 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5733ea82-6f54-43d8-939b-b6fe06d391e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675446, 'reachable_time': 22616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372916, 'error': None, 'target': 'ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 systemd[1]: run-netns-ovnmeta\x2db0ad6cbe\x2d89fe\x2d42da\x2da806\x2dcec0a8b2c3c9.mount: Deactivated successfully.
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.978 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.978 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7a42e337-23b2-473b-bfe5-9a23943a0610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.980 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 6914940c-920a-4dc2-982a-9ae63584aee2 in datapath caa8646e-5c97-4eb8-add7-69ea9ee54379 unbound from our chassis
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.981 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network caa8646e-5c97-4eb8-add7-69ea9ee54379, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.983 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2cffd0-1166-404d-9337-8ddae9b82841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:51 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:51.983 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 namespace which is not needed anymore
Feb 28 10:41:52 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : haproxy version is 2.8.14-c23fe91
Feb 28 10:41:52 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [NOTICE]   (370684) : path to executable is /usr/sbin/haproxy
Feb 28 10:41:52 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [WARNING]  (370684) : Exiting Master process...
Feb 28 10:41:52 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [ALERT]    (370684) : Current worker (370686) exited with code 143 (Terminated)
Feb 28 10:41:52 compute-0 neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379[370680]: [WARNING]  (370684) : All workers exited. Exiting... (0)
Feb 28 10:41:52 compute-0 systemd[1]: libpod-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope: Deactivated successfully.
Feb 28 10:41:52 compute-0 podman[372935]: 2026-02-28 10:41:52.117221946 +0000 UTC m=+0.045694635 container died c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:41:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:41:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4269545208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.165 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28-userdata-shm.mount: Deactivated successfully.
Feb 28 10:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb10f203ca6c1c169e1b23413d123cb3b37254372fe81642560a63587f0c639-merged.mount: Deactivated successfully.
Feb 28 10:41:52 compute-0 ceph-mon[76304]: pgmap v2325: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Feb 28 10:41:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4269545208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:52 compute-0 podman[372935]: 2026-02-28 10:41:52.201859212 +0000 UTC m=+0.130331901 container cleanup c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:41:52 compute-0 systemd[1]: libpod-conmon-c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28.scope: Deactivated successfully.
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.223 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.228 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.281 243456 INFO nova.virt.libvirt.driver [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deleting instance files /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21_del
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.282 243456 INFO nova.virt.libvirt.driver [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deletion of /var/lib/nova/instances/9b081668-1653-448a-957e-da1ead7ecd21_del complete
Feb 28 10:41:52 compute-0 podman[372983]: 2026-02-28 10:41:52.31266667 +0000 UTC m=+0.086115039 container remove c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.318 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e50c991f-754e-4d7b-b9f1-fba56830b78d]: (4, ('Sat Feb 28 10:41:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 (c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28)\nc74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28\nSat Feb 28 10:41:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 (c74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28)\nc74ff5cd935cc0cdd0bfab72ab600de74910123432ee714f7b8fd4a5064dae28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.320 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac435c-1146-4bf4-bc38-a5d7a5028054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.321 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcaa8646e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 kernel: tapcaa8646e-50: left promiscuous mode
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.332 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.335 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[16a25821-7169-4945-bbb5-bd64e6947e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.339 243456 INFO nova.compute.manager [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 0.88 seconds to destroy the instance on the hypervisor.
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG oslo.service.loopingcall [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.340 243456 DEBUG nova.network.neutron [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.351 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[715066c2-7f1e-4ca5-a96a-8f1c85bf020a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.352 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f53b7f7c-f899-45b5-a62c-8ab6a9679f0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.371 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ba037da1-db70-4dbe-b074-ceda39fb2143]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675539, 'reachable_time': 37825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373003, 'error': None, 'target': 'ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.373 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-caa8646e-5c97-4eb8-add7-69ea9ee54379 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:41:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:52.373 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[3e73faa6-8819-48a9-aaf1-ec7c3382bef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dcaa8646e\x2d5c97\x2d4eb8\x2dadd7\x2d69ea9ee54379.mount: Deactivated successfully.
Feb 28 10:41:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:41:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004734532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.859 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.861 243456 DEBUG nova.virt.libvirt.vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:48Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.861 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.862 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.863 243456 DEBUG nova.objects.instance [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.882 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <uuid>af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</uuid>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <name>instance-00000091</name>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:name>tempest-TestNetworkBasicOps-server-1524442108</nova:name>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:41:51</nova:creationTime>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:user uuid="ecebf54c6f3344489cf5e51d91d8ed5a">tempest-TestNetworkBasicOps-654042014-project-member</nova:user>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:project uuid="2fef3ce64b984ecab0903a053eab58f6">tempest-TestNetworkBasicOps-654042014</nova:project>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <nova:port uuid="85b213ac-7186-4120-8f8e-043293c9de8b">
Feb 28 10:41:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <system>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="serial">af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="uuid">af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </system>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <os>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </os>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <features>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </features>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk">
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config">
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:41:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:4e:23:13"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <target dev="tap85b213ac-71"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/console.log" append="off"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <video>
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </video>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:41:52 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:41:52 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:41:52 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:41:52 compute-0 nova_compute[243452]: </domain>
Feb 28 10:41:52 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.882 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Preparing to wait for external event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.883 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.virt.libvirt.vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:41:48Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.884 243456 DEBUG nova.network.os_vif_util [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG os_vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.885 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.886 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b213ac-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.888 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85b213ac-71, col_values=(('external_ids', {'iface-id': '85b213ac-7186-4120-8f8e-043293c9de8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:23:13', 'vm-uuid': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 NetworkManager[49805]: <info>  [1772275312.8909] manager: (tap85b213ac-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.895 243456 INFO os_vif [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71')
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.947 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.948 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.948 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] No VIF found with MAC fa:16:3e:4e:23:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.949 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Using config drive
Feb 28 10:41:52 compute-0 nova_compute[243452]: 2026-02-28 10:41:52.977 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 10:41:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2004734532' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.345 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.346 243456 DEBUG nova.network.neutron [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.367 243456 DEBUG oslo_concurrency.lockutils [req-26c9a47f-4f76-4d7b-8249-97b46debc728 req-b2381116-ace6-45a0-98bc-d1bc85574975 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.368 243456 DEBUG nova.network.neutron [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.390 243456 INFO nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Took 1.05 seconds to deallocate network for instance.
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.417 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.417 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing instance network info cache due to event network-changed-5952cc57-b25c-40a2-b208-47e2104b88ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.418 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.440 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.441 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.481 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Creating config drive at /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.486 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprao_i94p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.540 243456 DEBUG oslo_concurrency.processutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.629 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprao_i94p" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.665 243456 DEBUG nova.storage.rbd_utils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] rbd image af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.671 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.822 243456 DEBUG oslo_concurrency.processutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.823 243456 INFO nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deleting local config drive /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9/disk.config because it was imported into RBD.
Feb 28 10:41:53 compute-0 kernel: tap85b213ac-71: entered promiscuous mode
Feb 28 10:41:53 compute-0 systemd-udevd[372795]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:41:53 compute-0 NetworkManager[49805]: <info>  [1772275313.8796] manager: (tap85b213ac-71): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Feb 28 10:41:53 compute-0 ovn_controller[146846]: 2026-02-28T10:41:53Z|01523|binding|INFO|Claiming lport 85b213ac-7186-4120-8f8e-043293c9de8b for this chassis.
Feb 28 10:41:53 compute-0 ovn_controller[146846]: 2026-02-28T10:41:53Z|01524|binding|INFO|85b213ac-7186-4120-8f8e-043293c9de8b: Claiming fa:16:3e:4e:23:13 10.100.0.4
Feb 28 10:41:53 compute-0 NetworkManager[49805]: <info>  [1772275313.9112] device (tap85b213ac-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:53 compute-0 NetworkManager[49805]: <info>  [1772275313.9119] device (tap85b213ac-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.916 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:23:13 10.100.0.4'], port_security=['fa:16:3e:4e:23:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db7c5780-813a-4d88-b76c-0ab09180b373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ca6d69-e30b-431e-a08f-47f084fea79e, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=85b213ac-7186-4120-8f8e-043293c9de8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.917 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 85b213ac-7186-4120-8f8e-043293c9de8b in datapath 60f40e8c-30be-4a73-8b0b-ca447dd19ffc bound to our chassis
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.918 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60f40e8c-30be-4a73-8b0b-ca447dd19ffc
Feb 28 10:41:53 compute-0 ovn_controller[146846]: 2026-02-28T10:41:53Z|01525|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b up in Southbound
Feb 28 10:41:53 compute-0 ovn_controller[146846]: 2026-02-28T10:41:53Z|01526|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b ovn-installed in OVS
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9f719369-bc24-439e-922e-e24de8e024f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.932 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60f40e8c-31 in ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.933 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG oslo_concurrency.lockutils [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 WARNING nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-6914940c-920a-4dc2-982a-9ae63584aee2 for instance with vm_state deleted and task_state None.
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-deleted-6914940c-920a-4dc2-982a-9ae63584aee2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.934 243456 DEBUG nova.compute.manager [req-f4d9ba21-e49b-42b4-8a71-03a598ece368 req-ca72a1ad-22a2-4149-84ee-01883ef79ad5 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-deleted-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:53 compute-0 nova_compute[243452]: 2026-02-28 10:41:53.935 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.935 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60f40e8c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.936 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[85bdc040-dbac-4c2c-89a9-9e482e91f59b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.937 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e18a0bd-74ea-4590-a318-0e3ba2c8c3ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:53 compute-0 systemd-machined[209480]: New machine qemu-178-instance-00000091.
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.952 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd383d9-6643-4855-9e02-af4e51723ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:53 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000091.
Feb 28 10:41:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:53.975 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1d87fa-d42e-4cea-9a88-513459f990fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.003 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[250b9f17-0a50-423a-a9b8-15389de3f4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.009 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0304de0f-2568-42dd-9f73-a0a055dff1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 NetworkManager[49805]: <info>  [1772275314.0110] manager: (tap60f40e8c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/633)
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.043 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0b8afb-1422-46b8-a182-ffd9c9247e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.046 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[40eed87a-9cc6-4b64-9354-30e0d0e295a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 NetworkManager[49805]: <info>  [1772275314.0651] device (tap60f40e8c-30): carrier: link connected
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.067 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d125d1-2fd6-490c-bc44-77dc40482f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.080 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbd35ac-8498-4934-ab04-703c68d3f00b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f40e8c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:4d:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 452], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682135, 'reachable_time': 29337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373147, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[0134b980-11dc-4478-aa47-15a72a4b3215]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:4df4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682135, 'tstamp': 682135}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373148, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.115 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a750fbf7-777c-45c8-a2a3-2eec1e370b27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60f40e8c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:4d:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 452], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682135, 'reachable_time': 29337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373149, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.140 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8dec70f5-d9bf-44c2-8973-ab9cb21db199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.169 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "5952cc57-b25c-40a2-b208-47e2104b88ad", "address": "fa:16:3e:49:9b:bb", "network": {"id": "b0ad6cbe-89fe-42da-a806-cec0a8b2c3c9", "bridge": "br-int", "label": "tempest-network-smoke--777733509", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5952cc57-b2", "ovs_interfaceid": "5952cc57-b25c-40a2-b208-47e2104b88ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:54 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822945097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.190 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.190 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.191 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.191 243456 DEBUG nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Refreshing network info cache for port 5952cc57-b25c-40a2-b208-47e2104b88ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.214 243456 DEBUG oslo_concurrency.processutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.214 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71391e2b-a1fc-4141-a9d0-b020ca85b6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.217 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f40e8c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:54 compute-0 ceph-mon[76304]: pgmap v2326: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 28 10:41:54 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/822945097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.217 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.219 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60f40e8c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.222 243456 DEBUG nova.compute.provider_tree [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:54 compute-0 NetworkManager[49805]: <info>  [1772275314.2235] manager: (tap60f40e8c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.224 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 kernel: tap60f40e8c-30: entered promiscuous mode
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.227 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.232 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60f40e8c-30, col_values=(('external_ids', {'iface-id': 'ee36debe-a46a-457c-a873-10b09fca4736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.234 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 ovn_controller[146846]: 2026-02-28T10:41:54Z|01527|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.235 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.236 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.237 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[968a3e9d-cd51-48ab-8ed5-f9e079b24ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.238 243456 DEBUG nova.scheduler.client.report [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.238 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-60f40e8c-30be-4a73-8b0b-ca447dd19ffc
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.pid.haproxy
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 60f40e8c-30be-4a73-8b0b-ca447dd19ffc
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:41:54 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:54.239 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'env', 'PROCESS_TAG=haproxy-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60f40e8c-30be-4a73-8b0b-ca447dd19ffc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.242 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.259 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.287 243456 INFO nova.scheduler.client.report [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 9b081668-1653-448a-957e-da1ead7ecd21
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.326 243456 INFO nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Port 5952cc57-b25c-40a2-b208-47e2104b88ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.326 243456 DEBUG nova.network.neutron [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Updating instance_info_cache with network_info: [{"id": "6914940c-920a-4dc2-982a-9ae63584aee2", "address": "fa:16:3e:fb:c8:97", "network": {"id": "caa8646e-5c97-4eb8-add7-69ea9ee54379", "bridge": "br-int", "label": "tempest-network-smoke--717716780", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefb:c897", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6914940c-92", "ovs_interfaceid": "6914940c-920a-4dc2-982a-9ae63584aee2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.344 243456 DEBUG oslo_concurrency.lockutils [None req-2a907006-8c45-4c26-8650-2f43e3b5965a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.345 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9b081668-1653-448a-957e-da1ead7ecd21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.346 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-unplugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.347 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9b081668-1653-448a-957e-da1ead7ecd21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG oslo_concurrency.lockutils [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9b081668-1653-448a-957e-da1ead7ecd21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.348 243456 DEBUG nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] No waiting events found dispatching network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.349 243456 WARNING nova.compute.manager [req-1b81036a-cb6e-41aa-967c-12051ca617b5 req-84272ac3-d0f5-482c-a7c3-f5f49629ccb8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Received unexpected event network-vif-plugged-5952cc57-b25c-40a2-b208-47e2104b88ad for instance with vm_state active and task_state deleting.
Feb 28 10:41:54 compute-0 podman[373197]: 2026-02-28 10:41:54.621160606 +0000 UTC m=+0.058161657 container create 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:41:54 compute-0 systemd[1]: Started libpod-conmon-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope.
Feb 28 10:41:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:41:54 compute-0 podman[373197]: 2026-02-28 10:41:54.593094452 +0000 UTC m=+0.030095553 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c688fbac57555e23608447219d1d8709e3953d040412dffacb23a65f86321c8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:41:54 compute-0 podman[373197]: 2026-02-28 10:41:54.706210785 +0000 UTC m=+0.143211866 container init 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:41:54 compute-0 podman[373197]: 2026-02-28 10:41:54.715976401 +0000 UTC m=+0.152977452 container start 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.724 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275314.7238557, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.725 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Started (Lifecycle Event)
Feb 28 10:41:54 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : New worker (373246) forked
Feb 28 10:41:54 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : Loading success.
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.751 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.757 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275314.7251217, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.757 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Paused (Lifecycle Event)
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.766 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.776 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.780 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:41:54 compute-0 nova_compute[243452]: 2026-02-28 10:41:54.799 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:41:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 208 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:41:55 compute-0 nova_compute[243452]: 2026-02-28 10:41:55.354 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939127106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.006 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.007 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.008 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.008 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.009 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Processing event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.009 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.010 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.010 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.011 243456 DEBUG oslo_concurrency.lockutils [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.011 243456 DEBUG nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.012 243456 WARNING nova.compute.manager [req-b8bc75a3-ce99-4544-8003-1243fb91c66b req-f2653cee-9f18-440b-841c-9725258d567e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received unexpected event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with vm_state building and task_state spawning.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.014 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.027 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.030 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275316.0285456, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.031 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Resumed (Lifecycle Event)
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.035 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.045 243456 INFO nova.virt.libvirt.driver [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance spawned successfully.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.047 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.057 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.069 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.085 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.086 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.086 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.087 243456 DEBUG nova.virt.libvirt.driver [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.092 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.148 243456 INFO nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 7.78 seconds to spawn the instance on the hypervisor.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.149 243456 DEBUG nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.160 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.160 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:41:56 compute-0 ceph-mon[76304]: pgmap v2327: 305 pgs: 305 active+clean; 208 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Feb 28 10:41:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3939127106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.245 243456 INFO nova.compute.manager [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 8.93 seconds to build instance.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.262 243456 DEBUG oslo_concurrency.lockutils [None req-115ee05e-0d47-4d11-8f75-6cdf33f463f5 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.339 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3495MB free_disk=59.96363378781825GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.406 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.407 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.408 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:41:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:56.412 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:41:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:56.413 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.464 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:56 compute-0 nova_compute[243452]: 2026-02-28 10:41:56.478 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:41:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:41:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290257330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.128 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.137 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.156 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:41:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3290257330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.467 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.467 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.880 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:41:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.881 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:41:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:41:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:41:57 compute-0 nova_compute[243452]: 2026-02-28 10:41:57.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:58 compute-0 ceph-mon[76304]: pgmap v2328: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 10:41:58 compute-0 ovn_controller[146846]: 2026-02-28T10:41:58Z|01528|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:41:58 compute-0 nova_compute[243452]: 2026-02-28 10:41:58.292 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:58 compute-0 ovn_controller[146846]: 2026-02-28T10:41:58Z|01529|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:41:58 compute-0 nova_compute[243452]: 2026-02-28 10:41:58.328 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:41:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Feb 28 10:41:59 compute-0 nova_compute[243452]: 2026-02-28 10:41:59.765 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:59 compute-0 ovn_controller[146846]: 2026-02-28T10:41:59Z|01530|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:41:59 compute-0 nova_compute[243452]: 2026-02-28 10:41:59.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:59 compute-0 NetworkManager[49805]: <info>  [1772275319.9581] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Feb 28 10:41:59 compute-0 NetworkManager[49805]: <info>  [1772275319.9587] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Feb 28 10:41:59 compute-0 ovn_controller[146846]: 2026-02-28T10:41:59Z|01531|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:41:59 compute-0 nova_compute[243452]: 2026-02-28 10:41:59.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:41:59 compute-0 nova_compute[243452]: 2026-02-28 10:41:59.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:00 compute-0 ceph-mon[76304]: pgmap v2329: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.405 243456 DEBUG nova.compute.manager [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.406 243456 DEBUG nova.compute.manager [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.407 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.407 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.408 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.610 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275305.6076045, 47d618f6-612e-4944-8a4d-a3509d6e3d35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.611 243456 INFO nova.compute.manager [-] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] VM Stopped (Lifecycle Event)
Feb 28 10:42:00 compute-0 nova_compute[243452]: 2026-02-28 10:42:00.638 243456 DEBUG nova.compute.manager [None req-f8c07176-b44e-4962-9935-03572d873649 - - - - - -] [instance: 47d618f6-612e-4944-8a4d-a3509d6e3d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 28 10:42:02 compute-0 ceph-mon[76304]: pgmap v2330: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 28 10:42:02 compute-0 nova_compute[243452]: 2026-02-28 10:42:02.298 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:42:02 compute-0 nova_compute[243452]: 2026-02-28 10:42:02.299 243456 DEBUG nova.network.neutron [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:02 compute-0 nova_compute[243452]: 2026-02-28 10:42:02.323 243456 DEBUG oslo_concurrency.lockutils [req-5aa90817-05e1-4542-aa87-38f2568da283 req-d678152a-d1fe-4b0f-92fc-8b70588eb468 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:02 compute-0 nova_compute[243452]: 2026-02-28 10:42:02.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 104 op/s
Feb 28 10:42:03 compute-0 nova_compute[243452]: 2026-02-28 10:42:03.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:03.415 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:04 compute-0 ceph-mon[76304]: pgmap v2331: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 104 op/s
Feb 28 10:42:04 compute-0 nova_compute[243452]: 2026-02-28 10:42:04.767 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Feb 28 10:42:06 compute-0 ceph-mon[76304]: pgmap v2332: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Feb 28 10:42:06 compute-0 nova_compute[243452]: 2026-02-28 10:42:06.708 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275311.7067533, 9b081668-1653-448a-957e-da1ead7ecd21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:06 compute-0 nova_compute[243452]: 2026-02-28 10:42:06.708 243456 INFO nova.compute.manager [-] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] VM Stopped (Lifecycle Event)
Feb 28 10:42:06 compute-0 nova_compute[243452]: 2026-02-28 10:42:06.725 243456 DEBUG nova.compute.manager [None req-37e45e02-7a29-4477-8cdd-bfc416ffca6f - - - - - -] [instance: 9b081668-1653-448a-957e-da1ead7ecd21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Feb 28 10:42:07 compute-0 nova_compute[243452]: 2026-02-28 10:42:07.894 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:08 compute-0 ceph-mon[76304]: pgmap v2333: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Feb 28 10:42:08 compute-0 nova_compute[243452]: 2026-02-28 10:42:08.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:08 compute-0 ovn_controller[146846]: 2026-02-28T10:42:08Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:23:13 10.100.0.4
Feb 28 10:42:08 compute-0 ovn_controller[146846]: 2026-02-28T10:42:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:23:13 10.100.0.4
Feb 28 10:42:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 212 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 84 op/s
Feb 28 10:42:09 compute-0 nova_compute[243452]: 2026-02-28 10:42:09.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:10 compute-0 nova_compute[243452]: 2026-02-28 10:42:10.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:10 compute-0 nova_compute[243452]: 2026-02-28 10:42:10.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:42:10 compute-0 nova_compute[243452]: 2026-02-28 10:42:10.353 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:42:10 compute-0 ceph-mon[76304]: pgmap v2334: 305 pgs: 305 active+clean; 212 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 84 op/s
Feb 28 10:42:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Feb 28 10:42:11 compute-0 nova_compute[243452]: 2026-02-28 10:42:11.657 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:12 compute-0 ceph-mon[76304]: pgmap v2335: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Feb 28 10:42:12 compute-0 nova_compute[243452]: 2026-02-28 10:42:12.896 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:42:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.240 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8::f816:3eff:feb9:796c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9495031a-3350-4b5d-b9e3-f7a6b929d37e) old=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:42:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.241 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9495031a-3350-4b5d-b9e3-f7a6b929d37e in datapath f5ccb81b-dba1-47db-8a77-320af312ccad updated
Feb 28 10:42:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.241 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:42:13 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:13.243 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b418ac21-03cc-4373-a0f9-89c1b0b76218]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:13 compute-0 sudo[373302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:42:13 compute-0 sudo[373302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:13 compute-0 sudo[373302]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:13 compute-0 sudo[373327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:42:13 compute-0 sudo[373327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.271 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:14 compute-0 sudo[373327]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.305 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Triggering sync for uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.306 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.306 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:42:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: pgmap v2336: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:42:14 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:42:14 compute-0 sudo[373383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:42:14 compute-0 sudo[373383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:14 compute-0 sudo[373383]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:14 compute-0 sudo[373408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:42:14 compute-0 sudo[373408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.740012319 +0000 UTC m=+0.052781056 container create af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Feb 28 10:42:14 compute-0 nova_compute[243452]: 2026-02-28 10:42:14.772 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:14 compute-0 systemd[1]: Started libpod-conmon-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope.
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.7089978 +0000 UTC m=+0.021766587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.826483707 +0000 UTC m=+0.139252434 container init af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.833481995 +0000 UTC m=+0.146250702 container start af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.83964049 +0000 UTC m=+0.152409207 container attach af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:42:14 compute-0 romantic_tesla[373463]: 167 167
Feb 28 10:42:14 compute-0 systemd[1]: libpod-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope: Deactivated successfully.
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.842037887 +0000 UTC m=+0.154806634 container died af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2f2a2fa0801f4f761dbe82efeec91e0ecc0c2dd8b5e6dd24c9bd0c45e0b180-merged.mount: Deactivated successfully.
Feb 28 10:42:14 compute-0 podman[373446]: 2026-02-28 10:42:14.898255909 +0000 UTC m=+0.211024636 container remove af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:42:14 compute-0 systemd[1]: libpod-conmon-af21d4ce1c08e0f7bd13b579f133718a5afdd01e64c7e3268cac74c066d3fe7c.scope: Deactivated successfully.
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.045033926 +0000 UTC m=+0.049870474 container create bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:42:15 compute-0 systemd[1]: Started libpod-conmon-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope.
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.022344903 +0000 UTC m=+0.027181451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.149252977 +0000 UTC m=+0.154089515 container init bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.157784178 +0000 UTC m=+0.162620726 container start bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.167237866 +0000 UTC m=+0.172074424 container attach bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 10:42:15 compute-0 nova_compute[243452]: 2026-02-28 10:42:15.279 243456 INFO nova.compute.manager [None req-34ba6e3e-c34d-4060-8835-20e824cc17ea ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output
Feb 28 10:42:15 compute-0 nova_compute[243452]: 2026-02-28 10:42:15.290 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:42:15 compute-0 kind_mahavira[373503]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:42:15 compute-0 kind_mahavira[373503]: --> All data devices are unavailable
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.673809119 +0000 UTC m=+0.678645657 container died bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:42:15 compute-0 systemd[1]: libpod-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope: Deactivated successfully.
Feb 28 10:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac84d274873620fc6bbd835e9a1f5f30a06d21eedbef55a4135e82f1e7f7f3a8-merged.mount: Deactivated successfully.
Feb 28 10:42:15 compute-0 podman[373486]: 2026-02-28 10:42:15.856782871 +0000 UTC m=+0.861619429 container remove bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:42:15 compute-0 systemd[1]: libpod-conmon-bfb36c32e81275922defbb12b81db4b640c1ab513912cf04d43b926e321cc9e8.scope: Deactivated successfully.
Feb 28 10:42:15 compute-0 ovn_controller[146846]: 2026-02-28T10:42:15Z|01532|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:42:15 compute-0 nova_compute[243452]: 2026-02-28 10:42:15.900 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:15 compute-0 sudo[373408]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:15 compute-0 ovn_controller[146846]: 2026-02-28T10:42:15Z|01533|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:42:15 compute-0 nova_compute[243452]: 2026-02-28 10:42:15.935 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:15 compute-0 sudo[373536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:42:15 compute-0 sudo[373536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:15 compute-0 sudo[373536]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:16 compute-0 sudo[373561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:42:16 compute-0 sudo[373561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.352955411 +0000 UTC m=+0.056335177 container create f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:42:16 compute-0 systemd[1]: Started libpod-conmon-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope.
Feb 28 10:42:16 compute-0 ceph-mon[76304]: pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.333586592 +0000 UTC m=+0.036966318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.437596957 +0000 UTC m=+0.140976743 container init f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.447358834 +0000 UTC m=+0.150738560 container start f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.451369067 +0000 UTC m=+0.154748893 container attach f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:42:16 compute-0 zealous_haibt[373614]: 167 167
Feb 28 10:42:16 compute-0 systemd[1]: libpod-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope: Deactivated successfully.
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.455928806 +0000 UTC m=+0.159308542 container died f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 10:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f87d9347988b2ebb9fb058487d99c817e7f2dae02f56d70f2c2a08ff7cfce02-merged.mount: Deactivated successfully.
Feb 28 10:42:16 compute-0 podman[373598]: 2026-02-28 10:42:16.492861562 +0000 UTC m=+0.196241298 container remove f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:42:16 compute-0 systemd[1]: libpod-conmon-f27c60e4f650a5a751e6627bf09a44d14e91da14a07267fa43928cf6695ca350.scope: Deactivated successfully.
Feb 28 10:42:16 compute-0 podman[373640]: 2026-02-28 10:42:16.70043259 +0000 UTC m=+0.064815077 container create e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:42:16 compute-0 systemd[1]: Started libpod-conmon-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope.
Feb 28 10:42:16 compute-0 podman[373640]: 2026-02-28 10:42:16.67395539 +0000 UTC m=+0.038337937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:16 compute-0 podman[373640]: 2026-02-28 10:42:16.800700499 +0000 UTC m=+0.165083006 container init e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:42:16 compute-0 podman[373640]: 2026-02-28 10:42:16.8074661 +0000 UTC m=+0.171848547 container start e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:42:16 compute-0 podman[373640]: 2026-02-28 10:42:16.810917198 +0000 UTC m=+0.175299645 container attach e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:42:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.949 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8:0:1:f816:3eff:feb9:796c 2001:db8::f816:3eff:feb9:796c'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:feb9:796c/64 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9495031a-3350-4b5d-b9e3-f7a6b929d37e) old=Port_Binding(mac=['fa:16:3e:b9:79:6c 10.100.0.2 2001:db8::f816:3eff:feb9:796c'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feb9:796c/64', 'neutron:device_id': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:42:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.951 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9495031a-3350-4b5d-b9e3-f7a6b929d37e in datapath f5ccb81b-dba1-47db-8a77-320af312ccad updated
Feb 28 10:42:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.954 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:42:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:16.955 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71ff1383-5103-4cf1-bf5d-5aa7c4e0ca39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]: {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     "0": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "devices": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "/dev/loop3"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             ],
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_name": "ceph_lv0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_size": "21470642176",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "name": "ceph_lv0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "tags": {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_name": "ceph",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.crush_device_class": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.encrypted": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.objectstore": "bluestore",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_id": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.vdo": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.with_tpm": "0"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             },
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "vg_name": "ceph_vg0"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         }
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     ],
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     "1": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "devices": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "/dev/loop4"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             ],
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_name": "ceph_lv1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_size": "21470642176",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "name": "ceph_lv1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "tags": {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_name": "ceph",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.crush_device_class": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.encrypted": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.objectstore": "bluestore",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_id": "1",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.vdo": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.with_tpm": "0"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             },
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "vg_name": "ceph_vg1"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         }
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     ],
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     "2": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "devices": [
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "/dev/loop5"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             ],
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_name": "ceph_lv2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_size": "21470642176",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "name": "ceph_lv2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "tags": {
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.cluster_name": "ceph",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.crush_device_class": "",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.encrypted": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.objectstore": "bluestore",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osd_id": "2",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.vdo": "0",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:                 "ceph.with_tpm": "0"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             },
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "type": "block",
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:             "vg_name": "ceph_vg2"
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:         }
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]:     ]
Feb 28 10:42:17 compute-0 thirsty_margulis[373657]: }
Feb 28 10:42:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:17 compute-0 systemd[1]: libpod-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope: Deactivated successfully.
Feb 28 10:42:17 compute-0 podman[373640]: 2026-02-28 10:42:17.149006272 +0000 UTC m=+0.513388749 container died e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f8ae253f2e2685ca3a1c07b57a73edde5e07ce8bca64a7fc3eacb73b26c53ab-merged.mount: Deactivated successfully.
Feb 28 10:42:17 compute-0 podman[373640]: 2026-02-28 10:42:17.191019381 +0000 UTC m=+0.555401848 container remove e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_margulis, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:42:17 compute-0 systemd[1]: libpod-conmon-e106d1cd1c2fa406c17e22790699ef40446b41da860e6ae19adcd97133a2f967.scope: Deactivated successfully.
Feb 28 10:42:17 compute-0 sudo[373561]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:17 compute-0 sudo[373679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:42:17 compute-0 sudo[373679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:17 compute-0 sudo[373679]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:17 compute-0 nova_compute[243452]: 2026-02-28 10:42:17.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:17 compute-0 nova_compute[243452]: 2026-02-28 10:42:17.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:42:17 compute-0 sudo[373704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:42:17 compute-0 sudo[373704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:17 compute-0 nova_compute[243452]: 2026-02-28 10:42:17.493 243456 INFO nova.compute.manager [None req-c3a0c0e0-7076-495b-9581-5ecc53f70879 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output
Feb 28 10:42:17 compute-0 nova_compute[243452]: 2026-02-28 10:42:17.501 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.603174411 +0000 UTC m=+0.038872042 container create c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:42:17 compute-0 systemd[1]: Started libpod-conmon-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope.
Feb 28 10:42:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.584976485 +0000 UTC m=+0.020674096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.690964387 +0000 UTC m=+0.126661998 container init c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.698408597 +0000 UTC m=+0.134106178 container start c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.701548606 +0000 UTC m=+0.137246197 container attach c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:42:17 compute-0 affectionate_feynman[373758]: 167 167
Feb 28 10:42:17 compute-0 systemd[1]: libpod-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope: Deactivated successfully.
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.704359706 +0000 UTC m=+0.140057297 container died c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:42:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcf10f1421ef0613d30ecef5c72d0c7c49c23a9e6845ae49346d7ca04029a0f8-merged.mount: Deactivated successfully.
Feb 28 10:42:17 compute-0 podman[373742]: 2026-02-28 10:42:17.739555562 +0000 UTC m=+0.175253153 container remove c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:42:17 compute-0 systemd[1]: libpod-conmon-c1474c2e702e18a1842e0f60818e45f44a49b7acd405336bf8ba7854c6b32cfb.scope: Deactivated successfully.
Feb 28 10:42:17 compute-0 podman[373782]: 2026-02-28 10:42:17.8768489 +0000 UTC m=+0.032401798 container create ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:42:17 compute-0 nova_compute[243452]: 2026-02-28 10:42:17.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:17 compute-0 systemd[1]: Started libpod-conmon-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope.
Feb 28 10:42:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:17 compute-0 podman[373782]: 2026-02-28 10:42:17.954872929 +0000 UTC m=+0.110425897 container init ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:42:17 compute-0 podman[373782]: 2026-02-28 10:42:17.863393919 +0000 UTC m=+0.018946837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:42:17 compute-0 podman[373782]: 2026-02-28 10:42:17.963271047 +0000 UTC m=+0.118823945 container start ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:42:17 compute-0 podman[373782]: 2026-02-28 10:42:17.966754616 +0000 UTC m=+0.122307504 container attach ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:42:18 compute-0 ceph-mon[76304]: pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:18 compute-0 lvm[373875]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:42:18 compute-0 lvm[373875]: VG ceph_vg0 finished
Feb 28 10:42:18 compute-0 lvm[373878]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:42:18 compute-0 lvm[373878]: VG ceph_vg1 finished
Feb 28 10:42:18 compute-0 lvm[373880]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:42:18 compute-0 lvm[373880]: VG ceph_vg2 finished
Feb 28 10:42:18 compute-0 dreamy_keller[373798]: {}
Feb 28 10:42:18 compute-0 systemd[1]: libpod-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Deactivated successfully.
Feb 28 10:42:18 compute-0 systemd[1]: libpod-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Consumed 1.286s CPU time.
Feb 28 10:42:18 compute-0 podman[373782]: 2026-02-28 10:42:18.834353673 +0000 UTC m=+0.989906601 container died ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a25a6bc487fe33fe49a89de3da07ca18af70d399c9a6c328f1cc1addb6cef4ba-merged.mount: Deactivated successfully.
Feb 28 10:42:18 compute-0 podman[373782]: 2026-02-28 10:42:18.885480091 +0000 UTC m=+1.041032989 container remove ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:42:18 compute-0 systemd[1]: libpod-conmon-ad53725caa104f9477b44026bd877ffeea0899690282a8c4ecd856fffba052e1.scope: Deactivated successfully.
Feb 28 10:42:18 compute-0 sudo[373704]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:42:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:42:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:18 compute-0 NetworkManager[49805]: <info>  [1772275338.9924] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Feb 28 10:42:18 compute-0 NetworkManager[49805]: <info>  [1772275338.9938] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Feb 28 10:42:18 compute-0 nova_compute[243452]: 2026-02-28 10:42:18.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:19 compute-0 sudo[373895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:42:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:19 compute-0 sudo[373895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:42:19 compute-0 sudo[373895]: pam_unix(sudo:session): session closed for user root
Feb 28 10:42:19 compute-0 nova_compute[243452]: 2026-02-28 10:42:19.042 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:19 compute-0 ovn_controller[146846]: 2026-02-28T10:42:19Z|01534|binding|INFO|Releasing lport ee36debe-a46a-457c-a873-10b09fca4736 from this chassis (sb_readonly=0)
Feb 28 10:42:19 compute-0 nova_compute[243452]: 2026-02-28 10:42:19.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:19 compute-0 nova_compute[243452]: 2026-02-28 10:42:19.386 243456 INFO nova.compute.manager [None req-9396e5df-4ec8-4712-ac22-e7799c041980 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Get console output
Feb 28 10:42:19 compute-0 nova_compute[243452]: 2026-02-28 10:42:19.395 296900 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 28 10:42:19 compute-0 nova_compute[243452]: 2026-02-28 10:42:19.774 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:42:19 compute-0 ceph-mon[76304]: pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.106 243456 DEBUG nova.compute.manager [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.107 243456 DEBUG nova.compute.manager [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing instance network info cache due to event network-changed-85b213ac-7186-4120-8f8e-043293c9de8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.107 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.108 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.108 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Refreshing network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:42:20 compute-0 podman[373923]: 2026-02-28 10:42:20.147124956 +0000 UTC m=+0.084523584 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 10:42:20 compute-0 podman[373924]: 2026-02-28 10:42:20.147560038 +0000 UTC m=+0.084704469 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.170 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.171 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.173 243456 INFO nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Terminating instance
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.174 243456 DEBUG nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:42:20 compute-0 kernel: tap85b213ac-71 (unregistering): left promiscuous mode
Feb 28 10:42:20 compute-0 NetworkManager[49805]: <info>  [1772275340.2291] device (tap85b213ac-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:42:20 compute-0 ovn_controller[146846]: 2026-02-28T10:42:20Z|01535|binding|INFO|Releasing lport 85b213ac-7186-4120-8f8e-043293c9de8b from this chassis (sb_readonly=0)
Feb 28 10:42:20 compute-0 ovn_controller[146846]: 2026-02-28T10:42:20Z|01536|binding|INFO|Setting lport 85b213ac-7186-4120-8f8e-043293c9de8b down in Southbound
Feb 28 10:42:20 compute-0 ovn_controller[146846]: 2026-02-28T10:42:20Z|01537|binding|INFO|Removing iface tap85b213ac-71 ovn-installed in OVS
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.238 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.245 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:23:13 10.100.0.4'], port_security=['fa:16:3e:4e:23:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2fef3ce64b984ecab0903a053eab58f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db7c5780-813a-4d88-b76c-0ab09180b373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ca6d69-e30b-431e-a08f-47f084fea79e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=85b213ac-7186-4120-8f8e-043293c9de8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.246 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 85b213ac-7186-4120-8f8e-043293c9de8b in datapath 60f40e8c-30be-4a73-8b0b-ca447dd19ffc unbound from our chassis
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.247 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60f40e8c-30be-4a73-8b0b-ca447dd19ffc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.248 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4f065283-c658-4b46-a024-386fcfbb852c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.249 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc namespace which is not needed anymore
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.252 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Deactivated successfully.
Feb 28 10:42:20 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000091.scope: Consumed 12.738s CPU time.
Feb 28 10:42:20 compute-0 systemd-machined[209480]: Machine qemu-178-instance-00000091 terminated.
Feb 28 10:42:20 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : haproxy version is 2.8.14-c23fe91
Feb 28 10:42:20 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [NOTICE]   (373244) : path to executable is /usr/sbin/haproxy
Feb 28 10:42:20 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [WARNING]  (373244) : Exiting Master process...
Feb 28 10:42:20 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [ALERT]    (373244) : Current worker (373246) exited with code 143 (Terminated)
Feb 28 10:42:20 compute-0 neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc[373239]: [WARNING]  (373244) : All workers exited. Exiting... (0)
Feb 28 10:42:20 compute-0 systemd[1]: libpod-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope: Deactivated successfully.
Feb 28 10:42:20 compute-0 podman[373988]: 2026-02-28 10:42:20.40014528 +0000 UTC m=+0.053213467 container died 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.420 243456 INFO nova.virt.libvirt.driver [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Instance destroyed successfully.
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.421 243456 DEBUG nova.objects.instance [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lazy-loading 'resources' on Instance uuid af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.435 243456 DEBUG nova.virt.libvirt.vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1524442108',display_name='tempest-TestNetworkBasicOps-server-1524442108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1524442108',id=145,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG91PlskTqdY3f/2ETDaAtm5odqY3Pm73UeMDdBa5KYy9HWAlLs8njv3PvKqtqCgQvXgaXApoQMAIbbB474iNBXEqM7bw3A67rDTDaE0oBVKxg+86+w9ZDJ22AtUTnVnBg==',key_name='tempest-TestNetworkBasicOps-1470921471',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:41:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2fef3ce64b984ecab0903a053eab58f6',ramdisk_id='',reservation_id='r-lqpz0euc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-654042014',owner_user_name='tempest-TestNetworkBasicOps-654042014-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:41:56Z,user_data=None,user_id='ecebf54c6f3344489cf5e51d91d8ed5a',uuid=af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.436 243456 DEBUG nova.network.os_vif_util [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converting VIF {"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.438 243456 DEBUG nova.network.os_vif_util [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.439 243456 DEBUG os_vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.445 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b213ac-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.447 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.453 243456 INFO os_vif [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:23:13,bridge_name='br-int',has_traffic_filtering=True,id=85b213ac-7186-4120-8f8e-043293c9de8b,network=Network(60f40e8c-30be-4a73-8b0b-ca447dd19ffc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85b213ac-71')
Feb 28 10:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41-userdata-shm.mount: Deactivated successfully.
Feb 28 10:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c688fbac57555e23608447219d1d8709e3953d040412dffacb23a65f86321c8e-merged.mount: Deactivated successfully.
Feb 28 10:42:20 compute-0 podman[373988]: 2026-02-28 10:42:20.526968202 +0000 UTC m=+0.180036389 container cleanup 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 10:42:20 compute-0 systemd[1]: libpod-conmon-1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41.scope: Deactivated successfully.
Feb 28 10:42:20 compute-0 podman[374045]: 2026-02-28 10:42:20.700777293 +0000 UTC m=+0.141540749 container remove 1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.707 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[99ab1cd3-9d13-4234-98f6-a4ba24e8e121]: (4, ('Sat Feb 28 10:42:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc (1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41)\n1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41\nSat Feb 28 10:42:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc (1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41)\n1fd38b6f3ee0174499c2c3439652cb7fcefc9f5c498707ef7a8f6d39482a1a41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.709 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b407a6-d569-4ae3-924f-39c720ab91f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.711 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60f40e8c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:20 compute-0 kernel: tap60f40e8c-30: left promiscuous mode
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 nova_compute[243452]: 2026-02-28 10:42:20.722 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.725 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e905dd6f-f8f0-4f77-ac6d-83551830bf49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.745 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf37fb6-3ec7-4909-bed6-66f726df571e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e79a42d-1d45-487f-b763-27bce101199f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.764 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2c07f9-1ff7-4113-92ad-b07c5d4b5243]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682128, 'reachable_time': 43453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374061, 'error': None, 'target': 'ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d60f40e8c\x2d30be\x2d4a73\x2d8b0b\x2dca447dd19ffc.mount: Deactivated successfully.
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.768 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60f40e8c-30be-4a73-8b0b-ca447dd19ffc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:42:20 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:20.768 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[36d2e593-b127-45a8-b7fa-860763752eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.2 MiB/s wr, 47 op/s
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.140 243456 INFO nova.virt.libvirt.driver [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deleting instance files /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_del
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.142 243456 INFO nova.virt.libvirt.driver [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deletion of /var/lib/nova/instances/af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9_del complete
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.244 243456 INFO nova.compute.manager [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 1.07 seconds to destroy the instance on the hypervisor.
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG oslo.service.loopingcall [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.245 243456 DEBUG nova.network.neutron [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.889 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.890 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.891 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.891 243456 DEBUG oslo_concurrency.lockutils [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.892 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:42:21 compute-0 nova_compute[243452]: 2026-02-28 10:42:21.892 243456 DEBUG nova.compute.manager [req-65e1459a-4a90-4728-b739-32e94fee7a9d req-6eb47ef7-c2bc-43f9-a7d7-b6c2a524d943 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-unplugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:42:22 compute-0 sshd-session[373921]: Received disconnect from 103.67.78.202 port 35942:11: Bye Bye [preauth]
Feb 28 10:42:22 compute-0 sshd-session[373921]: Disconnected from authenticating user root 103.67.78.202 port 35942 [preauth]
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.220 243456 DEBUG nova.network.neutron [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.246 243456 INFO nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Took 1.00 seconds to deallocate network for instance.
Feb 28 10:42:22 compute-0 ceph-mon[76304]: pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.2 MiB/s wr, 47 op/s
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.286 243456 DEBUG nova.compute.manager [req-d6b7af44-825d-4d56-9096-504afd821702 req-afe9b870-23e9-45ba-b56a-8c5680eb5ce3 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-deleted-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.301 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.302 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.357 243456 DEBUG oslo_concurrency.processutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:42:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683179128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.895 243456 DEBUG oslo_concurrency.processutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.900 243456 DEBUG nova.compute.provider_tree [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.920 243456 DEBUG nova.scheduler.client.report [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.944 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:22 compute-0 nova_compute[243452]: 2026-02-28 10:42:22.971 243456 INFO nova.scheduler.client.report [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Deleted allocations for instance af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9
Feb 28 10:42:23 compute-0 nova_compute[243452]: 2026-02-28 10:42:23.050 243456 DEBUG oslo_concurrency.lockutils [None req-f43400cd-c452-4862-950a-e236a60dabe2 ecebf54c6f3344489cf5e51d91d8ed5a 2fef3ce64b984ecab0903a053eab58f6 - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:23 compute-0 nova_compute[243452]: 2026-02-28 10:42:23.110 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updated VIF entry in instance network info cache for port 85b213ac-7186-4120-8f8e-043293c9de8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:42:23 compute-0 nova_compute[243452]: 2026-02-28 10:42:23.111 243456 DEBUG nova.network.neutron [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Updating instance_info_cache with network_info: [{"id": "85b213ac-7186-4120-8f8e-043293c9de8b", "address": "fa:16:3e:4e:23:13", "network": {"id": "60f40e8c-30be-4a73-8b0b-ca447dd19ffc", "bridge": "br-int", "label": "tempest-network-smoke--1376393359", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2fef3ce64b984ecab0903a053eab58f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85b213ac-71", "ovs_interfaceid": "85b213ac-7186-4120-8f8e-043293c9de8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 6 op/s
Feb 28 10:42:23 compute-0 nova_compute[243452]: 2026-02-28 10:42:23.133 243456 DEBUG oslo_concurrency.lockutils [req-196f6aab-a658-41bc-9243-9bb9ed1e7166 req-2909678a-9316-4619-996e-53fd05788779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1683179128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.009 243456 DEBUG nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.010 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.011 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.011 243456 DEBUG oslo_concurrency.lockutils [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.012 243456 DEBUG nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] No waiting events found dispatching network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.012 243456 WARNING nova.compute.manager [req-c21d5ece-d492-4d09-823b-5f6e2c25e74f req-ed4e408b-4983-4a01-a9ac-5a7dfb9107e4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Received unexpected event network-vif-plugged-85b213ac-7186-4120-8f8e-043293c9de8b for instance with vm_state deleted and task_state None.
Feb 28 10:42:24 compute-0 ceph-mon[76304]: pgmap v2341: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 6 op/s
Feb 28 10:42:24 compute-0 sshd-session[374063]: Received disconnect from 103.67.78.202 port 57908:11: Bye Bye [preauth]
Feb 28 10:42:24 compute-0 sshd-session[374063]: Disconnected from authenticating user root 103.67.78.202 port 57908 [preauth]
Feb 28 10:42:24 compute-0 nova_compute[243452]: 2026-02-28 10:42:24.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 27 op/s
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.670 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.671 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.691 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.779 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.780 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.787 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.788 243456 INFO nova.compute.claims [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:42:25 compute-0 nova_compute[243452]: 2026-02-28 10:42:25.909 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:26 compute-0 ceph-mon[76304]: pgmap v2342: 305 pgs: 305 active+clean; 174 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 27 op/s
Feb 28 10:42:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:42:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227240430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.452 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.461 243456 DEBUG nova.compute.provider_tree [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.479 243456 DEBUG nova.scheduler.client.report [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.501 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.502 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.549 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.549 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.578 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.596 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.696 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.698 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.698 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating image(s)
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.725 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.754 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.779 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.783 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.817 243456 DEBUG nova.policy [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.860 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.860 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.861 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.861 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.882 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:26 compute-0 nova_compute[243452]: 2026-02-28 10:42:26.886 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9a93fbef-9a9c-4d32-b200-626428537bfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 28 op/s
Feb 28 10:42:27 compute-0 nova_compute[243452]: 2026-02-28 10:42:27.748 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Successfully created port: 5b4f91cf-9f52-4422-873f-f11cf0d49dde _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:42:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2227240430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:27 compute-0 nova_compute[243452]: 2026-02-28 10:42:27.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:27 compute-0 nova_compute[243452]: 2026-02-28 10:42:27.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:28 compute-0 nova_compute[243452]: 2026-02-28 10:42:28.326 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 9a93fbef-9a9c-4d32-b200-626428537bfa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:28 compute-0 nova_compute[243452]: 2026-02-28 10:42:28.422 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:42:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:29 compute-0 ceph-mon[76304]: pgmap v2343: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 28 op/s
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 98 KiB/s wr, 40 op/s
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.160 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Successfully updated port: 5b4f91cf-9f52-4422-873f-f11cf0d49dde _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.174 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.174 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.175 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:42:29
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr']
Feb 28 10:42:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.277 243456 DEBUG nova.objects.instance [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.286 243456 DEBUG nova.compute.manager [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.287 243456 DEBUG nova.compute.manager [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.287 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.306 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.306 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Ensure instance console log exists: /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.307 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.653 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:42:29 compute-0 nova_compute[243452]: 2026-02-28 10:42:29.779 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:30 compute-0 ceph-mon[76304]: pgmap v2344: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 98 KiB/s wr, 40 op/s
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:42:30 compute-0 nova_compute[243452]: 2026-02-28 10:42:30.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:42:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:42:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Feb 28 10:42:32 compute-0 ceph-mon[76304]: pgmap v2345: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Feb 28 10:42:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:42:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:34 compute-0 ceph-mon[76304]: pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 28 10:42:34 compute-0 nova_compute[243452]: 2026-02-28 10:42:34.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:35 compute-0 sshd-session[374276]: Received disconnect from 103.67.78.132 port 47860:11: Bye Bye [preauth]
Feb 28 10:42:35 compute-0 sshd-session[374276]: Disconnected from authenticating user root 103.67.78.132 port 47860 [preauth]
Feb 28 10:42:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Feb 28 10:42:35 compute-0 nova_compute[243452]: 2026-02-28 10:42:35.417 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275340.415453, af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:35 compute-0 nova_compute[243452]: 2026-02-28 10:42:35.417 243456 INFO nova.compute.manager [-] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] VM Stopped (Lifecycle Event)
Feb 28 10:42:35 compute-0 nova_compute[243452]: 2026-02-28 10:42:35.452 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:35 compute-0 nova_compute[243452]: 2026-02-28 10:42:35.535 243456 DEBUG nova.compute.manager [None req-2b7a500c-2e80-48f5-9ddf-ccf5d2c6917a - - - - - -] [instance: af23ea05-a0b2-493b-b0f8-4b7e2eb62fa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:36 compute-0 nova_compute[243452]: 2026-02-28 10:42:36.168 243456 DEBUG nova.network.neutron [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:36 compute-0 ceph-mon[76304]: pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Feb 28 10:42:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.351 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.352 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance network_info: |[{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.353 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.354 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.361 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start _get_guest_xml network_info=[{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.368 243456 WARNING nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.380 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.381 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.391 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.392 243456 DEBUG nova.virt.libvirt.host [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.392 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.393 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.394 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.395 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.395 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.396 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.396 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.397 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.397 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.398 243456 DEBUG nova.virt.hardware [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.403 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:42:37 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526933911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.941 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.966 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:37 compute-0 nova_compute[243452]: 2026-02-28 10:42:37.973 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:38 compute-0 ceph-mon[76304]: pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:42:38 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2526933911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:42:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:42:38 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358285007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.549 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.551 243456 DEBUG nova.virt.libvirt.vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:42:26Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.552 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.554 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.555 243456 DEBUG nova.objects.instance [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.631 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <uuid>9a93fbef-9a9c-4d32-b200-626428537bfa</uuid>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <name>instance-00000092</name>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-344642459</nova:name>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:42:37</nova:creationTime>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <nova:port uuid="5b4f91cf-9f52-4422-873f-f11cf0d49dde">
Feb 28 10:42:38 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe54:61bf" ipVersion="6"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe54:61bf" ipVersion="6"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <system>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="serial">9a93fbef-9a9c-4d32-b200-626428537bfa</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="uuid">9a93fbef-9a9c-4d32-b200-626428537bfa</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </system>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <os>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </os>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <features>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </features>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9a93fbef-9a9c-4d32-b200-626428537bfa_disk">
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </source>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config">
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </source>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:42:38 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:54:61:bf"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <target dev="tap5b4f91cf-9f"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/console.log" append="off"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <video>
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </video>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:42:38 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:42:38 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:42:38 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:42:38 compute-0 nova_compute[243452]: </domain>
Feb 28 10:42:38 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.633 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Preparing to wait for external event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.633 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.634 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.634 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.635 243456 DEBUG nova.virt.libvirt.vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:42:26Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.636 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.638 243456 DEBUG nova.network.os_vif_util [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.638 243456 DEBUG os_vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.639 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.640 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.641 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.645 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.645 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b4f91cf-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.646 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b4f91cf-9f, col_values=(('external_ids', {'iface-id': '5b4f91cf-9f52-4422-873f-f11cf0d49dde', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:61:bf', 'vm-uuid': '9a93fbef-9a9c-4d32-b200-626428537bfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:38 compute-0 NetworkManager[49805]: <info>  [1772275358.6780] manager: (tap5b4f91cf-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.685 243456 INFO os_vif [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f')
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.878 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.878 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.879 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:54:61:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.880 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Using config drive
Feb 28 10:42:38 compute-0 nova_compute[243452]: 2026-02-28 10:42:38.917 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:42:39 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2358285007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:42:39 compute-0 nova_compute[243452]: 2026-02-28 10:42:39.782 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:39 compute-0 nova_compute[243452]: 2026-02-28 10:42:39.898 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Creating config drive at /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config
Feb 28 10:42:39 compute-0 nova_compute[243452]: 2026-02-28 10:42:39.904 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmhackzgn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.063 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpmhackzgn" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.103 243456 DEBUG nova.storage.rbd_utils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.108 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:40 compute-0 ceph-mon[76304]: pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.270 243456 DEBUG oslo_concurrency.processutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config 9a93fbef-9a9c-4d32-b200-626428537bfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.271 243456 INFO nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deleting local config drive /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa/disk.config because it was imported into RBD.
Feb 28 10:42:40 compute-0 kernel: tap5b4f91cf-9f: entered promiscuous mode
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.3528] manager: (tap5b4f91cf-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Feb 28 10:42:40 compute-0 ovn_controller[146846]: 2026-02-28T10:42:40Z|01538|binding|INFO|Claiming lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde for this chassis.
Feb 28 10:42:40 compute-0 ovn_controller[146846]: 2026-02-28T10:42:40Z|01539|binding|INFO|5b4f91cf-9f52-4422-873f-f11cf0d49dde: Claiming fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.354 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.358 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.372 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], port_security=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe54:61bf/64 2001:db8::f816:3eff:fe54:61bf/64', 'neutron:device_id': '9a93fbef-9a9c-4d32-b200-626428537bfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5b4f91cf-9f52-4422-873f-f11cf0d49dde) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.374 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4f91cf-9f52-4422-873f-f11cf0d49dde in datapath f5ccb81b-dba1-47db-8a77-320af312ccad bound to our chassis
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.375 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 systemd-machined[209480]: New machine qemu-179-instance-00000092.
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.390 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6a42ee-8f4c-4b63-89de-5db7dfcf831f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.391 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5ccb81b-d1 in ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:42:40 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000092.
Feb 28 10:42:40 compute-0 ovn_controller[146846]: 2026-02-28T10:42:40Z|01540|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde ovn-installed in OVS
Feb 28 10:42:40 compute-0 ovn_controller[146846]: 2026-02-28T10:42:40Z|01541|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde up in Southbound
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.395 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.396 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5ccb81b-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.396 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[104941a9-01f1-4739-bb5c-87d090218bda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.397 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f785cf-d7ff-4954-87ae-c144459f844d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 systemd-udevd[374415]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.409 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[91f0d243-fcb6-492d-928c-017a5ac5482d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.4191] device (tap5b4f91cf-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.4214] device (tap5b4f91cf-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.440 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c688dc-ac83-4823-92c4-343f3316e827]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.476 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ae54af0d-5aec-4ad1-873c-a739990880c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 systemd-udevd[374420]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.4833] manager: (tapf5ccb81b-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/641)
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.483 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[178be08e-f6e6-4ab5-899a-eae623263f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.519 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b04910-c8a5-4256-927f-684227716512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.523 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ed73522b-7a4b-41ba-b6ca-9b12c705d3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.5526] device (tapf5ccb81b-d0): carrier: link connected
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.557 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9a157e12-176a-4aaf-9f41-60eae296116f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.573 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5b990e-d169-4d7a-b71e-ff6345e1de71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374476, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.585 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f1797e53-1a34-434a-860a-4a94faaa707c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:796c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686783, 'tstamp': 686783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374484, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.597 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cf806ede-e0a9-4ddd-93cf-4b341017fa18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374486, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5d44aa-a649-4c15-a241-c8078c269443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.667 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bdf3d1-619c-4553-96fb-a84737d6f006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.668 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.669 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.670 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 NetworkManager[49805]: <info>  [1772275360.6713] manager: (tapf5ccb81b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Feb 28 10:42:40 compute-0 kernel: tapf5ccb81b-d0: entered promiscuous mode
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.674 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 ovn_controller[146846]: 2026-02-28T10:42:40Z|01542|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.677 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.677 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bee7a3f0-5f57-455c-8299-4cbd0b67c3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.678 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/f5ccb81b-dba1-47db-8a77-320af312ccad.pid.haproxy
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:42:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:40.679 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'env', 'PROCESS_TAG=haproxy-f5ccb81b-dba1-47db-8a77-320af312ccad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5ccb81b-dba1-47db-8a77-320af312ccad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.681 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.683 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275360.6829443, 9a93fbef-9a9c-4d32-b200-626428537bfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.684 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Started (Lifecycle Event)
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.724 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.729 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275360.683456, 9a93fbef-9a9c-4d32-b200-626428537bfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.729 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Paused (Lifecycle Event)
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.764 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.769 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:42:40 compute-0 nova_compute[243452]: 2026-02-28 10:42:40.799 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:42:40 compute-0 podman[374523]: 2026-02-28 10:42:40.990678559 +0000 UTC m=+0.060856334 container create 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:42:41 compute-0 systemd[1]: Started libpod-conmon-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope.
Feb 28 10:42:41 compute-0 podman[374523]: 2026-02-28 10:42:40.951706196 +0000 UTC m=+0.021884011 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:42:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621c3eb848faba7f94d263f84c1b37e270520b2c10993c5bb339bb0de8e7c9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:42:41 compute-0 podman[374523]: 2026-02-28 10:42:41.083771795 +0000 UTC m=+0.153949550 container init 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:42:41 compute-0 podman[374523]: 2026-02-28 10:42:41.087935743 +0000 UTC m=+0.158113478 container start 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:42:41 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : New worker (374545) forked
Feb 28 10:42:41 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : Loading success.
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003626374843771541 of space, bias 1.0, pg target 0.10879124531314624 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493924862349368 of space, bias 1.0, pg target 0.7481774587048103 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.286538771604307e-07 of space, bias 4.0, pg target 0.0008743846525925168 quantized to 16 (current 16)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:42:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.940 243456 DEBUG nova.compute.manager [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.941 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG oslo_concurrency.lockutils [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.942 243456 DEBUG nova.compute.manager [req-24cb2a2a-ec3a-45c7-bd96-d5ff3d36c642 req-5b32b42d-a763-4215-a857-e985f67c2127 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Processing event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.943 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.949 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275361.9487586, 9a93fbef-9a9c-4d32-b200-626428537bfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.949 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Resumed (Lifecycle Event)
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.951 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.954 243456 INFO nova.virt.libvirt.driver [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance spawned successfully.
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.954 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.959 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.960 243456 DEBUG nova.network.neutron [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.978 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:41 compute-0 nova_compute[243452]: 2026-02-28 10:42:41.981 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.174 243456 DEBUG oslo_concurrency.lockutils [req-c32ad750-f43e-44f3-843c-82cfbd7099cf req-2a28a37a-3fcb-4e82-88c6-04dd22d32b25 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.177 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.183 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.183 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.184 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.185 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.186 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.187 243456 DEBUG nova.virt.libvirt.driver [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:42:42 compute-0 ceph-mon[76304]: pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.353 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.418 243456 INFO nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 15.72 seconds to spawn the instance on the hypervisor.
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.419 243456 DEBUG nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.697 243456 INFO nova.compute.manager [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 16.95 seconds to build instance.
Feb 28 10:42:42 compute-0 nova_compute[243452]: 2026-02-28 10:42:42.777 243456 DEBUG oslo_concurrency.lockutils [None req-1653b2d8-d3b8-4edf-864e-8a029adb94aa be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 582 KiB/s wr, 36 op/s
Feb 28 10:42:43 compute-0 nova_compute[243452]: 2026-02-28 10:42:43.677 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.136 243456 DEBUG nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.138 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG oslo_concurrency.lockutils [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 DEBUG nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.140 243456 WARNING nova.compute.manager [req-761954a4-18b8-4172-954d-63087e9ddf38 req-1edd133a-06e7-412c-8004-b8ede19040eb 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received unexpected event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with vm_state active and task_state None.
Feb 28 10:42:44 compute-0 ceph-mon[76304]: pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 582 KiB/s wr, 36 op/s
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:44 compute-0 nova_compute[243452]: 2026-02-28 10:42:44.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 51 op/s
Feb 28 10:42:45 compute-0 nova_compute[243452]: 2026-02-28 10:42:45.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:45 compute-0 nova_compute[243452]: 2026-02-28 10:42:45.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:42:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:42:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:42:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:42:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:42:46 compute-0 ceph-mon[76304]: pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 51 op/s
Feb 28 10:42:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:42:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1961663572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:42:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:47 compute-0 nova_compute[243452]: 2026-02-28 10:42:47.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:48 compute-0 ceph-mon[76304]: pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:48 compute-0 ovn_controller[146846]: 2026-02-28T10:42:48Z|01543|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 10:42:48 compute-0 NetworkManager[49805]: <info>  [1772275368.4953] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Feb 28 10:42:48 compute-0 NetworkManager[49805]: <info>  [1772275368.4970] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.502 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:48 compute-0 ovn_controller[146846]: 2026-02-28T10:42:48Z|01544|binding|INFO|Releasing lport 9495031a-3350-4b5d-b9e3-f7a6b929d37e from this chassis (sb_readonly=0)
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.510 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.770 243456 DEBUG nova.compute.manager [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.770 243456 DEBUG nova.compute.manager [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.771 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.771 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:48 compute-0 nova_compute[243452]: 2026-02-28 10:42:48.772 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:42:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:49 compute-0 nova_compute[243452]: 2026-02-28 10:42:49.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:50 compute-0 ceph-mon[76304]: pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:50 compute-0 nova_compute[243452]: 2026-02-28 10:42:50.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:50 compute-0 nova_compute[243452]: 2026-02-28 10:42:50.647 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:42:50 compute-0 nova_compute[243452]: 2026-02-28 10:42:50.648 243456 DEBUG nova.network.neutron [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:50 compute-0 nova_compute[243452]: 2026-02-28 10:42:50.785 243456 DEBUG oslo_concurrency.lockutils [req-5b57f837-71d4-4ec6-8300-81f31bf3f9d4 req-1dca5a8c-4211-442a-80de-548802077b89 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:51 compute-0 podman[374556]: 2026-02-28 10:42:51.240581424 +0000 UTC m=+0.049298357 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:42:51 compute-0 podman[374555]: 2026-02-28 10:42:51.293041059 +0000 UTC m=+0.098317384 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.665 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.666 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.666 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:42:51 compute-0 nova_compute[243452]: 2026-02-28 10:42:51.667 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:42:52 compute-0 ceph-mon[76304]: pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:42:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 91 op/s
Feb 28 10:42:53 compute-0 ovn_controller[146846]: 2026-02-28T10:42:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:61:bf 10.100.0.14
Feb 28 10:42:53 compute-0 ovn_controller[146846]: 2026-02-28T10:42:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:61:bf 10.100.0.14
Feb 28 10:42:53 compute-0 nova_compute[243452]: 2026-02-28 10:42:53.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:54 compute-0 ceph-mon[76304]: pgmap v2356: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 91 op/s
Feb 28 10:42:54 compute-0 nova_compute[243452]: 2026-02-28 10:42:54.331 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:42:54 compute-0 nova_compute[243452]: 2026-02-28 10:42:54.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:42:54 compute-0 nova_compute[243452]: 2026-02-28 10:42:54.350 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:42:54 compute-0 nova_compute[243452]: 2026-02-28 10:42:54.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.340 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:56 compute-0 ceph-mon[76304]: pgmap v2357: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 28 10:42:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:42:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080692160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:56 compute-0 nova_compute[243452]: 2026-02-28 10:42:56.946 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.012 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.012 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:42:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.174 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.175 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3386MB free_disk=59.9426728002727GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.175 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.176 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.244 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 9a93fbef-9a9c-4d32-b200-626428537bfa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.245 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.245 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.291 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:42:57 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2080692160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:42:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156772642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.811 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.817 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.831 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.854 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:42:57 compute-0 nova_compute[243452]: 2026-02-28 10:42:57.855 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.882 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:42:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.883 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:42:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:42:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:42:58 compute-0 ceph-mon[76304]: pgmap v2358: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Feb 28 10:42:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/156772642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:42:58 compute-0 nova_compute[243452]: 2026-02-28 10:42:58.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:42:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:42:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:42:59 compute-0 nova_compute[243452]: 2026-02-28 10:42:59.794 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:00 compute-0 ceph-mon[76304]: pgmap v2359: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:43:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:43:02 compute-0 ceph-mon[76304]: pgmap v2360: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:43:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:43:03 compute-0 nova_compute[243452]: 2026-02-28 10:43:03.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:04 compute-0 ceph-mon[76304]: pgmap v2361: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:43:04 compute-0 nova_compute[243452]: 2026-02-28 10:43:04.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 855 KiB/s wr, 44 op/s
Feb 28 10:43:06 compute-0 ceph-mon[76304]: pgmap v2362: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 855 KiB/s wr, 44 op/s
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.789 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.789 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.808 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.904 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.905 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:43:06 compute-0 nova_compute[243452]: 2026-02-28 10:43:06.915 243456 INFO nova.compute.claims [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.051 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 74 KiB/s wr, 19 op/s
Feb 28 10:43:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:43:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416797750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.623 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.630 243456 DEBUG nova.compute.provider_tree [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:43:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3416797750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.648 243456 DEBUG nova.scheduler.client.report [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.673 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.675 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.719 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.719 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.736 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.752 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.832 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.833 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.834 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating image(s)
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.864 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.898 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.929 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:07 compute-0 nova_compute[243452]: 2026-02-28 10:43:07.933 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.003 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.005 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.006 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.038 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.043 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4717a174-511e-4100-af09-e351eb2784a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.216 243456 DEBUG nova.policy [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.348 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4717a174-511e-4100-af09-e351eb2784a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.447 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.556 243456 DEBUG nova.objects.instance [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.574 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.575 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Ensure instance console log exists: /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.576 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.577 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.577 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:08 compute-0 ceph-mon[76304]: pgmap v2363: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 74 KiB/s wr, 19 op/s
Feb 28 10:43:08 compute-0 nova_compute[243452]: 2026-02-28 10:43:08.723 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 73 KiB/s wr, 11 op/s
Feb 28 10:43:09 compute-0 nova_compute[243452]: 2026-02-28 10:43:09.246 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Successfully created port: 95742053-49d5-4e84-9dde-4a6563ebf953 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:43:09 compute-0 nova_compute[243452]: 2026-02-28 10:43:09.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.279 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Successfully updated port: 95742053-49d5-4e84-9dde-4a6563ebf953 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.316 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.317 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.318 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.365 243456 DEBUG nova.compute.manager [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.366 243456 DEBUG nova.compute.manager [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.366 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:43:10 compute-0 nova_compute[243452]: 2026-02-28 10:43:10.462 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:43:10 compute-0 ceph-mon[76304]: pgmap v2364: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 73 KiB/s wr, 11 op/s
Feb 28 10:43:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:43:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 50K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1386 writes, 6266 keys, 1386 commit groups, 1.0 writes per commit group, ingest: 8.95 MB, 0.01 MB/s
                                           Interval WAL: 1386 writes, 1386 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.0      0.79              0.15        34    0.023       0      0       0.0       0.0
                                             L6      1/0    7.98 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8    162.8    137.7      2.04              0.74        33    0.062    196K    17K       0.0       0.0
                                            Sum      1/0    7.98 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.8    117.4    120.3      2.82              0.90        67    0.042    196K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4    162.9    162.5      0.33              0.15        10    0.033     37K   2498       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    162.8    137.7      2.04              0.74        33    0.062    196K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.5      0.78              0.15        33    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.058, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 2.8 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 37.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000391 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2363,35.84 MB,11.7885%) FilterBlock(68,557.17 KB,0.178985%) IndexBlock(68,945.70 KB,0.303795%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:43:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.848 243456 DEBUG nova.network.neutron [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.875 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.876 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance network_info: |[{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.877 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.878 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.884 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start _get_guest_xml network_info=[{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.891 243456 WARNING nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.896 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.897 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.908 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.908 243456 DEBUG nova.virt.libvirt.host [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.909 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.910 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.911 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.912 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.912 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.913 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.913 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.914 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.915 243456 DEBUG nova.virt.hardware [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:43:11 compute-0 nova_compute[243452]: 2026-02-28 10:43:11.920 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:43:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:43:12 compute-0 nova_compute[243452]: 2026-02-28 10:43:12.578 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:12 compute-0 nova_compute[243452]: 2026-02-28 10:43:12.604 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:12 compute-0 nova_compute[243452]: 2026-02-28 10:43:12.610 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:12 compute-0 ceph-mon[76304]: pgmap v2365: 305 pgs: 305 active+clean; 256 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Feb 28 10:43:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1634588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:43:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:43:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043135003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:43:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.199 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.201 243456 DEBUG nova.virt.libvirt.vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:43:07Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.202 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.203 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.204 243456 DEBUG nova.objects.instance [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.224 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <uuid>4717a174-511e-4100-af09-e351eb2784a3</uuid>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <name>instance-00000093</name>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-612443702</nova:name>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:43:11</nova:creationTime>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <nova:port uuid="95742053-49d5-4e84-9dde-4a6563ebf953">
Feb 28 10:43:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee3:bf44" ipVersion="6"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee3:bf44" ipVersion="6"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <system>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="serial">4717a174-511e-4100-af09-e351eb2784a3</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="uuid">4717a174-511e-4100-af09-e351eb2784a3</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </system>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <os>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </os>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <features>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </features>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4717a174-511e-4100-af09-e351eb2784a3_disk">
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4717a174-511e-4100-af09-e351eb2784a3_disk.config">
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </source>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:43:13 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e3:bf:44"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <target dev="tap95742053-49"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/console.log" append="off"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <video>
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </video>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:43:13 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:43:13 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:43:13 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:43:13 compute-0 nova_compute[243452]: </domain>
Feb 28 10:43:13 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.225 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Preparing to wait for external event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.226 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.227 243456 DEBUG nova.virt.libvirt.vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:43:07Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.227 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.228 243456 DEBUG nova.network.os_vif_util [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.228 243456 DEBUG os_vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.229 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.229 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.231 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.236 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95742053-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.237 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95742053-49, col_values=(('external_ids', {'iface-id': '95742053-49d5-4e84-9dde-4a6563ebf953', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:bf:44', 'vm-uuid': '4717a174-511e-4100-af09-e351eb2784a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:13 compute-0 NetworkManager[49805]: <info>  [1772275393.2402] manager: (tap95742053-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.243 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.249 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.251 243456 INFO os_vif [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49')
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.298 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:e3:bf:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.299 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Using config drive
Feb 28 10:43:13 compute-0 nova_compute[243452]: 2026-02-28 10:43:13.318 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4043135003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:43:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:14 compute-0 ceph-mon[76304]: pgmap v2366: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:43:14 compute-0 nova_compute[243452]: 2026-02-28 10:43:14.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.190 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Creating config drive at /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.195 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5zpbeyu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.244 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.246 243456 DEBUG nova.network.neutron [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.268 243456 DEBUG oslo_concurrency.lockutils [req-206c7f2a-9a10-491d-8833-1ac18d065030 req-1a8afca7-e5e8-40c0-ae55-cccce9fb1d69 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.349 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpy5zpbeyu" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.386 243456 DEBUG nova.storage.rbd_utils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 4717a174-511e-4100-af09-e351eb2784a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.391 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config 4717a174-511e-4100-af09-e351eb2784a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.565 243456 DEBUG oslo_concurrency.processutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config 4717a174-511e-4100-af09-e351eb2784a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.567 243456 INFO nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deleting local config drive /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3/disk.config because it was imported into RBD.
Feb 28 10:43:15 compute-0 kernel: tap95742053-49: entered promiscuous mode
Feb 28 10:43:15 compute-0 NetworkManager[49805]: <info>  [1772275395.6287] manager: (tap95742053-49): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Feb 28 10:43:15 compute-0 systemd-udevd[374962]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:43:15 compute-0 ovn_controller[146846]: 2026-02-28T10:43:15Z|01545|binding|INFO|Claiming lport 95742053-49d5-4e84-9dde-4a6563ebf953 for this chassis.
Feb 28 10:43:15 compute-0 ovn_controller[146846]: 2026-02-28T10:43:15Z|01546|binding|INFO|95742053-49d5-4e84-9dde-4a6563ebf953: Claiming fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.685 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:15 compute-0 ovn_controller[146846]: 2026-02-28T10:43:15Z|01547|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 ovn-installed in OVS
Feb 28 10:43:15 compute-0 ovn_controller[146846]: 2026-02-28T10:43:15Z|01548|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 up in Southbound
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.695 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.699 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], port_security=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fee3:bf44/64 2001:db8::f816:3eff:fee3:bf44/64', 'neutron:device_id': '4717a174-511e-4100-af09-e351eb2784a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=95742053-49d5-4e84-9dde-4a6563ebf953) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:43:15 compute-0 NetworkManager[49805]: <info>  [1772275395.7020] device (tap95742053-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.700 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 95742053-49d5-4e84-9dde-4a6563ebf953 in datapath f5ccb81b-dba1-47db-8a77-320af312ccad bound to our chassis
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.701 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 10:43:15 compute-0 NetworkManager[49805]: <info>  [1772275395.7041] device (tap95742053-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.722 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e966d291-7ad6-4539-8980-f5f5d49bc33e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 systemd-machined[209480]: New machine qemu-180-instance-00000093.
Feb 28 10:43:15 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000093.
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.746 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9429d6ca-5227-402c-a745-8b7f0343452f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.749 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6d2cea-1e21-4e2a-87cc-d56139cf4754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.769 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[78db983f-a43d-4de7-89fa-c8ea1370eeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[86a5228f-624c-43a5-8dd9-8b2a3240affc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374977, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.808 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[91d2f732-40d7-4950-a89c-9e39bf633536]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686793, 'tstamp': 686793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374979, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686795, 'tstamp': 686795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374979, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.810 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:15 compute-0 nova_compute[243452]: 2026-02-28 10:43:15.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.815 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:15 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:15.816 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.061 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.0609992, 4717a174-511e-4100-af09-e351eb2784a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.062 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Started (Lifecycle Event)
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.093 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.097 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.0620372, 4717a174-511e-4100-af09-e351eb2784a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.098 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Paused (Lifecycle Event)
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.119 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.125 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.151 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.289 243456 DEBUG nova.compute.manager [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.289 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG oslo_concurrency.lockutils [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.290 243456 DEBUG nova.compute.manager [req-da62b507-4a13-4628-8c9f-e69fd584bac6 req-0ebe69b6-28ee-4a17-97c7-760af96ebf87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Processing event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.291 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.294 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275396.294668, 4717a174-511e-4100-af09-e351eb2784a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.295 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Resumed (Lifecycle Event)
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.296 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.300 243456 INFO nova.virt.libvirt.driver [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance spawned successfully.
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.300 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.317 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.326 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.326 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.327 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.328 243456 DEBUG nova.virt.libvirt.driver [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.332 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.381 243456 INFO nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 8.55 seconds to spawn the instance on the hypervisor.
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.382 243456 DEBUG nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.416 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:43:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.417 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:43:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:16.418 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.434 243456 INFO nova.compute.manager [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 9.57 seconds to build instance.
Feb 28 10:43:16 compute-0 nova_compute[243452]: 2026-02-28 10:43:16.449 243456 DEBUG oslo_concurrency.lockutils [None req-a3162834-3a12-4185-8ee9-c27c0bf1e81c be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:16 compute-0 ceph-mon[76304]: pgmap v2367: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:43:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.239 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.384 243456 DEBUG nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG oslo_concurrency.lockutils [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.385 243456 DEBUG nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:43:18 compute-0 nova_compute[243452]: 2026-02-28 10:43:18.386 243456 WARNING nova.compute.manager [req-60d41a9a-92f0-449d-8ae2-943972ed157d req-506873bd-2f5d-48ee-8eec-b7951ad74025 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received unexpected event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with vm_state active and task_state None.
Feb 28 10:43:18 compute-0 ceph-mon[76304]: pgmap v2368: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:19 compute-0 sudo[375023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:43:19 compute-0 sudo[375023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:19 compute-0 sudo[375023]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:19 compute-0 sudo[375048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:43:19 compute-0 sudo[375048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 897 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:43:19 compute-0 sudo[375048]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:43:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:43:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:43:19 compute-0 nova_compute[243452]: 2026-02-28 10:43:19.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:19 compute-0 sudo[375104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:43:19 compute-0 sudo[375104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:19 compute-0 sudo[375104]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:19 compute-0 sudo[375129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:43:19 compute-0 sudo[375129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.235597279 +0000 UTC m=+0.065997610 container create ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:43:20 compute-0 systemd[1]: Started libpod-conmon-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope.
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.205857387 +0000 UTC m=+0.036257738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.335044415 +0000 UTC m=+0.165444786 container init ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.342952739 +0000 UTC m=+0.173353050 container start ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.347431226 +0000 UTC m=+0.177831567 container attach ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:43:20 compute-0 systemd[1]: libpod-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope: Deactivated successfully.
Feb 28 10:43:20 compute-0 romantic_elbakyan[375185]: 167 167
Feb 28 10:43:20 compute-0 conmon[375185]: conmon ce26e6580e8595aa62ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope/container/memory.events
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.35147148 +0000 UTC m=+0.181871821 container died ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:43:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a56a783a4d6015f26a451843b8d04fc3623b3b11bbbd83165df17784963e945f-merged.mount: Deactivated successfully.
Feb 28 10:43:20 compute-0 nova_compute[243452]: 2026-02-28 10:43:20.396 243456 DEBUG nova.compute.manager [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:20 compute-0 nova_compute[243452]: 2026-02-28 10:43:20.397 243456 DEBUG nova.compute.manager [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:43:20 compute-0 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:43:20 compute-0 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:43:20 compute-0 nova_compute[243452]: 2026-02-28 10:43:20.398 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:43:20 compute-0 podman[375168]: 2026-02-28 10:43:20.419370743 +0000 UTC m=+0.249771084 container remove ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:43:20 compute-0 systemd[1]: libpod-conmon-ce26e6580e8595aa62eecaf9e34c15accace5df22258164d839c5f269497fdde.scope: Deactivated successfully.
Feb 28 10:43:20 compute-0 podman[375209]: 2026-02-28 10:43:20.594614395 +0000 UTC m=+0.049469892 container create 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:20 compute-0 systemd[1]: Started libpod-conmon-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope.
Feb 28 10:43:20 compute-0 podman[375209]: 2026-02-28 10:43:20.579111326 +0000 UTC m=+0.033966843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:20 compute-0 podman[375209]: 2026-02-28 10:43:20.745685993 +0000 UTC m=+0.200541600 container init 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:43:20 compute-0 podman[375209]: 2026-02-28 10:43:20.758813285 +0000 UTC m=+0.213668822 container start 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:20 compute-0 podman[375209]: 2026-02-28 10:43:20.765194675 +0000 UTC m=+0.220050212 container attach 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:43:20 compute-0 ceph-mon[76304]: pgmap v2369: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 897 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:43:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:43:21 compute-0 sweet_cartwright[375225]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:43:21 compute-0 sweet_cartwright[375225]: --> All data devices are unavailable
Feb 28 10:43:21 compute-0 systemd[1]: libpod-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope: Deactivated successfully.
Feb 28 10:43:21 compute-0 conmon[375225]: conmon 922d1999805dca416dd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope/container/memory.events
Feb 28 10:43:21 compute-0 podman[375209]: 2026-02-28 10:43:21.298644481 +0000 UTC m=+0.753499998 container died 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0bcf9ad6da9b131d4c4edaea0b77f304aa58d053789e86ee711d2dae386da26-merged.mount: Deactivated successfully.
Feb 28 10:43:21 compute-0 podman[375209]: 2026-02-28 10:43:21.347839914 +0000 UTC m=+0.802695411 container remove 922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:43:21 compute-0 systemd[1]: libpod-conmon-922d1999805dca416dd845c01a8a4623937d55c56376f3421a4820198c8f9b5f.scope: Deactivated successfully.
Feb 28 10:43:21 compute-0 sudo[375129]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:21 compute-0 podman[375245]: 2026-02-28 10:43:21.421743026 +0000 UTC m=+0.096386910 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:43:21 compute-0 podman[375249]: 2026-02-28 10:43:21.429747513 +0000 UTC m=+0.099772536 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 28 10:43:21 compute-0 sudo[375297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:43:21 compute-0 sudo[375297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:21 compute-0 sudo[375297]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:21 compute-0 sudo[375326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:43:21 compute-0 sudo[375326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.796672183 +0000 UTC m=+0.048287778 container create 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:43:21 compute-0 systemd[1]: Started libpod-conmon-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope.
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.771214272 +0000 UTC m=+0.022829877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.908535879 +0000 UTC m=+0.160151484 container init 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.91951779 +0000 UTC m=+0.171133395 container start 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.923927175 +0000 UTC m=+0.175542780 container attach 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:21 compute-0 nostalgic_wright[375380]: 167 167
Feb 28 10:43:21 compute-0 systemd[1]: libpod-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope: Deactivated successfully.
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.927855356 +0000 UTC m=+0.179470931 container died 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdaca3247c9298b85a94a04883d1e549e041aa89efcca6cf4ccc2bf3f42f1dc6-merged.mount: Deactivated successfully.
Feb 28 10:43:21 compute-0 podman[375364]: 2026-02-28 10:43:21.972916752 +0000 UTC m=+0.224532357 container remove 26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:43:21 compute-0 systemd[1]: libpod-conmon-26c6de63f255f86670d81884dae48c9fa0f9ee429b0676e5c61ab1b20d0d3083.scope: Deactivated successfully.
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.158917699 +0000 UTC m=+0.044464430 container create 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:43:22 compute-0 nova_compute[243452]: 2026-02-28 10:43:22.173 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:43:22 compute-0 nova_compute[243452]: 2026-02-28 10:43:22.175 243456 DEBUG nova.network.neutron [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:22 compute-0 nova_compute[243452]: 2026-02-28 10:43:22.193 243456 DEBUG oslo_concurrency.lockutils [req-b5de043a-c072-4b21-ad89-42496ab43728 req-92a74962-332d-47fc-9929-5acea9eb50d7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:43:22 compute-0 systemd[1]: Started libpod-conmon-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope.
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.13986794 +0000 UTC m=+0.025414691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.272740862 +0000 UTC m=+0.158287613 container init 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.279279897 +0000 UTC m=+0.164826628 container start 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.282699574 +0000 UTC m=+0.168246325 container attach 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 28 10:43:22 compute-0 goofy_pascal[375422]: {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     "0": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "devices": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "/dev/loop3"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             ],
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_name": "ceph_lv0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_size": "21470642176",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "name": "ceph_lv0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "tags": {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_name": "ceph",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.crush_device_class": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.encrypted": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.objectstore": "bluestore",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_id": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.vdo": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.with_tpm": "0"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             },
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "vg_name": "ceph_vg0"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         }
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     ],
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     "1": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "devices": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "/dev/loop4"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             ],
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_name": "ceph_lv1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_size": "21470642176",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "name": "ceph_lv1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "tags": {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_name": "ceph",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.crush_device_class": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.encrypted": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.objectstore": "bluestore",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_id": "1",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.vdo": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.with_tpm": "0"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             },
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "vg_name": "ceph_vg1"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         }
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     ],
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     "2": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "devices": [
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "/dev/loop5"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             ],
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_name": "ceph_lv2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_size": "21470642176",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "name": "ceph_lv2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "tags": {
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.cluster_name": "ceph",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.crush_device_class": "",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.encrypted": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.objectstore": "bluestore",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osd_id": "2",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.vdo": "0",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:                 "ceph.with_tpm": "0"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             },
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "type": "block",
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:             "vg_name": "ceph_vg2"
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:         }
Feb 28 10:43:22 compute-0 goofy_pascal[375422]:     ]
Feb 28 10:43:22 compute-0 goofy_pascal[375422]: }
Feb 28 10:43:22 compute-0 systemd[1]: libpod-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope: Deactivated successfully.
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.606156743 +0000 UTC m=+0.491703514 container died 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b64546b29ca0c492120ac26d01b6487e58d9b774393f91b9830312141905dc5-merged.mount: Deactivated successfully.
Feb 28 10:43:22 compute-0 podman[375405]: 2026-02-28 10:43:22.652989769 +0000 UTC m=+0.538536530 container remove 43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:43:22 compute-0 systemd[1]: libpod-conmon-43a7feb0b6f89541504354aaf3032f3428469719649ff5e0251e877c4682388d.scope: Deactivated successfully.
Feb 28 10:43:22 compute-0 sudo[375326]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:22 compute-0 sudo[375443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:43:22 compute-0 sudo[375443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:22 compute-0 sudo[375443]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:22 compute-0 ceph-mon[76304]: pgmap v2370: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:43:22 compute-0 sudo[375468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:43:22 compute-0 sudo[375468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 815 KiB/s wr, 87 op/s
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.220218901 +0000 UTC m=+0.044416679 container create b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:43:23 compute-0 nova_compute[243452]: 2026-02-28 10:43:23.241 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:23 compute-0 systemd[1]: Started libpod-conmon-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope.
Feb 28 10:43:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.203556899 +0000 UTC m=+0.027754687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.314721317 +0000 UTC m=+0.138919175 container init b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.322505257 +0000 UTC m=+0.146703065 container start b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.325897853 +0000 UTC m=+0.150095661 container attach b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:43:23 compute-0 cranky_sanderson[375523]: 167 167
Feb 28 10:43:23 compute-0 systemd[1]: libpod-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope: Deactivated successfully.
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.330424472 +0000 UTC m=+0.154622250 container died b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d21a063bea2ae7063dd93a13e4cc4955f6b7f97de73eeb1b5c822779e7d197-merged.mount: Deactivated successfully.
Feb 28 10:43:23 compute-0 podman[375506]: 2026-02-28 10:43:23.373523252 +0000 UTC m=+0.197721020 container remove b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_sanderson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:43:23 compute-0 systemd[1]: libpod-conmon-b01e44c3baf2d898becb0272958494f3306849f176711ae04d9883f0ba6fb0d5.scope: Deactivated successfully.
Feb 28 10:43:23 compute-0 podman[375550]: 2026-02-28 10:43:23.552455949 +0000 UTC m=+0.062308106 container create 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:43:23 compute-0 systemd[1]: Started libpod-conmon-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope.
Feb 28 10:43:23 compute-0 podman[375550]: 2026-02-28 10:43:23.526743741 +0000 UTC m=+0.036595958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:43:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:43:23 compute-0 podman[375550]: 2026-02-28 10:43:23.657544534 +0000 UTC m=+0.167396741 container init 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:23 compute-0 podman[375550]: 2026-02-28 10:43:23.667104325 +0000 UTC m=+0.176956462 container start 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:43:23 compute-0 podman[375550]: 2026-02-28 10:43:23.671418407 +0000 UTC m=+0.181270584 container attach 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:43:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:24 compute-0 lvm[375648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:43:24 compute-0 lvm[375645]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:43:24 compute-0 lvm[375648]: VG ceph_vg2 finished
Feb 28 10:43:24 compute-0 lvm[375645]: VG ceph_vg1 finished
Feb 28 10:43:24 compute-0 lvm[375646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:43:24 compute-0 lvm[375646]: VG ceph_vg0 finished
Feb 28 10:43:24 compute-0 agitated_snyder[375566]: {}
Feb 28 10:43:24 compute-0 systemd[1]: libpod-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Deactivated successfully.
Feb 28 10:43:24 compute-0 systemd[1]: libpod-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Consumed 1.375s CPU time.
Feb 28 10:43:24 compute-0 podman[375651]: 2026-02-28 10:43:24.581715194 +0000 UTC m=+0.043480803 container died 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cec211b4a99323ebc3eb828e3aa8003cb94a8be11ce45ad0ddb8b7752de4448-merged.mount: Deactivated successfully.
Feb 28 10:43:24 compute-0 podman[375651]: 2026-02-28 10:43:24.631424871 +0000 UTC m=+0.093190420 container remove 947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:43:24 compute-0 systemd[1]: libpod-conmon-947225870f5d25f5b891ac576a3854b493e89f934f91965578926a7a2c47d565.scope: Deactivated successfully.
Feb 28 10:43:24 compute-0 sudo[375468]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:43:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:43:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:24 compute-0 nova_compute[243452]: 2026-02-28 10:43:24.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:24 compute-0 sudo[375666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:43:24 compute-0 sudo[375666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:43:24 compute-0 ceph-mon[76304]: pgmap v2371: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 815 KiB/s wr, 87 op/s
Feb 28 10:43:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:43:24 compute-0 sudo[375666]: pam_unix(sudo:session): session closed for user root
Feb 28 10:43:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 10:43:26 compute-0 ceph-mon[76304]: pgmap v2372: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 10:43:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 10:43:27 compute-0 ceph-mon[76304]: pgmap v2373: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 28 10:43:28 compute-0 nova_compute[243452]: 2026-02-28 10:43:28.246 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:43:29
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr', 'images', '.rgw.root']
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:43:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Feb 28 10:43:29 compute-0 nova_compute[243452]: 2026-02-28 10:43:29.807 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:30 compute-0 ovn_controller[146846]: 2026-02-28T10:43:30Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:bf:44 10.100.0.4
Feb 28 10:43:30 compute-0 ovn_controller[146846]: 2026-02-28T10:43:30Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:bf:44 10.100.0.4
Feb 28 10:43:30 compute-0 ceph-mon[76304]: pgmap v2374: 305 pgs: 305 active+clean; 288 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:43:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:43:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 10:43:32 compute-0 ceph-mon[76304]: pgmap v2375: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Feb 28 10:43:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:33 compute-0 nova_compute[243452]: 2026-02-28 10:43:33.248 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:34 compute-0 ceph-mon[76304]: pgmap v2376: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:34 compute-0 nova_compute[243452]: 2026-02-28 10:43:34.808 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:36 compute-0 ceph-mon[76304]: pgmap v2377: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.250 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 ceph-mon[76304]: pgmap v2378: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.586 243456 DEBUG nova.compute.manager [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.587 243456 DEBUG nova.compute.manager [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing instance network info cache due to event network-changed-95742053-49d5-4e84-9dde-4a6563ebf953. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.588 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Refreshing network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.684 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.684 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.685 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.685 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.686 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.688 243456 INFO nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Terminating instance
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.689 243456 DEBUG nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:43:38 compute-0 kernel: tap95742053-49 (unregistering): left promiscuous mode
Feb 28 10:43:38 compute-0 NetworkManager[49805]: <info>  [1772275418.7458] device (tap95742053-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:43:38 compute-0 ovn_controller[146846]: 2026-02-28T10:43:38Z|01549|binding|INFO|Releasing lport 95742053-49d5-4e84-9dde-4a6563ebf953 from this chassis (sb_readonly=0)
Feb 28 10:43:38 compute-0 ovn_controller[146846]: 2026-02-28T10:43:38Z|01550|binding|INFO|Setting lport 95742053-49d5-4e84-9dde-4a6563ebf953 down in Southbound
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.757 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 ovn_controller[146846]: 2026-02-28T10:43:38Z|01551|binding|INFO|Removing iface tap95742053-49 ovn-installed in OVS
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.764 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.766 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], port_security=['fa:16:3e:e3:bf:44 10.100.0.4 2001:db8:0:1:f816:3eff:fee3:bf44 2001:db8::f816:3eff:fee3:bf44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fee3:bf44/64 2001:db8::f816:3eff:fee3:bf44/64', 'neutron:device_id': '4717a174-511e-4100-af09-e351eb2784a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=95742053-49d5-4e84-9dde-4a6563ebf953) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.767 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 95742053-49d5-4e84-9dde-4a6563ebf953 in datapath f5ccb81b-dba1-47db-8a77-320af312ccad unbound from our chassis
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.768 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5ccb81b-dba1-47db-8a77-320af312ccad
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.790 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[70b1e71e-820a-4680-b8ae-c576c0d137cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Deactivated successfully.
Feb 28 10:43:38 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000093.scope: Consumed 13.556s CPU time.
Feb 28 10:43:38 compute-0 systemd-machined[209480]: Machine qemu-180-instance-00000093 terminated.
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.822 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8b077991-0aa3-4a41-a5f5-ec8ce412925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.826 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29b6a8-6c2b-44e9-8899-9a92e51c4fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.857 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[94e8fb4f-c39e-4733-b021-dc08c96105ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[21c199c1-303c-47a3-a2fe-b4eb2d2620ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5ccb81b-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:79:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686783, 'reachable_time': 38506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375703, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.884 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[392293e2-2215-4f5a-862a-66e5be1becd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686793, 'tstamp': 686793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375704, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5ccb81b-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686795, 'tstamp': 686795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375704, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.886 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.888 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.893 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5ccb81b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5ccb81b-d0, col_values=(('external_ids', {'iface-id': '9495031a-3350-4b5d-b9e3-f7a6b929d37e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:38.894 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.926 243456 INFO nova.virt.libvirt.driver [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance destroyed successfully.
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.927 243456 DEBUG nova.objects.instance [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 4717a174-511e-4100-af09-e351eb2784a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.941 243456 DEBUG nova.virt.libvirt.vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-612443702',display_name='tempest-TestGettingAddress-server-612443702',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-612443702',id=147,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:43:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7oml1apf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:43:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=4717a174-511e-4100-af09-e351eb2784a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.942 243456 DEBUG nova.network.os_vif_util [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.943 243456 DEBUG nova.network.os_vif_util [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.944 243456 DEBUG os_vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.946 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95742053-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.947 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:38 compute-0 nova_compute[243452]: 2026-02-28 10:43:38.952 243456 INFO os_vif [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:bf:44,bridge_name='br-int',has_traffic_filtering=True,id=95742053-49d5-4e84-9dde-4a6563ebf953,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95742053-49')
Feb 28 10:43:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.244 243456 INFO nova.virt.libvirt.driver [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deleting instance files /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3_del
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.245 243456 INFO nova.virt.libvirt.driver [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deletion of /var/lib/nova/instances/4717a174-511e-4100-af09-e351eb2784a3_del complete
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.259 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.259 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.260 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.260 243456 DEBUG oslo_concurrency.lockutils [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.261 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.261 243456 DEBUG nova.compute.manager [req-527793a3-a2c3-493d-b549-8e6e3a3754de req-d922fb25-845e-4cb0-8e24-133be8102f0b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-unplugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.327 243456 INFO nova.compute.manager [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.328 243456 DEBUG oslo.service.loopingcall [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.328 243456 DEBUG nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.329 243456 DEBUG nova.network.neutron [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:43:39 compute-0 nova_compute[243452]: 2026-02-28 10:43:39.810 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.188 243456 DEBUG nova.network.neutron [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.204 243456 INFO nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Took 0.88 seconds to deallocate network for instance.
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.244 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.245 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.313 243456 DEBUG oslo_concurrency.processutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:40 compute-0 ceph-mon[76304]: pgmap v2379: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.647 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updated VIF entry in instance network info cache for port 95742053-49d5-4e84-9dde-4a6563ebf953. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.648 243456 DEBUG nova.network.neutron [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Updating instance_info_cache with network_info: [{"id": "95742053-49d5-4e84-9dde-4a6563ebf953", "address": "fa:16:3e:e3:bf:44", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:bf44", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95742053-49", "ovs_interfaceid": "95742053-49d5-4e84-9dde-4a6563ebf953", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.751 243456 DEBUG oslo_concurrency.lockutils [req-ea6004bf-4bbc-4686-bd64-90ed0ad44ce0 req-e4dfb0ee-b7b6-4792-8077-aa113a0c96e8 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-4717a174-511e-4100-af09-e351eb2784a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:43:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:43:40 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691343177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.874 243456 DEBUG oslo_concurrency.processutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.881 243456 DEBUG nova.compute.provider_tree [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.897 243456 DEBUG nova.scheduler.client.report [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.919 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:40 compute-0 nova_compute[243452]: 2026-02-28 10:43:40.972 243456 INFO nova.scheduler.client.report [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 4717a174-511e-4100-af09-e351eb2784a3
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 269 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.0 MiB/s wr, 77 op/s
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.209 243456 DEBUG oslo_concurrency.lockutils [None req-b9b93d22-e840-4451-8f5f-0ebb355a4cbc be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.331 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.331 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "4717a174-511e-4100-af09-e351eb2784a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG oslo_concurrency.lockutils [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "4717a174-511e-4100-af09-e351eb2784a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] No waiting events found dispatching network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 WARNING nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received unexpected event network-vif-plugged-95742053-49d5-4e84-9dde-4a6563ebf953 for instance with vm_state deleted and task_state None.
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.332 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Received event network-vif-deleted-95742053-49d5-4e84-9dde-4a6563ebf953 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.333 243456 INFO nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Neutron deleted interface 95742053-49d5-4e84-9dde-4a6563ebf953; detaching it from the instance and deleting it from the info cache
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.333 243456 DEBUG nova.network.neutron [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:43:41 compute-0 nova_compute[243452]: 2026-02-28 10:43:41.334 243456 DEBUG nova.compute.manager [req-2e43daff-1451-47e2-b56b-db9728966e95 req-e3455e2b-75f7-4a72-99e6-cbaef64429f0 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Detach interface failed, port_id=95742053-49d5-4e84-9dde-4a6563ebf953, reason: Instance 4717a174-511e-4100-af09-e351eb2784a3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:43:41 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1691343177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001152170056080208 of space, bias 1.0, pg target 0.3456510168240624 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939238687445584 of space, bias 1.0, pg target 0.7481771606233675 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.285141514840672e-07 of space, bias 4.0, pg target 0.0008742169817808807 quantized to 16 (current 16)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:43:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:43:42 compute-0 ceph-mon[76304]: pgmap v2380: 305 pgs: 305 active+clean; 269 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.0 MiB/s wr, 77 op/s
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.613 243456 DEBUG nova.compute.manager [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.613 243456 DEBUG nova.compute.manager [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing instance network info cache due to event network-changed-5b4f91cf-9f52-4422-873f-f11cf0d49dde. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.614 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Refreshing network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.698 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.699 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.700 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.701 243456 INFO nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Terminating instance
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.703 243456 DEBUG nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:43:42 compute-0 kernel: tap5b4f91cf-9f (unregistering): left promiscuous mode
Feb 28 10:43:42 compute-0 NetworkManager[49805]: <info>  [1772275422.7539] device (tap5b4f91cf-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:43:42 compute-0 ovn_controller[146846]: 2026-02-28T10:43:42Z|01552|binding|INFO|Releasing lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde from this chassis (sb_readonly=0)
Feb 28 10:43:42 compute-0 ovn_controller[146846]: 2026-02-28T10:43:42Z|01553|binding|INFO|Setting lport 5b4f91cf-9f52-4422-873f-f11cf0d49dde down in Southbound
Feb 28 10:43:42 compute-0 ovn_controller[146846]: 2026-02-28T10:43:42Z|01554|binding|INFO|Removing iface tap5b4f91cf-9f ovn-installed in OVS
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.770 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], port_security=['fa:16:3e:54:61:bf 10.100.0.14 2001:db8:0:1:f816:3eff:fe54:61bf 2001:db8::f816:3eff:fe54:61bf'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe54:61bf/64 2001:db8::f816:3eff:fe54:61bf/64', 'neutron:device_id': '9a93fbef-9a9c-4d32-b200-626428537bfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5ccb81b-dba1-47db-8a77-320af312ccad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00df5388-1a6c-45af-8672-636481c0abde', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39e94574-ff62-481a-b217-9667ee1ef596, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=5b4f91cf-9f52-4422-873f-f11cf0d49dde) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.771 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.772 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 5b4f91cf-9f52-4422-873f-f11cf0d49dde in datapath f5ccb81b-dba1-47db-8a77-320af312ccad unbound from our chassis
Feb 28 10:43:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.773 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5ccb81b-dba1-47db-8a77-320af312ccad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:43:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.773 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c68e100c-f14d-4f5a-a74a-9e0f788fc034]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:42 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:42.774 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad namespace which is not needed anymore
Feb 28 10:43:42 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Deactivated successfully.
Feb 28 10:43:42 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000092.scope: Consumed 13.861s CPU time.
Feb 28 10:43:42 compute-0 systemd-machined[209480]: Machine qemu-179-instance-00000092 terminated.
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : haproxy version is 2.8.14-c23fe91
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [NOTICE]   (374543) : path to executable is /usr/sbin/haproxy
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : Exiting Master process...
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : Exiting Master process...
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [ALERT]    (374543) : Current worker (374545) exited with code 143 (Terminated)
Feb 28 10:43:42 compute-0 neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad[374539]: [WARNING]  (374543) : All workers exited. Exiting... (0)
Feb 28 10:43:42 compute-0 systemd[1]: libpod-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope: Deactivated successfully.
Feb 28 10:43:42 compute-0 podman[375782]: 2026-02-28 10:43:42.91138616 +0000 UTC m=+0.047194097 container died 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e-userdata-shm.mount: Deactivated successfully.
Feb 28 10:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3621c3eb848faba7f94d263f84c1b37e270520b2c10993c5bb339bb0de8e7c9c-merged.mount: Deactivated successfully.
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.946 243456 INFO nova.virt.libvirt.driver [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Instance destroyed successfully.
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.947 243456 DEBUG nova.objects.instance [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 9a93fbef-9a9c-4d32-b200-626428537bfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:43:42 compute-0 podman[375782]: 2026-02-28 10:43:42.952043821 +0000 UTC m=+0.087851758 container cleanup 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:43:42 compute-0 systemd[1]: libpod-conmon-79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e.scope: Deactivated successfully.
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.963 243456 DEBUG nova.virt.libvirt.vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-344642459',display_name='tempest-TestGettingAddress-server-344642459',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-344642459',id=146,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDkbXRt6B1cCWBHrWtATNSLJHmA0iLq2AXd2wKDeL3s6gDq5DjSDGWA5dtPpegRqUnUl/4AOWSTUcLQlCwH/VQEOx1aCDIzKCGn/G2HO33UA7mhN+ANludWedDRQ8saCjg==',key_name='tempest-TestGettingAddress-1127208510',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:42:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-r4xtm6sn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:42:42Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=9a93fbef-9a9c-4d32-b200-626428537bfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.964 243456 DEBUG nova.network.os_vif_util [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.967 243456 DEBUG nova.network.os_vif_util [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.967 243456 DEBUG os_vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.970 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.970 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b4f91cf-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.975 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:43:42 compute-0 nova_compute[243452]: 2026-02-28 10:43:42.978 243456 INFO os_vif [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:61:bf,bridge_name='br-int',has_traffic_filtering=True,id=5b4f91cf-9f52-4422-873f-f11cf0d49dde,network=Network(f5ccb81b-dba1-47db-8a77-320af312ccad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b4f91cf-9f')
Feb 28 10:43:43 compute-0 podman[375820]: 2026-02-28 10:43:43.022303841 +0000 UTC m=+0.046076086 container remove 79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.028 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d687a156-8f21-4a53-bbe7-cde1a7f93935]: (4, ('Sat Feb 28 10:43:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad (79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e)\n79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e\nSat Feb 28 10:43:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad (79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e)\n79755ca3e7e773ad89430c07b7d2b867fa329f76e9ba0b249372e731a0a0978e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.031 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[766242ac-6e8b-479b-b6f7-d6e26d241ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.033 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5ccb81b-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:43 compute-0 kernel: tapf5ccb81b-d0: left promiscuous mode
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd376c1-6798-4887-87bb-d0b8571c0b64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.123 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7c3c94-3f1a-4f29-939d-8df72b04f412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.125 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[582ce830-3d9a-4d90-8dcb-e302c592be8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.137 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9cca81-f2ae-49d9-a96f-d502f7ea3054]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686775, 'reachable_time': 38808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375853, 'error': None, 'target': 'ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 systemd[1]: run-netns-ovnmeta\x2df5ccb81b\x2ddba1\x2d47db\x2d8a77\x2d320af312ccad.mount: Deactivated successfully.
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.141 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5ccb81b-dba1-47db-8a77-320af312ccad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:43:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:43.141 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[7e40dd15-03c0-46ad-849d-061e94cf2ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:43:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 61 KiB/s wr, 38 op/s
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.303 243456 INFO nova.virt.libvirt.driver [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deleting instance files /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa_del
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.305 243456 INFO nova.virt.libvirt.driver [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deletion of /var/lib/nova/instances/9a93fbef-9a9c-4d32-b200-626428537bfa_del complete
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.351 243456 INFO nova.compute.manager [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.352 243456 DEBUG oslo.service.loopingcall [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.353 243456 DEBUG nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.353 243456 DEBUG nova.network.neutron [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.597 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.598 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.598 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG oslo_concurrency.lockutils [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.599 243456 DEBUG nova.compute.manager [req-2abf4122-d46a-4123-b9be-d61cbb78b5b6 req-d1866f58-9f17-4a22-a4a0-25b13535ab6e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-unplugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:43:43 compute-0 nova_compute[243452]: 2026-02-28 10:43:43.855 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:44 compute-0 ceph-mon[76304]: pgmap v2381: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 61 KiB/s wr, 38 op/s
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.595 243456 DEBUG nova.network.neutron [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.614 243456 INFO nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Took 1.26 seconds to deallocate network for instance.
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.660 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.661 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.691 243456 DEBUG nova.compute.manager [req-93e2964f-0f4f-418b-a397-abcbfc090db6 req-53bfe3c6-385f-4b2b-8385-c0a8a79f8d13 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-deleted-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.713 243456 DEBUG oslo_concurrency.processutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:44 compute-0 nova_compute[243452]: 2026-02-28 10:43:44.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 43 op/s
Feb 28 10:43:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:43:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/274630273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.323 243456 DEBUG oslo_concurrency.processutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.358 243456 DEBUG nova.compute.provider_tree [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:43:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/274630273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.376 243456 DEBUG nova.scheduler.client.report [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.400 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.426 243456 INFO nova.scheduler.client.report [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 9a93fbef-9a9c-4d32-b200-626428537bfa
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.436 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updated VIF entry in instance network info cache for port 5b4f91cf-9f52-4422-873f-f11cf0d49dde. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.436 243456 DEBUG nova.network.neutron [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Updating instance_info_cache with network_info: [{"id": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "address": "fa:16:3e:54:61:bf", "network": {"id": "f5ccb81b-dba1-47db-8a77-320af312ccad", "bridge": "br-int", "label": "tempest-network-smoke--505374029", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe54:61bf", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b4f91cf-9f", "ovs_interfaceid": "5b4f91cf-9f52-4422-873f-f11cf0d49dde", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.459 243456 DEBUG oslo_concurrency.lockutils [req-e74a609f-5393-45fd-b62d-eee66837a290 req-6d8f55f6-9a25-4db4-824e-965b1d2a2883 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-9a93fbef-9a9c-4d32-b200-626428537bfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.502 243456 DEBUG oslo_concurrency.lockutils [None req-cfc27bf1-3d47-4758-9014-8135b9665cc8 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:43:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:43:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:43:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.687 243456 DEBUG nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.687 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.688 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 DEBUG oslo_concurrency.lockutils [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "9a93fbef-9a9c-4d32-b200-626428537bfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 DEBUG nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] No waiting events found dispatching network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:43:45 compute-0 nova_compute[243452]: 2026-02-28 10:43:45.689 243456 WARNING nova.compute.manager [req-a4af4da8-c241-4754-8bad-c8675110fdc9 req-0ddc2665-4fa0-443d-b9f7-2f21d6eb775e 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Received unexpected event network-vif-plugged-5b4f91cf-9f52-4422-873f-f11cf0d49dde for instance with vm_state deleted and task_state None.
Feb 28 10:43:46 compute-0 nova_compute[243452]: 2026-02-28 10:43:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:46 compute-0 nova_compute[243452]: 2026-02-28 10:43:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:46 compute-0 nova_compute[243452]: 2026-02-28 10:43:46.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:43:46 compute-0 ceph-mon[76304]: pgmap v2382: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 43 op/s
Feb 28 10:43:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:43:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1437969117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:43:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 16 KiB/s wr, 44 op/s
Feb 28 10:43:47 compute-0 nova_compute[243452]: 2026-02-28 10:43:47.973 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:48 compute-0 nova_compute[243452]: 2026-02-28 10:43:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:48 compute-0 nova_compute[243452]: 2026-02-28 10:43:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:48 compute-0 ceph-mon[76304]: pgmap v2383: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 16 KiB/s wr, 44 op/s
Feb 28 10:43:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 56 op/s
Feb 28 10:43:49 compute-0 nova_compute[243452]: 2026-02-28 10:43:49.814 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:50 compute-0 nova_compute[243452]: 2026-02-28 10:43:50.147 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:50 compute-0 nova_compute[243452]: 2026-02-28 10:43:50.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:50 compute-0 nova_compute[243452]: 2026-02-28 10:43:50.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:50 compute-0 ceph-mon[76304]: pgmap v2384: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 56 op/s
Feb 28 10:43:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:43:51 compute-0 nova_compute[243452]: 2026-02-28 10:43:51.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:43:52 compute-0 podman[375879]: 2026-02-28 10:43:52.155470296 +0000 UTC m=+0.081227831 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 28 10:43:52 compute-0 podman[375878]: 2026-02-28 10:43:52.214143197 +0000 UTC m=+0.146733186 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 10:43:52 compute-0 ceph-mon[76304]: pgmap v2385: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Feb 28 10:43:52 compute-0 nova_compute[243452]: 2026-02-28 10:43:52.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 33 op/s
Feb 28 10:43:53 compute-0 nova_compute[243452]: 2026-02-28 10:43:53.925 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275418.9233723, 4717a174-511e-4100-af09-e351eb2784a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:43:53 compute-0 nova_compute[243452]: 2026-02-28 10:43:53.925 243456 INFO nova.compute.manager [-] [instance: 4717a174-511e-4100-af09-e351eb2784a3] VM Stopped (Lifecycle Event)
Feb 28 10:43:53 compute-0 nova_compute[243452]: 2026-02-28 10:43:53.945 243456 DEBUG nova.compute.manager [None req-8eae81fd-3047-4389-a30c-ac964f09c1e3 - - - - - -] [instance: 4717a174-511e-4100-af09-e351eb2784a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:54 compute-0 ceph-mon[76304]: pgmap v2386: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 33 op/s
Feb 28 10:43:54 compute-0 nova_compute[243452]: 2026-02-28 10:43:54.816 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:43:56 compute-0 ceph-mon[76304]: pgmap v2387: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:43:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.883 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:43:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.941 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275422.939746, 9a93fbef-9a9c-4d32-b200-626428537bfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.942 243456 INFO nova.compute.manager [-] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] VM Stopped (Lifecycle Event)
Feb 28 10:43:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:43:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441247053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.965 243456 DEBUG nova.compute.manager [None req-99ae7032-add8-45ad-9a9e-9e491f16ce2d - - - - - -] [instance: 9a93fbef-9a9c-4d32-b200-626428537bfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:57 compute-0 nova_compute[243452]: 2026-02-28 10:43:57.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.173 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.98741508834064GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.174 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.294 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.295 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.315 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:43:58 compute-0 ceph-mon[76304]: pgmap v2388: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Feb 28 10:43:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3441247053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:43:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425035802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.887 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.896 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.914 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.940 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:43:58 compute-0 nova_compute[243452]: 2026-02-28 10:43:58.941 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:43:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:43:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Feb 28 10:43:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3425035802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:43:59 compute-0 nova_compute[243452]: 2026-02-28 10:43:59.818 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:00 compute-0 ceph-mon[76304]: pgmap v2389: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 12 op/s
Feb 28 10:44:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:02 compute-0 ceph-mon[76304]: pgmap v2390: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:02 compute-0 nova_compute[243452]: 2026-02-28 10:44:02.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:04 compute-0 ceph-mon[76304]: pgmap v2391: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:04 compute-0 nova_compute[243452]: 2026-02-28 10:44:04.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:06 compute-0 ceph-mon[76304]: pgmap v2392: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.345 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8::f816:3eff:fe9f:3cdf'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1a8a9fba-10a8-48af-ae28-f3be3422056a) old=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:44:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.347 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1a8a9fba-10a8-48af-ae28-f3be3422056a in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c updated
Feb 28 10:44:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.349 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:44:07 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:07.350 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8fae9190-9ad1-4e06-81c6-f77e7f9cff40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:07 compute-0 nova_compute[243452]: 2026-02-28 10:44:07.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:08 compute-0 ceph-mon[76304]: pgmap v2393: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:09 compute-0 nova_compute[243452]: 2026-02-28 10:44:09.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:10 compute-0 ceph-mon[76304]: pgmap v2394: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.742 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8:0:1:f816:3eff:fe9f:3cdf 2001:db8::f816:3eff:fe9f:3cdf'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe9f:3cdf/64 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1a8a9fba-10a8-48af-ae28-f3be3422056a) old=Port_Binding(mac=['fa:16:3e:9f:3c:df 10.100.0.2 2001:db8::f816:3eff:fe9f:3cdf'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe9f:3cdf/64', 'neutron:device_id': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:44:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.744 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1a8a9fba-10a8-48af-ae28-f3be3422056a in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c updated
Feb 28 10:44:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.746 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:44:10 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:10.747 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06e8597f-f079-4a32-b4f2-b55edeef9612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:12 compute-0 ceph-mon[76304]: pgmap v2395: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:12 compute-0 nova_compute[243452]: 2026-02-28 10:44:12.985 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:14 compute-0 ceph-mon[76304]: pgmap v2396: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:14 compute-0 nova_compute[243452]: 2026-02-28 10:44:14.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.071 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.072 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.094 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.187 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.188 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.199 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.200 243456 INFO nova.compute.claims [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:44:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.317 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:44:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954323466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.908 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.917 243456 DEBUG nova.compute.provider_tree [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.939 243456 DEBUG nova.scheduler.client.report [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.966 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:15 compute-0 nova_compute[243452]: 2026-02-28 10:44:15.967 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.028 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.029 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.054 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.093 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.230 243456 DEBUG nova.policy [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.273 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.275 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.275 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating image(s)
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.309 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.345 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.380 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.385 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.467 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.468 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.469 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.469 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.494 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.498 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:16 compute-0 ceph-mon[76304]: pgmap v2397: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2954323466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:44:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 173K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5287 writes, 21K keys, 5287 commit groups, 1.0 writes per commit group, ingest: 24.46 MB, 0.04 MB/s
                                           Interval WAL: 5286 writes, 2100 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.771 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.862 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:44:16 compute-0 nova_compute[243452]: 2026-02-28 10:44:16.982 243456 DEBUG nova.objects.instance [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.014 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.015 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Ensure instance console log exists: /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.016 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.016 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.017 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:17 compute-0 nova_compute[243452]: 2026-02-28 10:44:17.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:18 compute-0 nova_compute[243452]: 2026-02-28 10:44:18.184 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Successfully created port: e109a5e0-a347-4744-84e4-96a126379d1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:44:18 compute-0 ceph-mon[76304]: pgmap v2398: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.017 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Successfully updated port: e109a5e0-a347-4744-84e4-96a126379d1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:44:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.058 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.059 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.059 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.122 243456 DEBUG nova.compute.manager [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.122 243456 DEBUG nova.compute.manager [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.123 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.201 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:44:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 166 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 670 KiB/s wr, 12 op/s
Feb 28 10:44:19 compute-0 nova_compute[243452]: 2026-02-28 10:44:19.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:20 compute-0 sshd-session[376156]: Received disconnect from 103.217.144.161 port 50764:11: Bye Bye [preauth]
Feb 28 10:44:20 compute-0 sshd-session[376156]: Disconnected from authenticating user root 103.217.144.161 port 50764 [preauth]
Feb 28 10:44:20 compute-0 ceph-mon[76304]: pgmap v2399: 305 pgs: 305 active+clean; 166 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 670 KiB/s wr, 12 op/s
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.693 243456 DEBUG nova.network.neutron [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.740 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.742 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance network_info: |[{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.743 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.744 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.751 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start _get_guest_xml network_info=[{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.758 243456 WARNING nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.763 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.765 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.774 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.775 243456 DEBUG nova.virt.libvirt.host [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.776 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.776 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.777 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.778 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.779 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.779 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.780 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.781 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.782 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.782 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.783 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.784 243456 DEBUG nova.virt.hardware [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:44:20 compute-0 nova_compute[243452]: 2026-02-28 10:44:20.789 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:44:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:44:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592900751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.407 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.432 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.437 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/592900751' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:44:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997131405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.942 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.944 243456 DEBUG nova.virt.libvirt.vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.944 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.946 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.949 243456 DEBUG nova.objects.instance [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.976 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <uuid>c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</uuid>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <name>instance-00000094</name>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-82849746</nova:name>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:44:20</nova:creationTime>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <nova:port uuid="e109a5e0-a347-4744-84e4-96a126379d1c">
Feb 28 10:44:21 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7a:4488" ipVersion="6"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7a:4488" ipVersion="6"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <system>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="serial">c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="uuid">c45d561e-3581-4ca2-a8b7-b56cc6ce5b43</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </system>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <os>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </os>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <features>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </features>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk">
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config">
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </source>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:44:21 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:7a:44:88"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <target dev="tape109a5e0-a3"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/console.log" append="off"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <video>
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </video>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:44:21 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:44:21 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:44:21 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:44:21 compute-0 nova_compute[243452]: </domain>
Feb 28 10:44:21 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Preparing to wait for external event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.977 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.978 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.978 243456 DEBUG nova.virt.libvirt.vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:16Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.979 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.980 243456 DEBUG nova.network.os_vif_util [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.980 243456 DEBUG os_vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.982 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.983 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.988 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape109a5e0-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.989 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape109a5e0-a3, col_values=(('external_ids', {'iface-id': 'e109a5e0-a347-4744-84e4-96a126379d1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:44:88', 'vm-uuid': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:21 compute-0 NetworkManager[49805]: <info>  [1772275461.9928] manager: (tape109a5e0-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.996 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:44:21 compute-0 nova_compute[243452]: 2026-02-28 10:44:21.997 243456 INFO os_vif [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3')
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.052 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.053 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.053 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:7a:44:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.054 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Using config drive
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.085 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.398 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Creating config drive at /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.403 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y90bd6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.547 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y90bd6n" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:22 compute-0 ceph-mon[76304]: pgmap v2400: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:44:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/997131405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.584 243456 DEBUG nova.storage.rbd_utils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.589 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.712 243456 DEBUG oslo_concurrency.processutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.713 243456 INFO nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deleting local config drive /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43/disk.config because it was imported into RBD.
Feb 28 10:44:22 compute-0 kernel: tape109a5e0-a3: entered promiscuous mode
Feb 28 10:44:22 compute-0 NetworkManager[49805]: <info>  [1772275462.7684] manager: (tape109a5e0-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Feb 28 10:44:22 compute-0 ovn_controller[146846]: 2026-02-28T10:44:22Z|01555|binding|INFO|Claiming lport e109a5e0-a347-4744-84e4-96a126379d1c for this chassis.
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.769 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:22 compute-0 ovn_controller[146846]: 2026-02-28T10:44:22Z|01556|binding|INFO|e109a5e0-a347-4744-84e4-96a126379d1c: Claiming fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.776 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.790 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], port_security=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fe7a:4488/64 2001:db8::f816:3eff:fe7a:4488/64', 'neutron:device_id': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e109a5e0-a347-4744-84e4-96a126379d1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.792 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e109a5e0-a347-4744-84e4-96a126379d1c in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c bound to our chassis
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.793 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:22 compute-0 ovn_controller[146846]: 2026-02-28T10:44:22Z|01557|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c ovn-installed in OVS
Feb 28 10:44:22 compute-0 ovn_controller[146846]: 2026-02-28T10:44:22Z|01558|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c up in Southbound
Feb 28 10:44:22 compute-0 nova_compute[243452]: 2026-02-28 10:44:22.805 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.805 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[997c8e9a-95fe-48b6-a9fd-16e86dbcb713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.806 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb90f6c6c-a1 in ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.809 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb90f6c6c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.809 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce06796-15a2-4db6-b6ba-340c5ced81d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.810 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f90e4d-cd3f-41bf-8c5c-67874961c05c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.822 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[5277160f-f25f-4b4b-8d5f-df16200b3a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 systemd-machined[209480]: New machine qemu-181-instance-00000094.
Feb 28 10:44:22 compute-0 systemd-udevd[376313]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:44:22 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000094.
Feb 28 10:44:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:44:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.6 total, 600.0 interval
                                           Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s
                                           Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.835 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcf8fe1-0bcc-43bd-a31d-222b5ce088ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 NetworkManager[49805]: <info>  [1772275462.8376] device (tape109a5e0-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:44:22 compute-0 NetworkManager[49805]: <info>  [1772275462.8381] device (tape109a5e0-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.866 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[6442bac3-3e7d-4370-9ab5-375b523ce0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 NetworkManager[49805]: <info>  [1772275462.8747] manager: (tapb90f6c6c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/649)
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.873 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bcad5985-155b-4be4-b0cc-61a920b183d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 podman[376290]: 2026-02-28 10:44:22.874786829 +0000 UTC m=+0.077411933 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, 
tcib_managed=true, org.label-schema.license=GPLv2)
Feb 28 10:44:22 compute-0 podman[376291]: 2026-02-28 10:44:22.882284521 +0000 UTC m=+0.084742281 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.902 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdb9005-d893-4735-9b4f-082225af7914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.906 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17e187b8-a1cf-436c-9f8b-2d51bdecddc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 NetworkManager[49805]: <info>  [1772275462.9258] device (tapb90f6c6c-a0): carrier: link connected
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.929 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1fecc492-dd9e-447a-b8ff-0e06cd7e1ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.945 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[63641463-8878-45d3-89db-366dfa1a5bdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376371, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.958 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d5554bfa-7793-421a-bb26-d468c5c86519]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:3cdf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697021, 'tstamp': 697021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376372, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.972 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e4392112-2348-4177-8c12-9ad479279758]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376373, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:22.997 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2eff099b-a40d-4560-955c-3eb719db8b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.042 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[06627395-5acc-41c2-a819-f637d59844a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.045 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.046 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.048 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 NetworkManager[49805]: <info>  [1772275463.0495] manager: (tapb90f6c6c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Feb 28 10:44:23 compute-0 kernel: tapb90f6c6c-a0: entered promiscuous mode
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.052 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.054 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 ovn_controller[146846]: 2026-02-28T10:44:23Z|01559|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.059 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.060 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93557ebe-a0bb-4bdd-a699-19fc72cde917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.061 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/b90f6c6c-a634-4d08-98d7-fde34f18e37c.pid.haproxy
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.062 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'env', 'PROCESS_TAG=haproxy-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b90f6c6c-a634-4d08-98d7-fde34f18e37c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.065 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.198 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.1977963, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.199 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Started (Lifecycle Event)
Feb 28 10:44:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.230 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.242 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.242 243456 DEBUG nova.network.neutron [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.245 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.1981812, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.245 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Paused (Lifecycle Event)
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.272 243456 DEBUG oslo_concurrency.lockutils [req-df4e1d10-a16a-4d36-9a0f-429936d977b8 req-96da64da-5a16-42d5-aadb-b9eb22ac1e49 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.284 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.287 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.307 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:44:23 compute-0 podman[376446]: 2026-02-28 10:44:23.421246922 +0000 UTC m=+0.065250848 container create 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:44:23 compute-0 systemd[1]: Started libpod-conmon-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope.
Feb 28 10:44:23 compute-0 podman[376446]: 2026-02-28 10:44:23.383874734 +0000 UTC m=+0.027878730 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:44:23 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3be6acdeb1e35fcdffcfd7b1606173c8b51c2d19d95f4b060892ed6a5761cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:23 compute-0 podman[376446]: 2026-02-28 10:44:23.526013259 +0000 UTC m=+0.170017195 container init 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:44:23 compute-0 podman[376446]: 2026-02-28 10:44:23.536110925 +0000 UTC m=+0.180114831 container start 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:44:23 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : New worker (376468) forked
Feb 28 10:44:23 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : Loading success.
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.690 243456 DEBUG nova.compute.manager [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.692 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.693 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.693 243456 DEBUG oslo_concurrency.lockutils [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.694 243456 DEBUG nova.compute.manager [req-37510728-5000-46a2-b650-fd99e04b5077 req-a7b3fdb9-b167-45df-bb0f-b87de3a81236 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Processing event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.696 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.702 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275463.7017825, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.702 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Resumed (Lifecycle Event)
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.705 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.710 243456 INFO nova.virt.libvirt.driver [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance spawned successfully.
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.711 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.726 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.735 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.740 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.740 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.741 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.741 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.742 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.742 243456 DEBUG nova.virt.libvirt.driver [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.772 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.819 243456 INFO nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 7.55 seconds to spawn the instance on the hypervisor.
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.820 243456 DEBUG nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.849 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:44:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:23.852 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.895 243456 INFO nova.compute.manager [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 8.75 seconds to build instance.
Feb 28 10:44:23 compute-0 nova_compute[243452]: 2026-02-28 10:44:23.937 243456 DEBUG oslo_concurrency.lockutils [None req-dbb31b27-65ba-48b3-a2b7-58e837f2a119 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:24 compute-0 ceph-mon[76304]: pgmap v2401: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:44:24 compute-0 nova_compute[243452]: 2026-02-28 10:44:24.887 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:24 compute-0 sudo[376477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:44:24 compute-0 sudo[376477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:24 compute-0 sudo[376477]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:25 compute-0 sudo[376502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:44:25 compute-0 sudo[376502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 10:44:25 compute-0 sudo[376502]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:44:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:44:25 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:44:25 compute-0 sudo[376559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:44:25 compute-0 sudo[376559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:25 compute-0 sudo[376559]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:25 compute-0 sudo[376584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:44:25 compute-0 sudo[376584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.780 243456 DEBUG nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.781 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.782 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 DEBUG oslo_concurrency.lockutils [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 DEBUG nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:44:25 compute-0 nova_compute[243452]: 2026-02-28 10:44:25.783 243456 WARNING nova.compute.manager [req-bb71fe7c-213b-4351-a56f-8d4343562fb5 req-ca6c275d-e044-4169-99ba-ec76db68be83 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received unexpected event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with vm_state active and task_state None.
Feb 28 10:44:25 compute-0 podman[376620]: 2026-02-28 10:44:25.995596058 +0000 UTC m=+0.047441804 container create e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:44:26 compute-0 systemd[1]: Started libpod-conmon-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope.
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:25.969671164 +0000 UTC m=+0.021516890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:26.086984916 +0000 UTC m=+0.138830642 container init e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:26.094875109 +0000 UTC m=+0.146720825 container start e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:26.098246115 +0000 UTC m=+0.150091861 container attach e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:44:26 compute-0 tender_lichterman[376637]: 167 167
Feb 28 10:44:26 compute-0 systemd[1]: libpod-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope: Deactivated successfully.
Feb 28 10:44:26 compute-0 conmon[376637]: conmon e2d546d6efc6b6b709b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope/container/memory.events
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:26.100510749 +0000 UTC m=+0.152356475 container died e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-044315fac50dbe83a6562f0bafe791c7287df28ec14ee888fdaca464e7f9a7b5-merged.mount: Deactivated successfully.
Feb 28 10:44:26 compute-0 podman[376620]: 2026-02-28 10:44:26.142535959 +0000 UTC m=+0.194381665 container remove e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lichterman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:44:26 compute-0 systemd[1]: libpod-conmon-e2d546d6efc6b6b709b02037fcd3c95b59cb6a675dc0ed6b97722bea0e95d9f8.scope: Deactivated successfully.
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.27183648 +0000 UTC m=+0.039430917 container create 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:44:26 compute-0 systemd[1]: Started libpod-conmon-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope.
Feb 28 10:44:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.251238337 +0000 UTC m=+0.018832794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.370133052 +0000 UTC m=+0.137727489 container init 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.377308596 +0000 UTC m=+0.144903033 container start 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.381399982 +0000 UTC m=+0.148994439 container attach 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:44:26 compute-0 ceph-mon[76304]: pgmap v2402: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 28 10:44:26 compute-0 reverent_hugle[376678]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:44:26 compute-0 reverent_hugle[376678]: --> All data devices are unavailable
Feb 28 10:44:26 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:26.855 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:26 compute-0 systemd[1]: libpod-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope: Deactivated successfully.
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.905504852 +0000 UTC m=+0.673099289 container died 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:44:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a47792399e767409489ff3c279915da6985cc90071d831f00fdb11883e2f9b6-merged.mount: Deactivated successfully.
Feb 28 10:44:26 compute-0 podman[376661]: 2026-02-28 10:44:26.946977577 +0000 UTC m=+0.714572014 container remove 8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:44:26 compute-0 systemd[1]: libpod-conmon-8021004dc7622038460b2c43350aef9eb897259e12bb20b6cef077e9d5091201.scope: Deactivated successfully.
Feb 28 10:44:26 compute-0 sudo[376584]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:26 compute-0 nova_compute[243452]: 2026-02-28 10:44:26.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:27 compute-0 sudo[376711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:44:27 compute-0 sudo[376711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:27 compute-0 sudo[376711]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:27 compute-0 sudo[376736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:44:27 compute-0 sudo[376736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Feb 28 10:44:27 compute-0 ovn_controller[146846]: 2026-02-28T10:44:27Z|01560|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 10:44:27 compute-0 NetworkManager[49805]: <info>  [1772275467.3167] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Feb 28 10:44:27 compute-0 NetworkManager[49805]: <info>  [1772275467.3189] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.315 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:27 compute-0 ovn_controller[146846]: 2026-02-28T10:44:27Z|01561|binding|INFO|Releasing lport 1a8a9fba-10a8-48af-ae28-f3be3422056a from this chassis (sb_readonly=0)
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.338 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.344 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.418288872 +0000 UTC m=+0.056776588 container create e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:44:27 compute-0 systemd[1]: Started libpod-conmon-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope.
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.390185086 +0000 UTC m=+0.028672842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.51109867 +0000 UTC m=+0.149586346 container init e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.518178231 +0000 UTC m=+0.156665917 container start e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.52167867 +0000 UTC m=+0.160166356 container attach e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:44:27 compute-0 jolly_clarke[376788]: 167 167
Feb 28 10:44:27 compute-0 systemd[1]: libpod-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope: Deactivated successfully.
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.523329477 +0000 UTC m=+0.161817183 container died e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:44:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9dd3166ea46e5377c405198e72c6cf7df57b4c877c44cc8b3f2060210885c02-merged.mount: Deactivated successfully.
Feb 28 10:44:27 compute-0 podman[376772]: 2026-02-28 10:44:27.576020329 +0000 UTC m=+0.214508045 container remove e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_clarke, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:44:27 compute-0 systemd[1]: libpod-conmon-e334eafec96f2fef2c7e3aae730dffab7e44cd2e30786ce4033a2e68858efedb.scope: Deactivated successfully.
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.666 243456 DEBUG nova.compute.manager [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.668 243456 DEBUG nova.compute.manager [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.668 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.669 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:27 compute-0 nova_compute[243452]: 2026-02-28 10:44:27.669 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:44:27 compute-0 podman[376811]: 2026-02-28 10:44:27.771208466 +0000 UTC m=+0.041368213 container create 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:44:27 compute-0 systemd[1]: Started libpod-conmon-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope.
Feb 28 10:44:27 compute-0 podman[376811]: 2026-02-28 10:44:27.753124054 +0000 UTC m=+0.023283801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:27 compute-0 podman[376811]: 2026-02-28 10:44:27.897228144 +0000 UTC m=+0.167387931 container init 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:44:27 compute-0 podman[376811]: 2026-02-28 10:44:27.903564823 +0000 UTC m=+0.173724540 container start 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:44:27 compute-0 podman[376811]: 2026-02-28 10:44:27.906882197 +0000 UTC m=+0.177041994 container attach 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:44:28 compute-0 suspicious_saha[376828]: {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     "0": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "devices": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "/dev/loop3"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             ],
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_name": "ceph_lv0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_size": "21470642176",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "name": "ceph_lv0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "tags": {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_name": "ceph",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.crush_device_class": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.encrypted": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.objectstore": "bluestore",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_id": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.vdo": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.with_tpm": "0"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             },
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "vg_name": "ceph_vg0"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         }
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     ],
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     "1": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "devices": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "/dev/loop4"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             ],
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_name": "ceph_lv1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_size": "21470642176",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "name": "ceph_lv1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "tags": {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_name": "ceph",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.crush_device_class": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.encrypted": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.objectstore": "bluestore",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_id": "1",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.vdo": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.with_tpm": "0"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             },
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "vg_name": "ceph_vg1"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         }
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     ],
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     "2": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "devices": [
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "/dev/loop5"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             ],
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_name": "ceph_lv2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_size": "21470642176",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "name": "ceph_lv2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "tags": {
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.cluster_name": "ceph",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.crush_device_class": "",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.encrypted": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.objectstore": "bluestore",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osd_id": "2",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.vdo": "0",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:                 "ceph.with_tpm": "0"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             },
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "type": "block",
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:             "vg_name": "ceph_vg2"
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:         }
Feb 28 10:44:28 compute-0 suspicious_saha[376828]:     ]
Feb 28 10:44:28 compute-0 suspicious_saha[376828]: }
Feb 28 10:44:28 compute-0 systemd[1]: libpod-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope: Deactivated successfully.
Feb 28 10:44:28 compute-0 podman[376811]: 2026-02-28 10:44:28.248371137 +0000 UTC m=+0.518530944 container died 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d03969ce95156c5e5bf3dc290ce1d4d025a836ce85336020e2e44a63662ed59-merged.mount: Deactivated successfully.
Feb 28 10:44:28 compute-0 podman[376811]: 2026-02-28 10:44:28.302461579 +0000 UTC m=+0.572621326 container remove 1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:44:28 compute-0 systemd[1]: libpod-conmon-1b57e1125c1b6c7da8b5a5ce305f80fe44a4afd32916dae091ec976f34363a87.scope: Deactivated successfully.
Feb 28 10:44:28 compute-0 sudo[376736]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:28 compute-0 sudo[376850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:44:28 compute-0 sudo[376850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:28 compute-0 sudo[376850]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:28 compute-0 sudo[376875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:44:28 compute-0 sudo[376875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:28 compute-0 ceph-mon[76304]: pgmap v2403: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.811570585 +0000 UTC m=+0.048871855 container create 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:44:28 compute-0 systemd[1]: Started libpod-conmon-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope.
Feb 28 10:44:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.786943367 +0000 UTC m=+0.024244687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.887245667 +0000 UTC m=+0.124546917 container init 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:44:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:44:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s
                                           Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.897055475 +0000 UTC m=+0.134356745 container start 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:44:28 compute-0 priceless_rosalind[376929]: 167 167
Feb 28 10:44:28 compute-0 systemd[1]: libpod-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope: Deactivated successfully.
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.901632285 +0000 UTC m=+0.138933535 container attach 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.902396306 +0000 UTC m=+0.139697546 container died 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:44:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c9c0a10b1aa4e04857c24f3d7d692e43181c4da3b8d87dc2f6d9cafd35dec1-merged.mount: Deactivated successfully.
Feb 28 10:44:28 compute-0 podman[376913]: 2026-02-28 10:44:28.934732892 +0000 UTC m=+0.172034122 container remove 43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:44:28 compute-0 systemd[1]: libpod-conmon-43a3c1f747ba2d5ba3c207ac3988a26b0734e9f89e05b63cce159187a755fc54.scope: Deactivated successfully.
Feb 28 10:44:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:29 compute-0 podman[376953]: 2026-02-28 10:44:29.123558389 +0000 UTC m=+0.065675401 container create 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:44:29 compute-0 podman[376953]: 2026-02-28 10:44:29.088368962 +0000 UTC m=+0.030485964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:44:29 compute-0 systemd[1]: Started libpod-conmon-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope.
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:44:29
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:44:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 10:44:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:44:29 compute-0 podman[376953]: 2026-02-28 10:44:29.238209965 +0000 UTC m=+0.180326977 container init 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:44:29 compute-0 podman[376953]: 2026-02-28 10:44:29.244061501 +0000 UTC m=+0.186178463 container start 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:44:29 compute-0 podman[376953]: 2026-02-28 10:44:29.247381885 +0000 UTC m=+0.189498837 container attach 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:44:29 compute-0 nova_compute[243452]: 2026-02-28 10:44:29.330 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:44:29 compute-0 nova_compute[243452]: 2026-02-28 10:44:29.334 243456 DEBUG nova.network.neutron [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:29 compute-0 nova_compute[243452]: 2026-02-28 10:44:29.397 243456 DEBUG oslo_concurrency.lockutils [req-e937f3ab-67bf-4e24-9bf4-eaada657d548 req-d16eaf1e-b465-49ea-be61-2b64ffc1fe57 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:29 compute-0 nova_compute[243452]: 2026-02-28 10:44:29.919 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:29 compute-0 lvm[377049]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:44:30 compute-0 lvm[377049]: VG ceph_vg1 finished
Feb 28 10:44:30 compute-0 lvm[377047]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:44:30 compute-0 lvm[377047]: VG ceph_vg0 finished
Feb 28 10:44:30 compute-0 lvm[377052]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:44:30 compute-0 lvm[377052]: VG ceph_vg2 finished
Feb 28 10:44:30 compute-0 lvm[377055]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:44:30 compute-0 lvm[377055]: VG ceph_vg0 finished
Feb 28 10:44:30 compute-0 dazzling_taussig[376970]: {}
Feb 28 10:44:30 compute-0 systemd[1]: libpod-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Deactivated successfully.
Feb 28 10:44:30 compute-0 podman[376953]: 2026-02-28 10:44:30.146290018 +0000 UTC m=+1.088406950 container died 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 28 10:44:30 compute-0 systemd[1]: libpod-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Consumed 1.234s CPU time.
Feb 28 10:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b20911904dc4bd737523268be41edff916d1c814c88e7c41d1c9d42c9b6b56fe-merged.mount: Deactivated successfully.
Feb 28 10:44:30 compute-0 podman[376953]: 2026-02-28 10:44:30.227020754 +0000 UTC m=+1.169137676 container remove 17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:44:30 compute-0 systemd[1]: libpod-conmon-17ad47dc15b6458e64350b61209a9a907f9ab4d16981bf563ff947f2191cba69.scope: Deactivated successfully.
Feb 28 10:44:30 compute-0 sudo[376875]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:44:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:44:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:44:30 compute-0 sudo[377068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:44:30 compute-0 sshd-session[377050]: Invalid user sol from 45.148.10.240 port 47792
Feb 28 10:44:30 compute-0 sudo[377068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:44:30 compute-0 sudo[377068]: pam_unix(sudo:session): session closed for user root
Feb 28 10:44:30 compute-0 sshd-session[377050]: Connection closed by invalid user sol 45.148.10.240 port 47792 [preauth]
Feb 28 10:44:30 compute-0 ceph-mon[76304]: pgmap v2404: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 28 10:44:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:44:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:44:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:44:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Feb 28 10:44:31 compute-0 nova_compute[243452]: 2026-02-28 10:44:31.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:32 compute-0 ceph-mon[76304]: pgmap v2405: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Feb 28 10:44:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:44:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:34 compute-0 ceph-mon[76304]: pgmap v2406: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:44:34 compute-0 nova_compute[243452]: 2026-02-28 10:44:34.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 700 KiB/s wr, 85 op/s
Feb 28 10:44:35 compute-0 ovn_controller[146846]: 2026-02-28T10:44:35Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:44:88 10.100.0.5
Feb 28 10:44:35 compute-0 ovn_controller[146846]: 2026-02-28T10:44:35Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:44:88 10.100.0.5
Feb 28 10:44:36 compute-0 ceph-mon[76304]: pgmap v2407: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 700 KiB/s wr, 85 op/s
Feb 28 10:44:37 compute-0 nova_compute[243452]: 2026-02-28 10:44:37.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Feb 28 10:44:38 compute-0 ceph-mon[76304]: pgmap v2408: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Feb 28 10:44:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:44:39 compute-0 nova_compute[243452]: 2026-02-28 10:44:39.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:40 compute-0 ceph-mon[76304]: pgmap v2409: 305 pgs: 305 active+clean; 231 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007711872734998348 of space, bias 1.0, pg target 0.23135618204995043 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939207792546033 of space, bias 1.0, pg target 0.748176233776381 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.270082191943719e-07 of space, bias 4.0, pg target 0.0008724098630332462 quantized to 16 (current 16)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:44:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:44:42 compute-0 nova_compute[243452]: 2026-02-28 10:44:42.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:42 compute-0 ceph-mon[76304]: pgmap v2410: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Feb 28 10:44:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Feb 28 10:44:43 compute-0 nova_compute[243452]: 2026-02-28 10:44:43.942 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:44 compute-0 ceph-mon[76304]: pgmap v2411: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Feb 28 10:44:44 compute-0 nova_compute[243452]: 2026-02-28 10:44:44.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 28 10:44:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:44:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:44:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:44:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.678 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.679 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.701 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:44:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:44:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4010870092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.798 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.799 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.809 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.810 243456 INFO nova.compute.claims [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:44:45 compute-0 nova_compute[243452]: 2026-02-28 10:44:45.940 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:44:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:44:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129897488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.508 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.515 243456 DEBUG nova.compute.provider_tree [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.535 243456 DEBUG nova.scheduler.client.report [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.563 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.564 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.623 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.624 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.644 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.669 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:44:46 compute-0 ceph-mon[76304]: pgmap v2412: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 28 10:44:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/129897488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.769 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.771 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.772 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating image(s)
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.805 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.837 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.870 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.875 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.914 243456 DEBUG nova.policy [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.968 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.969 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.970 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:46 compute-0 nova_compute[243452]: 2026-02-28 10:44:46.971 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.017 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.023 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.073 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.274 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.346 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.437 243456 DEBUG nova.objects.instance [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.453 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.453 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Ensure instance console log exists: /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.454 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.455 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:47 compute-0 nova_compute[243452]: 2026-02-28 10:44:47.455 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:48 compute-0 nova_compute[243452]: 2026-02-28 10:44:48.385 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Successfully created port: 138ee9e6-821d-4256-8858-52f9e81f4c8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:44:48 compute-0 ceph-mon[76304]: pgmap v2413: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Feb 28 10:44:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.051493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489051588, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1847, "num_deletes": 256, "total_data_size": 2965887, "memory_usage": 3010448, "flush_reason": "Manual Compaction"}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489063501, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2913343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49717, "largest_seqno": 51563, "table_properties": {"data_size": 2904958, "index_size": 5135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17169, "raw_average_key_size": 19, "raw_value_size": 2888141, "raw_average_value_size": 3335, "num_data_blocks": 228, "num_entries": 866, "num_filter_entries": 866, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275294, "oldest_key_time": 1772275294, "file_creation_time": 1772275489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 12051 microseconds, and 6043 cpu microseconds.
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.063563) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2913343 bytes OK
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.063586) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065888) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065906) EVENT_LOG_v1 {"time_micros": 1772275489065901, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.065932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2958006, prev total WAL file size 2958006, number of live WAL files 2.
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.066797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303035' seq:72057594037927935, type:22 .. '6C6F676D0032323537' seq:0, type:0; will stop at (end)
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2845KB)], [116(8173KB)]
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489066853, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11282962, "oldest_snapshot_seqno": -1}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7353 keys, 11166748 bytes, temperature: kUnknown
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489122146, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11166748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11116875, "index_size": 30369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 190754, "raw_average_key_size": 25, "raw_value_size": 10985106, "raw_average_value_size": 1493, "num_data_blocks": 1196, "num_entries": 7353, "num_filter_entries": 7353, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.122720) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11166748 bytes
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.124037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 201.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.8) OK, records in: 7877, records dropped: 524 output_compression: NoCompression
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.124059) EVENT_LOG_v1 {"time_micros": 1772275489124048, "job": 70, "event": "compaction_finished", "compaction_time_micros": 55378, "compaction_time_cpu_micros": 37725, "output_level": 6, "num_output_files": 1, "total_output_size": 11166748, "num_input_records": 7877, "num_output_records": 7353, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489124644, "job": 70, "event": "table_file_deletion", "file_number": 118}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275489125622, "job": 70, "event": "table_file_deletion", "file_number": 116}
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.066622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:44:49.125874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.177 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Successfully updated port: 138ee9e6-821d-4256-8858-52f9e81f4c8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.191 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:44:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 247 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG nova.compute.manager [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG nova.compute.manager [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.292 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:49 compute-0 nova_compute[243452]: 2026-02-28 10:44:49.326 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:50 compute-0 ceph-mon[76304]: pgmap v2414: 305 pgs: 305 active+clean; 247 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.402 243456 DEBUG nova.network.neutron [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.436 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance network_info: |[{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.437 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.440 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start _get_guest_xml network_info=[{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.445 243456 WARNING nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.454 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.454 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.459 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.libvirt.host [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.460 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.461 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.462 243456 DEBUG nova.virt.hardware [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:44:50 compute-0 nova_compute[243452]: 2026-02-28 10:44:50.465 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:44:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776011761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.044 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3776011761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.073 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.079 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:44:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480743214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.987 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.908s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.989 243456 DEBUG nova.virt.libvirt.vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:46Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.990 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.991 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:44:51 compute-0 nova_compute[243452]: 2026-02-28 10:44:51.992 243456 DEBUG nova.objects.instance [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.005 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <uuid>795552e4-ec78-4c39-a0a9-60c83ca0f0a2</uuid>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <name>instance-00000095</name>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1587775994</nova:name>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:44:50</nova:creationTime>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <nova:port uuid="138ee9e6-821d-4256-8858-52f9e81f4c8b">
Feb 28 10:44:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe4a:d253" ipVersion="6"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4a:d253" ipVersion="6"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <system>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="serial">795552e4-ec78-4c39-a0a9-60c83ca0f0a2</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="uuid">795552e4-ec78-4c39-a0a9-60c83ca0f0a2</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </system>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <os>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </os>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <features>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </features>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk">
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config">
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </source>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:44:52 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:4a:d2:53"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <target dev="tap138ee9e6-82"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/console.log" append="off"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <video>
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </video>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:44:52 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:44:52 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:44:52 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:44:52 compute-0 nova_compute[243452]: </domain>
Feb 28 10:44:52 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Preparing to wait for external event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.006 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.007 243456 DEBUG nova.virt.libvirt.vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:44:46Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.007 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.010 243456 DEBUG nova.network.os_vif_util [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.011 243456 DEBUG os_vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.011 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.012 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.012 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.016 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.016 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap138ee9e6-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.017 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap138ee9e6-82, col_values=(('external_ids', {'iface-id': '138ee9e6-821d-4256-8858-52f9e81f4c8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:d2:53', 'vm-uuid': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.018 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 NetworkManager[49805]: <info>  [1772275492.0197] manager: (tap138ee9e6-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.019 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.027 243456 INFO os_vif [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82')
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.076 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:4a:d2:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.077 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Using config drive
Feb 28 10:44:52 compute-0 ceph-mon[76304]: pgmap v2415: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 28 10:44:52 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/480743214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.103 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.504 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Creating config drive at /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.508 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcv5opa6m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.552 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.554 243456 DEBUG nova.network.neutron [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.581 243456 DEBUG oslo_concurrency.lockutils [req-1c61ac87-dbd8-4ac8-88ff-3231eea7fe4f req-db380281-9894-4a44-94ef-2bf6f041753f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.655 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcv5opa6m" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.689 243456 DEBUG nova.storage.rbd_utils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.693 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.840 243456 DEBUG oslo_concurrency.processutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config 795552e4-ec78-4c39-a0a9-60c83ca0f0a2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.841 243456 INFO nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deleting local config drive /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2/disk.config because it was imported into RBD.
Feb 28 10:44:52 compute-0 kernel: tap138ee9e6-82: entered promiscuous mode
Feb 28 10:44:52 compute-0 NetworkManager[49805]: <info>  [1772275492.8925] manager: (tap138ee9e6-82): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Feb 28 10:44:52 compute-0 ovn_controller[146846]: 2026-02-28T10:44:52Z|01562|binding|INFO|Claiming lport 138ee9e6-821d-4256-8858-52f9e81f4c8b for this chassis.
Feb 28 10:44:52 compute-0 ovn_controller[146846]: 2026-02-28T10:44:52Z|01563|binding|INFO|138ee9e6-821d-4256-8858-52f9e81f4c8b: Claiming fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.904 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], port_security=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe4a:d253/64 2001:db8::f816:3eff:fe4a:d253/64', 'neutron:device_id': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=138ee9e6-821d-4256-8858-52f9e81f4c8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:44:52 compute-0 ovn_controller[146846]: 2026-02-28T10:44:52Z|01564|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b ovn-installed in OVS
Feb 28 10:44:52 compute-0 ovn_controller[146846]: 2026-02-28T10:44:52Z|01565|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b up in Southbound
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.905 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 138ee9e6-821d-4256-8858-52f9e81f4c8b in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c bound to our chassis
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.906 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 10:44:52 compute-0 nova_compute[243452]: 2026-02-28 10:44:52.907 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[71b4aa1b-8c3c-4d45-bb25-f0d8cf5790c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:52 compute-0 systemd-udevd[377432]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:44:52 compute-0 systemd-machined[209480]: New machine qemu-182-instance-00000095.
Feb 28 10:44:52 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000095.
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[0215a68f-b7a6-4436-9710-0fa84ede6037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:52 compute-0 NetworkManager[49805]: <info>  [1772275492.9677] device (tap138ee9e6-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:44:52 compute-0 NetworkManager[49805]: <info>  [1772275492.9691] device (tap138ee9e6-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:44:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:52.970 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[c805ed40-0a01-4815-8ad9-d6bc2817a554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.011 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[08f5b622-e55e-4031-b04d-db2e83612ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.030 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e095f18-60c4-41a8-93b7-956c328b5ed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377467, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:53 compute-0 podman[377416]: 2026-02-28 10:44:53.041390162 +0000 UTC m=+0.100397864 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.049 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1657135c-b8a5-4b7f-a3e2-ffafdaf54a12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697030, 'tstamp': 697030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377472, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697032, 'tstamp': 697032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377472, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.053 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.056 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:44:53 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:53.057 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:44:53 compute-0 podman[377414]: 2026-02-28 10:44:53.060605556 +0000 UTC m=+0.122909542 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:44:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.338 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.388 243456 DEBUG nova.compute.manager [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.389 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.390 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.390 243456 DEBUG oslo_concurrency.lockutils [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.391 243456 DEBUG nova.compute.manager [req-be7c3d40-0888-4bfb-8ba0-d260426bdaef req-16404c3a-d3c6-47b4-87de-398f7a79b5c7 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Processing event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.393 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.3932407, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.394 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Started (Lifecycle Event)
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.398 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.403 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.412 243456 INFO nova.virt.libvirt.driver [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance spawned successfully.
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.413 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.417 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.421 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.441 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.441 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.442 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.443 243456 DEBUG nova.virt.libvirt.driver [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.3978465, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.447 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Paused (Lifecycle Event)
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.473 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.477 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.478 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.481 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275493.4019325, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.481 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Resumed (Lifecycle Event)
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.510 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.519 243456 INFO nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 6.75 seconds to spawn the instance on the hypervisor.
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.520 243456 DEBUG nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.521 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.536 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.581 243456 INFO nova.compute.manager [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 7.81 seconds to build instance.
Feb 28 10:44:53 compute-0 nova_compute[243452]: 2026-02-28 10:44:53.600 243456 DEBUG oslo_concurrency.lockutils [None req-2ee18396-befb-48e8-b6b1-01d41204b29a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:54 compute-0 ceph-mon[76304]: pgmap v2416: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.043 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.492 243456 DEBUG nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.493 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG oslo_concurrency.lockutils [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.494 243456 DEBUG nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:44:55 compute-0 nova_compute[243452]: 2026-02-28 10:44:55.495 243456 WARNING nova.compute.manager [req-983f79c3-9e4b-45b5-98e3-e789594e55f0 req-03f8a8ba-8230-4688-8c8e-a444bf799b6a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received unexpected event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with vm_state active and task_state None.
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.118 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.135 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.135 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:44:56 compute-0 ceph-mon[76304]: pgmap v2417: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.656 243456 DEBUG nova.compute.manager [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.657 243456 DEBUG nova.compute.manager [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.658 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.658 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:44:56 compute-0 nova_compute[243452]: 2026-02-28 10:44:56.659 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:44:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.338 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.339 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.339 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.340 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.884 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.885 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:44:57.886 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:44:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:44:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657388479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:57 compute-0 nova_compute[243452]: 2026-02-28 10:44:57.920 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:58 compute-0 ceph-mon[76304]: pgmap v2418: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Feb 28 10:44:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/657388479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:58 compute-0 nova_compute[243452]: 2026-02-28 10:44:58.878 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:44:58 compute-0 nova_compute[243452]: 2026-02-28 10:44:58.879 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:44:58 compute-0 nova_compute[243452]: 2026-02-28 10:44:58.884 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:44:58 compute-0 nova_compute[243452]: 2026-02-28 10:44:58.884 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:44:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.054 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3250MB free_disk=59.92095302604139GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.055 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.127 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.128 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.129 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.129 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.149 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.164 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.165 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.179 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.209 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:44:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.266 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.713 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.715 243456 DEBUG nova.network.neutron [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.743 243456 DEBUG oslo_concurrency.lockutils [req-ef669ee8-47fd-4308-997b-cafc0d5afa16 req-ba44e30f-af88-443e-821b-1ad402de5779 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:44:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:44:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185575056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.779 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.785 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.800 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.820 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:44:59 compute-0 nova_compute[243452]: 2026-02-28 10:44:59.821 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:00 compute-0 nova_compute[243452]: 2026-02-28 10:45:00.045 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:00 compute-0 ceph-mon[76304]: pgmap v2419: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 28 10:45:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1185575056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Feb 28 10:45:02 compute-0 nova_compute[243452]: 2026-02-28 10:45:02.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:02 compute-0 ceph-mon[76304]: pgmap v2420: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Feb 28 10:45:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:45:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:04 compute-0 ceph-mon[76304]: pgmap v2421: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:45:05 compute-0 nova_compute[243452]: 2026-02-28 10:45:05.047 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 283 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 575 KiB/s wr, 76 op/s
Feb 28 10:45:05 compute-0 ovn_controller[146846]: 2026-02-28T10:45:05Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:d2:53 10.100.0.6
Feb 28 10:45:05 compute-0 ovn_controller[146846]: 2026-02-28T10:45:05Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:d2:53 10.100.0.6
Feb 28 10:45:06 compute-0 ceph-mon[76304]: pgmap v2422: 305 pgs: 305 active+clean; 283 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 575 KiB/s wr, 76 op/s
Feb 28 10:45:07 compute-0 nova_compute[243452]: 2026-02-28 10:45:07.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 286 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 797 KiB/s wr, 46 op/s
Feb 28 10:45:08 compute-0 ceph-mon[76304]: pgmap v2423: 305 pgs: 305 active+clean; 286 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 797 KiB/s wr, 46 op/s
Feb 28 10:45:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 1.9 MiB/s wr, 68 op/s
Feb 28 10:45:10 compute-0 nova_compute[243452]: 2026-02-28 10:45:10.050 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:10 compute-0 ceph-mon[76304]: pgmap v2424: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 1.9 MiB/s wr, 68 op/s
Feb 28 10:45:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:45:12 compute-0 nova_compute[243452]: 2026-02-28 10:45:12.030 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:12 compute-0 ceph-mon[76304]: pgmap v2425: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 28 10:45:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:45:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:14 compute-0 ceph-mon[76304]: pgmap v2426: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:45:15 compute-0 nova_compute[243452]: 2026-02-28 10:45:15.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.368 243456 DEBUG nova.compute.manager [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.368 243456 DEBUG nova.compute.manager [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing instance network info cache due to event network-changed-138ee9e6-821d-4256-8858-52f9e81f4c8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.369 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.369 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.370 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Refreshing network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.420 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.421 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.422 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.422 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.423 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.425 243456 INFO nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Terminating instance
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.427 243456 DEBUG nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:45:16 compute-0 kernel: tap138ee9e6-82 (unregistering): left promiscuous mode
Feb 28 10:45:16 compute-0 NetworkManager[49805]: <info>  [1772275516.4782] device (tap138ee9e6-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:45:16 compute-0 ovn_controller[146846]: 2026-02-28T10:45:16Z|01566|binding|INFO|Releasing lport 138ee9e6-821d-4256-8858-52f9e81f4c8b from this chassis (sb_readonly=0)
Feb 28 10:45:16 compute-0 ovn_controller[146846]: 2026-02-28T10:45:16Z|01567|binding|INFO|Setting lport 138ee9e6-821d-4256-8858-52f9e81f4c8b down in Southbound
Feb 28 10:45:16 compute-0 ovn_controller[146846]: 2026-02-28T10:45:16Z|01568|binding|INFO|Removing iface tap138ee9e6-82 ovn-installed in OVS
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.490 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.492 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.497 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], port_security=['fa:16:3e:4a:d2:53 10.100.0.6 2001:db8:0:1:f816:3eff:fe4a:d253 2001:db8::f816:3eff:fe4a:d253'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe4a:d253/64 2001:db8::f816:3eff:fe4a:d253/64', 'neutron:device_id': '795552e4-ec78-4c39-a0a9-60c83ca0f0a2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=138ee9e6-821d-4256-8858-52f9e81f4c8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.501 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 138ee9e6-821d-4256-8858-52f9e81f4c8b in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c unbound from our chassis
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.501 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.504 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b90f6c6c-a634-4d08-98d7-fde34f18e37c
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.596 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7f83e5-6667-49e5-b308-418607eb477e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Deactivated successfully.
Feb 28 10:45:16 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000095.scope: Consumed 12.780s CPU time.
Feb 28 10:45:16 compute-0 systemd-machined[209480]: Machine qemu-182-instance-00000095 terminated.
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.632 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[f8715ed1-f6ca-43a8-94bf-cd8b529dcd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.638 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf11608-7c99-421c-adca-da8db5573fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 ceph-mon[76304]: pgmap v2427: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.655824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516655866, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 251, "total_data_size": 412687, "memory_usage": 421512, "flush_reason": "Manual Compaction"}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516660526, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 409031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51564, "largest_seqno": 52025, "table_properties": {"data_size": 406334, "index_size": 733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6290, "raw_average_key_size": 18, "raw_value_size": 401125, "raw_average_value_size": 1197, "num_data_blocks": 33, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275490, "oldest_key_time": 1772275490, "file_creation_time": 1772275516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4800 microseconds, and 1490 cpu microseconds.
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.660626) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 409031 bytes OK
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.660662) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663339) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663407) EVENT_LOG_v1 {"time_micros": 1772275516663394, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.663442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 409922, prev total WAL file size 409922, number of live WAL files 2.
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.664841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(399KB)], [119(10MB)]
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516664907, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11575779, "oldest_snapshot_seqno": -1}
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.664 243456 INFO nova.virt.libvirt.driver [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Instance destroyed successfully.
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.665 243456 DEBUG nova.objects.instance [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.675 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[67792a3e-ab0f-45c4-a639-27901ac438fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.677 243456 DEBUG nova.virt.libvirt.vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1587775994',display_name='tempest-TestGettingAddress-server-1587775994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1587775994',id=149,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:44:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i9i85993',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:44:53Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=795552e4-ec78-4c39-a0a9-60c83ca0f0a2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.677 243456 DEBUG nova.network.os_vif_util [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.678 243456 DEBUG nova.network.os_vif_util [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.678 243456 DEBUG os_vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.680 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.681 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap138ee9e6-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.687 243456 INFO os_vif [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d2:53,bridge_name='br-int',has_traffic_filtering=True,id=138ee9e6-821d-4256-8858-52f9e81f4c8b,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138ee9e6-82')
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe656c9-58c4-4ac9-847f-c9697cb9c69f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb90f6c6c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3c:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697021, 'reachable_time': 30198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377590, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.710 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5475b55f-89a2-418b-9ccb-dac1e312f2b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697030, 'tstamp': 697030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377598, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb90f6c6c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697032, 'tstamp': 697032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377598, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.712 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb90f6c6c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.715 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb90f6c6c-a0, col_values=(('external_ids', {'iface-id': '1a8a9fba-10a8-48af-ae28-f3be3422056a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:16.716 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7179 keys, 9868612 bytes, temperature: kUnknown
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516718881, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 9868612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9821005, "index_size": 28551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187825, "raw_average_key_size": 26, "raw_value_size": 9693358, "raw_average_value_size": 1350, "num_data_blocks": 1111, "num_entries": 7179, "num_filter_entries": 7179, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.719193) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 9868612 bytes
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.720562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.1 rd, 182.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(52.4) write-amplify(24.1) OK, records in: 7688, records dropped: 509 output_compression: NoCompression
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.720628) EVENT_LOG_v1 {"time_micros": 1772275516720617, "job": 72, "event": "compaction_finished", "compaction_time_micros": 54075, "compaction_time_cpu_micros": 27810, "output_level": 6, "num_output_files": 1, "total_output_size": 9868612, "num_input_records": 7688, "num_output_records": 7179, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516720943, "job": 72, "event": "table_file_deletion", "file_number": 121}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275516722559, "job": 72, "event": "table_file_deletion", "file_number": 119}
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.664724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:45:16.722731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.994 243456 INFO nova.virt.libvirt.driver [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deleting instance files /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_del
Feb 28 10:45:16 compute-0 nova_compute[243452]: 2026-02-28 10:45:16.995 243456 INFO nova.virt.libvirt.driver [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deletion of /var/lib/nova/instances/795552e4-ec78-4c39-a0a9-60c83ca0f0a2_del complete
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.143 243456 INFO nova.compute.manager [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 0.72 seconds to destroy the instance on the hypervisor.
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.145 243456 DEBUG oslo.service.loopingcall [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.146 243456 DEBUG nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.147 243456 DEBUG nova.network.neutron [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:45:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.554 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.555 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.556 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG oslo_concurrency.lockutils [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:45:17 compute-0 nova_compute[243452]: 2026-02-28 10:45:17.557 243456 DEBUG nova.compute.manager [req-308fcca6-9234-4d2c-a5e8-89decd12dce5 req-5d3a7f11-a4b0-4b9a-9cfc-2ec2134916a4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-unplugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:45:18 compute-0 ceph-mon[76304]: pgmap v2428: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Feb 28 10:45:18 compute-0 nova_compute[243452]: 2026-02-28 10:45:18.759 243456 DEBUG nova.network.neutron [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:45:18 compute-0 nova_compute[243452]: 2026-02-28 10:45:18.776 243456 INFO nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Took 1.63 seconds to deallocate network for instance.
Feb 28 10:45:18 compute-0 nova_compute[243452]: 2026-02-28 10:45:18.814 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:18 compute-0 nova_compute[243452]: 2026-02-28 10:45:18.815 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:18 compute-0 nova_compute[243452]: 2026-02-28 10:45:18.886 243456 DEBUG oslo_concurrency.processutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 10:45:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:45:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146506471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.467 243456 DEBUG oslo_concurrency.processutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.476 243456 DEBUG nova.compute.provider_tree [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.503 243456 DEBUG nova.scheduler.client.report [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.534 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.537 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updated VIF entry in instance network info cache for port 138ee9e6-821d-4256-8858-52f9e81f4c8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.537 243456 DEBUG nova.network.neutron [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [{"id": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "address": "fa:16:3e:4a:d2:53", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4a:d253", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138ee9e6-82", "ovs_interfaceid": "138ee9e6-821d-4256-8858-52f9e81f4c8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.560 243456 DEBUG oslo_concurrency.lockutils [req-b04ec359-2139-4f98-916c-1fe4d39ddc82 req-5e3bbac5-3c02-4f9f-b81f-17b7de09a73b 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-795552e4-ec78-4c39-a0a9-60c83ca0f0a2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.571 243456 INFO nova.scheduler.client.report [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.641 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.642 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.642 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.643 243456 DEBUG oslo_concurrency.lockutils [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.643 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] No waiting events found dispatching network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 WARNING nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received unexpected event network-vif-plugged-138ee9e6-821d-4256-8858-52f9e81f4c8b for instance with vm_state deleted and task_state None.
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Received event network-vif-deleted-138ee9e6-821d-4256-8858-52f9e81f4c8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.644 243456 INFO nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Neutron deleted interface 138ee9e6-821d-4256-8858-52f9e81f4c8b; detaching it from the instance and deleting it from the info cache
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.645 243456 DEBUG nova.network.neutron [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.651 243456 DEBUG oslo_concurrency.lockutils [None req-a1d1a6b7-8bc1-4571-b967-0c83ce0d818f be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "795552e4-ec78-4c39-a0a9-60c83ca0f0a2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/146506471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:19 compute-0 nova_compute[243452]: 2026-02-28 10:45:19.671 243456 DEBUG nova.compute.manager [req-b2b5fa43-a0f9-46c5-b5fa-b73b3231bc2a req-3e151b8c-8f9f-45b4-9c7c-c44ba1f2f724 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Detach interface failed, port_id=138ee9e6-821d-4256-8858-52f9e81f4c8b, reason: Instance 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:45:20 compute-0 nova_compute[243452]: 2026-02-28 10:45:20.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:20 compute-0 ceph-mon[76304]: pgmap v2429: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 10:45:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 244 KiB/s wr, 89 op/s
Feb 28 10:45:21 compute-0 nova_compute[243452]: 2026-02-28 10:45:21.684 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.083 243456 DEBUG nova.compute.manager [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.083 243456 DEBUG nova.compute.manager [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing instance network info cache due to event network-changed-e109a5e0-a347-4744-84e4-96a126379d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.084 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Refreshing network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.540 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.541 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.542 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.543 243456 INFO nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Terminating instance
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.544 243456 DEBUG nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:45:22 compute-0 kernel: tape109a5e0-a3 (unregistering): left promiscuous mode
Feb 28 10:45:22 compute-0 NetworkManager[49805]: <info>  [1772275522.6037] device (tape109a5e0-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.616 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 ovn_controller[146846]: 2026-02-28T10:45:22Z|01569|binding|INFO|Releasing lport e109a5e0-a347-4744-84e4-96a126379d1c from this chassis (sb_readonly=0)
Feb 28 10:45:22 compute-0 ovn_controller[146846]: 2026-02-28T10:45:22Z|01570|binding|INFO|Setting lport e109a5e0-a347-4744-84e4-96a126379d1c down in Southbound
Feb 28 10:45:22 compute-0 ovn_controller[146846]: 2026-02-28T10:45:22Z|01571|binding|INFO|Removing iface tape109a5e0-a3 ovn-installed in OVS
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.631 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Deactivated successfully.
Feb 28 10:45:22 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Consumed 14.179s CPU time.
Feb 28 10:45:22 compute-0 systemd-machined[209480]: Machine qemu-181-instance-00000094 terminated.
Feb 28 10:45:22 compute-0 ceph-mon[76304]: pgmap v2430: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 244 KiB/s wr, 89 op/s
Feb 28 10:45:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.722 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], port_security=['fa:16:3e:7a:44:88 10.100.0.5 2001:db8:0:1:f816:3eff:fe7a:4488 2001:db8::f816:3eff:fe7a:4488'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fe7a:4488/64 2001:db8::f816:3eff:fe7a:4488/64', 'neutron:device_id': 'c45d561e-3581-4ca2-a8b7-b56cc6ce5b43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b062de3-7eb9-4b80-8825-2f32e8eb2071', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c946f0a3-f57c-4aff-b2b1-40e06de1a163, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=e109a5e0-a347-4744-84e4-96a126379d1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:45:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.724 156681 INFO neutron.agent.ovn.metadata.agent [-] Port e109a5e0-a347-4744-84e4-96a126379d1c in datapath b90f6c6c-a634-4d08-98d7-fde34f18e37c unbound from our chassis
Feb 28 10:45:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.725 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b90f6c6c-a634-4d08-98d7-fde34f18e37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:45:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.726 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[de509ebf-2c90-4a1c-80af-aa09b5e39916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:22 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:22.727 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c namespace which is not needed anymore
Feb 28 10:45:22 compute-0 NetworkManager[49805]: <info>  [1772275522.7698] manager: (tape109a5e0-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.772 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.795 243456 INFO nova.virt.libvirt.driver [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance destroyed successfully.
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.796 243456 DEBUG nova.objects.instance [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : haproxy version is 2.8.14-c23fe91
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [NOTICE]   (376466) : path to executable is /usr/sbin/haproxy
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : Exiting Master process...
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : Exiting Master process...
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [ALERT]    (376466) : Current worker (376468) exited with code 143 (Terminated)
Feb 28 10:45:22 compute-0 neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c[376462]: [WARNING]  (376466) : All workers exited. Exiting... (0)
Feb 28 10:45:22 compute-0 systemd[1]: libpod-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope: Deactivated successfully.
Feb 28 10:45:22 compute-0 podman[377668]: 2026-02-28 10:45:22.911271321 +0000 UTC m=+0.068840461 container died 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.941 243456 DEBUG nova.virt.libvirt.vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-82849746',display_name='tempest-TestGettingAddress-server-82849746',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-82849746',id=148,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjumutHDOiCCJO6cfUL6ugRSFzHXiNGug9dJkwM0CV8zL85MVF3a1LvAcIuG15mWPEoaXuen5D3Ktxjz9xOw1dfLwx0HcN7GW0eMtopxI1h0bccpxE9hlkunZoyhuwL5A==',key_name='tempest-TestGettingAddress-1355395394',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:44:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-i18076q2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:44:23Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=c45d561e-3581-4ca2-a8b7-b56cc6ce5b43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.942 243456 DEBUG nova.network.os_vif_util [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.943 243456 DEBUG nova.network.os_vif_util [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.944 243456 DEBUG os_vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.947 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape109a5e0-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129-userdata-shm.mount: Deactivated successfully.
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.950 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b3be6acdeb1e35fcdffcfd7b1606173c8b51c2d19d95f4b060892ed6a5761cd-merged.mount: Deactivated successfully.
Feb 28 10:45:22 compute-0 nova_compute[243452]: 2026-02-28 10:45:22.954 243456 INFO os_vif [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:44:88,bridge_name='br-int',has_traffic_filtering=True,id=e109a5e0-a347-4744-84e4-96a126379d1c,network=Network(b90f6c6c-a634-4d08-98d7-fde34f18e37c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape109a5e0-a3')
Feb 28 10:45:22 compute-0 podman[377668]: 2026-02-28 10:45:22.973244026 +0000 UTC m=+0.130813176 container cleanup 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:45:22 compute-0 systemd[1]: libpod-conmon-5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129.scope: Deactivated successfully.
Feb 28 10:45:23 compute-0 podman[377712]: 2026-02-28 10:45:23.047513169 +0000 UTC m=+0.049072661 container remove 5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.054 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7e410c-12e4-46e8-a5da-e8cd21d1efbf]: (4, ('Sat Feb 28 10:45:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c (5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129)\n5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129\nSat Feb 28 10:45:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c (5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129)\n5f6a0c8709ef96f2099d013cfe5e3f08f8d63aece04e8d7220bc23ab3d440129\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.057 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[34ae9796-fcb6-4bfe-9f24-b6984f7df591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.058 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb90f6c6c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.059 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:23 compute-0 kernel: tapb90f6c6c-a0: left promiscuous mode
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.069 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[77e6f371-f125-4f70-8703-080805818469]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.085 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3360ec23-991d-4949-be0b-564f79c7ce2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.087 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1c564e3e-eda8-4c5e-80fd-cad790456073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f51f55-9270-46c8-9a29-55546b0d99d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697014, 'reachable_time': 32071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377737, 'error': None, 'target': 'ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 systemd[1]: run-netns-ovnmeta\x2db90f6c6c\x2da634\x2d4d08\x2d98d7\x2dfde34f18e37c.mount: Deactivated successfully.
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.109 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b90f6c6c-a634-4d08-98d7-fde34f18e37c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.109 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c4c16c-f5b4-4654-8d86-798f4b8a46d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:23 compute-0 podman[377728]: 2026-02-28 10:45:23.15561323 +0000 UTC m=+0.064489287 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 28 10:45:23 compute-0 podman[377730]: 2026-02-28 10:45:23.183910361 +0000 UTC m=+0.089997289 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260223)
Feb 28 10:45:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 16 KiB/s wr, 89 op/s
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.298 243456 INFO nova.virt.libvirt.driver [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deleting instance files /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_del
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.299 243456 INFO nova.virt.libvirt.driver [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deletion of /var/lib/nova/instances/c45d561e-3581-4ca2-a8b7-b56cc6ce5b43_del complete
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.349 243456 INFO nova.compute.manager [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 0.80 seconds to destroy the instance on the hypervisor.
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG oslo.service.loopingcall [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.350 243456 DEBUG nova.network.neutron [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.640 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.641 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.641 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.642 243456 DEBUG oslo_concurrency.lockutils [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.643 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.643 243456 DEBUG nova.compute.manager [req-01178898-91e3-438c-a2ee-ea94675bc9f0 req-daa6bc34-e4b3-420d-8fab-269b05e4933f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-unplugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.925 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:45:23 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:23.926 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:45:23 compute-0 nova_compute[243452]: 2026-02-28 10:45:23.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.468 243456 DEBUG nova.network.neutron [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.486 243456 INFO nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Took 1.14 seconds to deallocate network for instance.
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.537 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.538 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.599 243456 DEBUG oslo_concurrency.processutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.700 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updated VIF entry in instance network info cache for port e109a5e0-a347-4744-84e4-96a126379d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.701 243456 DEBUG nova.network.neutron [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Updating instance_info_cache with network_info: [{"id": "e109a5e0-a347-4744-84e4-96a126379d1c", "address": "fa:16:3e:7a:44:88", "network": {"id": "b90f6c6c-a634-4d08-98d7-fde34f18e37c", "bridge": "br-int", "label": "tempest-network-smoke--745786898", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7a:4488", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape109a5e0-a3", "ovs_interfaceid": "e109a5e0-a347-4744-84e4-96a126379d1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:45:24 compute-0 ceph-mon[76304]: pgmap v2431: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 16 KiB/s wr, 89 op/s
Feb 28 10:45:24 compute-0 nova_compute[243452]: 2026-02-28 10:45:24.727 243456 DEBUG oslo_concurrency.lockutils [req-baf2e8a6-5480-486c-96e0-44e19de47be4 req-ad51a0c0-523f-423a-9b6d-cdf0a3c5b601 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:45:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766251624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.191 243456 DEBUG oslo_concurrency.processutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.198 243456 DEBUG nova.compute.provider_tree [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.229 243456 DEBUG nova.scheduler.client.report [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:45:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 207 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 101 op/s
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.259 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.306 243456 INFO nova.scheduler.client.report [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.378 243456 DEBUG oslo_concurrency.lockutils [None req-e2c12ddc-682b-4b54-af99-07c6d7f92d8a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1766251624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.757 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG oslo_concurrency.lockutils [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "c45d561e-3581-4ca2-a8b7-b56cc6ce5b43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.758 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] No waiting events found dispatching network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 WARNING nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received unexpected event network-vif-plugged-e109a5e0-a347-4744-84e4-96a126379d1c for instance with vm_state deleted and task_state None.
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Received event network-vif-deleted-e109a5e0-a347-4744-84e4-96a126379d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.759 243456 INFO nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Neutron deleted interface e109a5e0-a347-4744-84e4-96a126379d1c; detaching it from the instance and deleting it from the info cache
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.760 243456 DEBUG nova.network.neutron [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:45:25 compute-0 nova_compute[243452]: 2026-02-28 10:45:25.762 243456 DEBUG nova.compute.manager [req-502ff059-ba56-408a-ad6f-7a31f6c8119e req-25ff827a-eb46-4a55-8e08-70e4e7c9334a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Detach interface failed, port_id=e109a5e0-a347-4744-84e4-96a126379d1c, reason: Instance c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:45:26 compute-0 ceph-mon[76304]: pgmap v2432: 305 pgs: 305 active+clean; 207 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 5.1 KiB/s wr, 101 op/s
Feb 28 10:45:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 5.4 KiB/s wr, 103 op/s
Feb 28 10:45:27 compute-0 sshd-session[377800]: Received disconnect from 103.67.78.202 port 43092:11: Bye Bye [preauth]
Feb 28 10:45:27 compute-0 sshd-session[377800]: Disconnected from authenticating user root 103.67.78.202 port 43092 [preauth]
Feb 28 10:45:27 compute-0 nova_compute[243452]: 2026-02-28 10:45:27.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:28 compute-0 sshd-session[377802]: Received disconnect from 103.67.78.132 port 49416:11: Bye Bye [preauth]
Feb 28 10:45:28 compute-0 sshd-session[377802]: Disconnected from authenticating user root 103.67.78.132 port 49416 [preauth]
Feb 28 10:45:28 compute-0 ceph-mon[76304]: pgmap v2433: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 5.4 KiB/s wr, 103 op/s
Feb 28 10:45:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:29 compute-0 sshd-session[377804]: Received disconnect from 103.67.78.202 port 43102:11: Bye Bye [preauth]
Feb 28 10:45:29 compute-0 sshd-session[377804]: Disconnected from authenticating user root 103.67.78.202 port 43102 [preauth]
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:45:29
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:45:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.1 KiB/s wr, 111 op/s
Feb 28 10:45:30 compute-0 nova_compute[243452]: 2026-02-28 10:45:30.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:45:30 compute-0 sudo[377806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:45:30 compute-0 sudo[377806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:30 compute-0 sudo[377806]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:30 compute-0 sudo[377831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:45:30 compute-0 sudo[377831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:30 compute-0 ceph-mon[76304]: pgmap v2434: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.1 KiB/s wr, 111 op/s
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:45:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:45:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:30.928 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:45:31 compute-0 sudo[377831]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 KiB/s wr, 92 op/s
Feb 28 10:45:31 compute-0 sudo[377887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:45:31 compute-0 sudo[377887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:31 compute-0 sudo[377887]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:31 compute-0 nova_compute[243452]: 2026-02-28 10:45:31.289 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:31 compute-0 sudo[377912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:45:31 compute-0 sudo[377912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.639023345 +0000 UTC m=+0.058339903 container create bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:45:31 compute-0 nova_compute[243452]: 2026-02-28 10:45:31.662 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275516.6620302, 795552e4-ec78-4c39-a0a9-60c83ca0f0a2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:45:31 compute-0 nova_compute[243452]: 2026-02-28 10:45:31.663 243456 INFO nova.compute.manager [-] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] VM Stopped (Lifecycle Event)
Feb 28 10:45:31 compute-0 nova_compute[243452]: 2026-02-28 10:45:31.687 243456 DEBUG nova.compute.manager [None req-47f860cf-ee8d-45c0-9129-775999614954 - - - - - -] [instance: 795552e4-ec78-4c39-a0a9-60c83ca0f0a2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:45:31 compute-0 systemd[1]: Started libpod-conmon-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope.
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.613953195 +0000 UTC m=+0.033269823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.753701662 +0000 UTC m=+0.173018290 container init bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:45:31 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.763202261 +0000 UTC m=+0.182518829 container start bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.767898104 +0000 UTC m=+0.187214662 container attach bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:45:31 compute-0 angry_heyrovsky[377966]: 167 167
Feb 28 10:45:31 compute-0 systemd[1]: libpod-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope: Deactivated successfully.
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.772400232 +0000 UTC m=+0.191716800 container died bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aae6d4fce438787de20cc003fb641c084f5fe38f155cc22f8397b7d07841deb-merged.mount: Deactivated successfully.
Feb 28 10:45:31 compute-0 podman[377950]: 2026-02-28 10:45:31.816984614 +0000 UTC m=+0.236301172 container remove bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:45:31 compute-0 systemd[1]: libpod-conmon-bebdd6caf16a72ec173a91b8b408cac22fd6991d0c588b9432d98b453b90001f.scope: Deactivated successfully.
Feb 28 10:45:31 compute-0 podman[377990]: 2026-02-28 10:45:31.986950907 +0000 UTC m=+0.047559848 container create 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:45:32 compute-0 systemd[1]: Started libpod-conmon-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope.
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:31.966199759 +0000 UTC m=+0.026808790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:32.091727944 +0000 UTC m=+0.152336965 container init 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:32.106854562 +0000 UTC m=+0.167463523 container start 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:32.112039929 +0000 UTC m=+0.172648910 container attach 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:45:32 compute-0 serene_buck[378007]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:45:32 compute-0 serene_buck[378007]: --> All data devices are unavailable
Feb 28 10:45:32 compute-0 systemd[1]: libpod-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope: Deactivated successfully.
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:32.623382598 +0000 UTC m=+0.683991559 container died 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ec705bfbeb27e2edc927325fa39b9364190e9750d36517dc752325dbb63b567-merged.mount: Deactivated successfully.
Feb 28 10:45:32 compute-0 podman[377990]: 2026-02-28 10:45:32.663433872 +0000 UTC m=+0.724042823 container remove 4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:45:32 compute-0 systemd[1]: libpod-conmon-4e1a2da099e0651299065653c7b2c38dc948adc89490350a2b886f5f590c72fa.scope: Deactivated successfully.
Feb 28 10:45:32 compute-0 sudo[377912]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:32 compute-0 ceph-mon[76304]: pgmap v2435: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 KiB/s wr, 92 op/s
Feb 28 10:45:32 compute-0 sudo[378040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:45:32 compute-0 sudo[378040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:32 compute-0 sudo[378040]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:32 compute-0 sudo[378065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:45:32 compute-0 sudo[378065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:32 compute-0 nova_compute[243452]: 2026-02-28 10:45:32.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.136463407 +0000 UTC m=+0.062628295 container create 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:45:33 compute-0 systemd[1]: Started libpod-conmon-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope.
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.108857235 +0000 UTC m=+0.035022153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.220247419 +0000 UTC m=+0.146412307 container init 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.226571388 +0000 UTC m=+0.152736236 container start 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.231821917 +0000 UTC m=+0.157986855 container attach 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 10:45:33 compute-0 ecstatic_moser[378118]: 167 167
Feb 28 10:45:33 compute-0 systemd[1]: libpod-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope: Deactivated successfully.
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.233634558 +0000 UTC m=+0.159799436 container died 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:45:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Feb 28 10:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-857b4af8337db996ace11a171cfa8533a1d86cf930293f32e18ed5125698e308-merged.mount: Deactivated successfully.
Feb 28 10:45:33 compute-0 podman[378102]: 2026-02-28 10:45:33.276211814 +0000 UTC m=+0.202376702 container remove 3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:45:33 compute-0 systemd[1]: libpod-conmon-3e7737fac7fb1223e45a584f31a6d7b86ad977f8db93cfd2ab9dc810398b1b36.scope: Deactivated successfully.
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.44172741 +0000 UTC m=+0.054857614 container create 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:45:33 compute-0 systemd[1]: Started libpod-conmon-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope.
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.413521332 +0000 UTC m=+0.026651566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.552215459 +0000 UTC m=+0.165345723 container init 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.563507849 +0000 UTC m=+0.176638053 container start 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.567921674 +0000 UTC m=+0.181051878 container attach 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]: {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     "0": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "devices": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "/dev/loop3"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             ],
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_name": "ceph_lv0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_size": "21470642176",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "name": "ceph_lv0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "tags": {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_name": "ceph",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.crush_device_class": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.encrypted": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.objectstore": "bluestore",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_id": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.vdo": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.with_tpm": "0"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             },
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "vg_name": "ceph_vg0"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         }
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     ],
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     "1": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "devices": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "/dev/loop4"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             ],
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_name": "ceph_lv1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_size": "21470642176",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "name": "ceph_lv1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "tags": {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_name": "ceph",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.crush_device_class": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.encrypted": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.objectstore": "bluestore",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_id": "1",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.vdo": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.with_tpm": "0"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             },
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "vg_name": "ceph_vg1"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         }
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     ],
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     "2": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "devices": [
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "/dev/loop5"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             ],
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_name": "ceph_lv2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_size": "21470642176",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "name": "ceph_lv2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "tags": {
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.cluster_name": "ceph",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.crush_device_class": "",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.encrypted": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.objectstore": "bluestore",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osd_id": "2",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.vdo": "0",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:                 "ceph.with_tpm": "0"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             },
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "type": "block",
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:             "vg_name": "ceph_vg2"
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:         }
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]:     ]
Feb 28 10:45:33 compute-0 mystifying_wozniak[378156]: }
Feb 28 10:45:33 compute-0 systemd[1]: libpod-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope: Deactivated successfully.
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.893698979 +0000 UTC m=+0.506829163 container died 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d32cf1fb02ae65e6e2f10b0a7e216689f2da4b80b7a4418982307c80f7059b-merged.mount: Deactivated successfully.
Feb 28 10:45:33 compute-0 podman[378140]: 2026-02-28 10:45:33.955631552 +0000 UTC m=+0.568761756 container remove 895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:45:33 compute-0 systemd[1]: libpod-conmon-895ff2fcf2486d7795201c03d81c224f7e3ee0b508dcf01950471eea59e94dff.scope: Deactivated successfully.
Feb 28 10:45:34 compute-0 sudo[378065]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:34 compute-0 sudo[378178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:45:34 compute-0 sudo[378178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:34 compute-0 sudo[378178]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:34 compute-0 sudo[378203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:45:34 compute-0 sudo[378203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.464996645 +0000 UTC m=+0.063880140 container create ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:45:34 compute-0 systemd[1]: Started libpod-conmon-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope.
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.440320276 +0000 UTC m=+0.039203831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.575464613 +0000 UTC m=+0.174348088 container init ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.585518107 +0000 UTC m=+0.184401602 container start ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.589658714 +0000 UTC m=+0.188542179 container attach ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:45:34 compute-0 frosty_cerf[378256]: 167 167
Feb 28 10:45:34 compute-0 systemd[1]: libpod-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope: Deactivated successfully.
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.592011291 +0000 UTC m=+0.190894796 container died ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-304f5ced1b52adef178e09dd00eed4cb756ba069a4b99ca78ee418b1fb7fc76e-merged.mount: Deactivated successfully.
Feb 28 10:45:34 compute-0 podman[378240]: 2026-02-28 10:45:34.644271711 +0000 UTC m=+0.243155216 container remove ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cerf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:45:34 compute-0 systemd[1]: libpod-conmon-ca28731013c2c5b2044394496f82177add496441ba6c9be1728ec1d5e4b5d4ce.scope: Deactivated successfully.
Feb 28 10:45:34 compute-0 ceph-mon[76304]: pgmap v2436: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 46 op/s
Feb 28 10:45:34 compute-0 podman[378279]: 2026-02-28 10:45:34.858531198 +0000 UTC m=+0.070094636 container create d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:45:34 compute-0 systemd[1]: Started libpod-conmon-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope.
Feb 28 10:45:34 compute-0 podman[378279]: 2026-02-28 10:45:34.827852189 +0000 UTC m=+0.039415717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:45:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:45:34 compute-0 podman[378279]: 2026-02-28 10:45:34.964849458 +0000 UTC m=+0.176412946 container init d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:45:34 compute-0 podman[378279]: 2026-02-28 10:45:34.978101484 +0000 UTC m=+0.189664932 container start d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:45:34 compute-0 podman[378279]: 2026-02-28 10:45:34.981672265 +0000 UTC m=+0.193235683 container attach d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:45:35 compute-0 nova_compute[243452]: 2026-02-28 10:45:35.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:45:35 compute-0 lvm[378374]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:45:35 compute-0 lvm[378374]: VG ceph_vg0 finished
Feb 28 10:45:35 compute-0 lvm[378376]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:45:35 compute-0 lvm[378376]: VG ceph_vg1 finished
Feb 28 10:45:35 compute-0 lvm[378377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:45:35 compute-0 lvm[378377]: VG ceph_vg2 finished
Feb 28 10:45:35 compute-0 heuristic_proskuriakova[378296]: {}
Feb 28 10:45:35 compute-0 systemd[1]: libpod-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Deactivated successfully.
Feb 28 10:45:35 compute-0 systemd[1]: libpod-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Consumed 1.505s CPU time.
Feb 28 10:45:35 compute-0 podman[378279]: 2026-02-28 10:45:35.930451241 +0000 UTC m=+1.142014689 container died d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d7289e2faab44f3020ada844ceb7c0b925b113984eda2423239b71ce5b1c63-merged.mount: Deactivated successfully.
Feb 28 10:45:35 compute-0 podman[378279]: 2026-02-28 10:45:35.987229338 +0000 UTC m=+1.198792756 container remove d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:45:36 compute-0 systemd[1]: libpod-conmon-d024ae4a62786f6994e14e88afd3d65dd84e68f59303d0b898b125426e2cf2a5.scope: Deactivated successfully.
Feb 28 10:45:36 compute-0 sudo[378203]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:45:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:45:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:36 compute-0 sudo[378393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:45:36 compute-0 sudo[378393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:45:36 compute-0 sudo[378393]: pam_unix(sudo:session): session closed for user root
Feb 28 10:45:36 compute-0 ceph-mon[76304]: pgmap v2437: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 28 10:45:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:36 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:45:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 14 op/s
Feb 28 10:45:37 compute-0 nova_compute[243452]: 2026-02-28 10:45:37.793 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275522.791517, c45d561e-3581-4ca2-a8b7-b56cc6ce5b43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:45:37 compute-0 nova_compute[243452]: 2026-02-28 10:45:37.794 243456 INFO nova.compute.manager [-] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] VM Stopped (Lifecycle Event)
Feb 28 10:45:37 compute-0 nova_compute[243452]: 2026-02-28 10:45:37.811 243456 DEBUG nova.compute.manager [None req-aed2dd59-caca-4026-9a44-9e511843907c - - - - - -] [instance: c45d561e-3581-4ca2-a8b7-b56cc6ce5b43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:45:37 compute-0 nova_compute[243452]: 2026-02-28 10:45:37.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:38 compute-0 ceph-mon[76304]: pgmap v2438: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 341 B/s wr, 14 op/s
Feb 28 10:45:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 10:45:40 compute-0 nova_compute[243452]: 2026-02-28 10:45:40.141 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:40 compute-0 ceph-mon[76304]: pgmap v2439: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 9 op/s
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4508337979826865e-05 of space, bias 1.0, pg target 0.00435250139394806 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939187920449838 of space, bias 1.0, pg target 0.7481756376134951 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:45:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:45:42 compute-0 ceph-mon[76304]: pgmap v2440: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:43 compute-0 nova_compute[243452]: 2026-02-28 10:45:43.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:44 compute-0 ceph-mon[76304]: pgmap v2441: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:45 compute-0 nova_compute[243452]: 2026-02-28 10:45:45.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:45:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:45:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:45:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:45:45 compute-0 nova_compute[243452]: 2026-02-28 10:45:45.821 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:45:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2639511603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:45:46 compute-0 ceph-mon[76304]: pgmap v2442: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:47 compute-0 nova_compute[243452]: 2026-02-28 10:45:47.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:47 compute-0 nova_compute[243452]: 2026-02-28 10:45:47.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:45:48 compute-0 nova_compute[243452]: 2026-02-28 10:45:48.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:48 compute-0 nova_compute[243452]: 2026-02-28 10:45:48.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:48 compute-0 ceph-mon[76304]: pgmap v2443: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.681 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:48:42 10.100.0.2 2001:db8::f816:3eff:feae:4842'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feae:4842/64', 'neutron:device_id': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=74ab365e-66c5-40e6-8916-4199b68ab2e3) old=Port_Binding(mac=['fa:16:3e:ae:48:42 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:45:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.682 156681 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 74ab365e-66c5-40e6-8916-4199b68ab2e3 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 updated
Feb 28 10:45:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.683 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daf6d125-3e9a-40be-b7d7-68719005c3b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:45:49 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:49.687 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[af7cfc31-99f6-4fcc-89a1-c109036582de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:45:50 compute-0 nova_compute[243452]: 2026-02-28 10:45:50.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:50 compute-0 nova_compute[243452]: 2026-02-28 10:45:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:50 compute-0 ceph-mon[76304]: pgmap v2444: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:52 compute-0 ceph-mon[76304]: pgmap v2445: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:52 compute-0 nova_compute[243452]: 2026-02-28 10:45:52.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:52 compute-0 nova_compute[243452]: 2026-02-28 10:45:52.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.332 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:45:53 compute-0 nova_compute[243452]: 2026-02-28 10:45:53.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:54 compute-0 podman[378419]: 2026-02-28 10:45:54.145676242 +0000 UTC m=+0.080139740 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:45:54 compute-0 podman[378418]: 2026-02-28 10:45:54.184016338 +0000 UTC m=+0.117367195 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:45:54 compute-0 ceph-mon[76304]: pgmap v2446: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.791 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.791 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.815 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.895 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.896 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.907 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:45:54 compute-0 nova_compute[243452]: 2026-02-28 10:45:54.908 243456 INFO nova.compute.claims [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.008 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.149 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:45:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064019870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.551 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.559 243456 DEBUG nova.compute.provider_tree [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.577 243456 DEBUG nova.scheduler.client.report [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.611 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.612 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.657 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.658 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.682 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.700 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.777 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.778 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.779 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating image(s)
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.808 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.839 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.867 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.873 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.941 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.942 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.943 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.943 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.971 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:45:55 compute-0 nova_compute[243452]: 2026-02-28 10:45:55.976 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.234 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.271 243456 DEBUG nova.policy [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.315 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:45:56 compute-0 ceph-mon[76304]: pgmap v2447: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:56 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4064019870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.357 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.420 243456 DEBUG nova.objects.instance [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Ensure instance console log exists: /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.433 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.434 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:56 compute-0 nova_compute[243452]: 2026-02-28 10:45:56.434 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.382 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Successfully created port: 194124fb-5288-4359-ab42-cd28d0ec06bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:45:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:45:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777666500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.886 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.887 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:45:57.887 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:57 compute-0 nova_compute[243452]: 2026-02-28 10:45:57.904 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.104 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.106 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.987410919740796GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.107 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.107 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.187 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b6b6d2e0-12e8-4804-a192-da4e2444f20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.188 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.189 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.230 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:45:58 compute-0 ceph-mon[76304]: pgmap v2448: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:45:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2777666500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.510 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Successfully updated port: 194124fb-5288-4359-ab42-cd28d0ec06bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.528 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.528 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.529 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.609 243456 DEBUG nova.compute.manager [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.610 243456 DEBUG nova.compute.manager [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.610 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.672 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:45:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:45:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469095079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.784 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.791 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.807 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.835 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:45:58 compute-0 nova_compute[243452]: 2026-02-28 10:45:58.836 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:45:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:45:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 895 KiB/s wr, 13 op/s
Feb 28 10:45:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3469095079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.150 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.224 243456 DEBUG nova.network.neutron [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:00 compute-0 ceph-mon[76304]: pgmap v2449: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 895 KiB/s wr, 13 op/s
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance network_info: |[{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.767 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.768 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.771 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start _get_guest_xml network_info=[{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.778 243456 WARNING nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.783 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.784 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.795 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.796 243456 DEBUG nova.virt.libvirt.host [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.796 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.797 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.797 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.798 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.799 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.800 243456 DEBUG nova.virt.hardware [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:46:00 compute-0 nova_compute[243452]: 2026-02-28 10:46:00.806 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:46:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668947447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1668947447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.384 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.410 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.414 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:46:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094674942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.940 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.943 243456 DEBUG nova.virt.libvirt.vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:45:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.944 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.945 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.946 243456 DEBUG nova.objects.instance [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.968 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <uuid>b6b6d2e0-12e8-4804-a192-da4e2444f20e</uuid>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <name>instance-00000096</name>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-760627914</nova:name>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:46:00</nova:creationTime>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <nova:port uuid="194124fb-5288-4359-ab42-cd28d0ec06bc">
Feb 28 10:46:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fee5:97c" ipVersion="6"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <system>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="serial">b6b6d2e0-12e8-4804-a192-da4e2444f20e</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="uuid">b6b6d2e0-12e8-4804-a192-da4e2444f20e</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </system>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <os>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </os>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <features>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </features>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk">
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config">
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </source>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:46:01 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:e5:09:7c"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <target dev="tap194124fb-52"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/console.log" append="off"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <video>
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </video>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:46:01 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:46:01 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:46:01 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:46:01 compute-0 nova_compute[243452]: </domain>
Feb 28 10:46:01 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Preparing to wait for external event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.969 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.970 243456 DEBUG nova.virt.libvirt.vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:45:55Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.970 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.971 243456 DEBUG nova.network.os_vif_util [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.971 243456 DEBUG os_vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.972 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.972 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.973 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap194124fb-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.976 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap194124fb-52, col_values=(('external_ids', {'iface-id': '194124fb-5288-4359-ab42-cd28d0ec06bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:09:7c', 'vm-uuid': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.978 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:01 compute-0 NetworkManager[49805]: <info>  [1772275561.9797] manager: (tap194124fb-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:01 compute-0 nova_compute[243452]: 2026-02-28 10:46:01.988 243456 INFO os_vif [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52')
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.043 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.044 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.044 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:e5:09:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.045 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Using config drive
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.077 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:02 compute-0 ceph-mon[76304]: pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2094674942' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.697 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Creating config drive at /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.700 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9vat019v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.792 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.793 243456 DEBUG nova.network.neutron [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.808 243456 DEBUG oslo_concurrency.lockutils [req-b0d48944-cd29-45b0-a53e-bd1039df0ce0 req-71385869-3580-476e-825c-89073e07a15d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.839 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9vat019v" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.869 243456 DEBUG nova.storage.rbd_utils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:02 compute-0 nova_compute[243452]: 2026-02-28 10:46:02.874 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.014 243456 DEBUG oslo_concurrency.processutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config b6b6d2e0-12e8-4804-a192-da4e2444f20e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.015 243456 INFO nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deleting local config drive /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e/disk.config because it was imported into RBD.
Feb 28 10:46:03 compute-0 kernel: tap194124fb-52: entered promiscuous mode
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.0623] manager: (tap194124fb-52): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Feb 28 10:46:03 compute-0 ovn_controller[146846]: 2026-02-28T10:46:03Z|01572|binding|INFO|Claiming lport 194124fb-5288-4359-ab42-cd28d0ec06bc for this chassis.
Feb 28 10:46:03 compute-0 ovn_controller[146846]: 2026-02-28T10:46:03Z|01573|binding|INFO|194124fb-5288-4359-ab42-cd28d0ec06bc: Claiming fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.072 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.081 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], port_security=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fee5:97c/64', 'neutron:device_id': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=194124fb-5288-4359-ab42-cd28d0ec06bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.083 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 194124fb-5288-4359-ab42-cd28d0ec06bc in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 bound to our chassis
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.084 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 10:46:03 compute-0 systemd-udevd[378831]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:46:03 compute-0 ovn_controller[146846]: 2026-02-28T10:46:03Z|01574|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc ovn-installed in OVS
Feb 28 10:46:03 compute-0 ovn_controller[146846]: 2026-02-28T10:46:03Z|01575|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc up in Southbound
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 systemd-machined[209480]: New machine qemu-183-instance-00000096.
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.095 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd03e5e-be6b-4dd1-92b7-e68fbff90e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.097 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdaf6d125-31 in ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.101 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdaf6d125-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.101 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[94f3b940-13ff-4156-a357-d0e172836a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.103 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3beecf-40cb-421e-87d5-97d0dd8c092f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.1067] device (tap194124fb-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.1076] device (tap194124fb-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:46:03 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000096.
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.114 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[2259d5d3-c46a-46cf-b5a4-36f61670ac65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.126 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[1defb123-6c59-4ce8-acd2-e15d625d1f50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.158 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[cc493af5-9c79-4de6-870e-9111840d3a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.163 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[07401b11-6869-430d-abd1-90c233024350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.1648] manager: (tapdaf6d125-30): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.198 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d4595f4c-07a9-4f43-955f-6206a9328d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.202 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[b71cf11b-3593-4fc2-8e56-4d56cb5c8776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.2224] device (tapdaf6d125-30): carrier: link connected
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.226 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7acb8f-c5ed-43e5-9a5e-d15ceac29711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.241 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7e51bcd3-f179-4d5e-bd8f-eaa80c7b70cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378865, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.256 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c924758b-8177-4216-b3eb-88038989befc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:4842'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707050, 'tstamp': 707050}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378866, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.271 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bcc6d5-a08b-4cca-b5ee-f5e0ab4fd1d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378867, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.296 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[aacbfc61-89d3-484f-8785-8e3ac2d7bb72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.334 243456 DEBUG nova.compute.manager [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.335 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.335 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.336 243456 DEBUG oslo_concurrency.lockutils [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.337 243456 DEBUG nova.compute.manager [req-ad3ef9d7-f2c8-4f52-ad19-71b7aad4c6ed req-088a2c18-2c4e-4863-a289-e830c92a6ff6 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Processing event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.366 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe91663-0d9b-4316-aea4-380b9703261c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.368 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:03 compute-0 NetworkManager[49805]: <info>  [1772275563.3713] manager: (tapdaf6d125-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.371 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 kernel: tapdaf6d125-30: entered promiscuous mode
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.374 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:03 compute-0 ovn_controller[146846]: 2026-02-28T10:46:03Z|01576|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.381 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.382 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[271f063c-ac27-49aa-8321-020882f4997b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.383 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/daf6d125-3e9a-40be-b7d7-68719005c3b1.pid.haproxy
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:46:03 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:03.384 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'env', 'PROCESS_TAG=haproxy-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/daf6d125-3e9a-40be-b7d7-68719005c3b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.649 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.6483757, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.651 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Started (Lifecycle Event)
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.654 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.657 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.660 243456 INFO nova.virt.libvirt.driver [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance spawned successfully.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.661 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.672 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.677 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.681 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.681 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.682 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.683 243456 DEBUG nova.virt.libvirt.driver [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.714 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.715 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.649735, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.715 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Paused (Lifecycle Event)
Feb 28 10:46:03 compute-0 podman[378937]: 2026-02-28 10:46:03.738038046 +0000 UTC m=+0.046157588 container create 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:46:03 compute-0 systemd[1]: Started libpod-conmon-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.768 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.772 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275563.656227, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.772 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Resumed (Lifecycle Event)
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.775 243456 INFO nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 8.00 seconds to spawn the instance on the hypervisor.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.775 243456 DEBUG nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fcf45e9da452fe6f31772cee126479285411300a0844b62ff3dcab6d823ade5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.812 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:03 compute-0 podman[378937]: 2026-02-28 10:46:03.714688155 +0000 UTC m=+0.022807727 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.815 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:46:03 compute-0 podman[378937]: 2026-02-28 10:46:03.81693477 +0000 UTC m=+0.125054322 container init 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 28 10:46:03 compute-0 podman[378937]: 2026-02-28 10:46:03.826277635 +0000 UTC m=+0.134397187 container start 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 28 10:46:03 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : New worker (378958) forked
Feb 28 10:46:03 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : Loading success.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.856 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.873 243456 INFO nova.compute.manager [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 9.01 seconds to build instance.
Feb 28 10:46:03 compute-0 nova_compute[243452]: 2026-02-28 10:46:03.893 243456 DEBUG oslo_concurrency.lockutils [None req-59028c6e-05ee-49c1-a7f1-8e16107f8003 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:04 compute-0 ceph-mon[76304]: pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.442 243456 DEBUG nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.442 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG oslo_concurrency.lockutils [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.445 243456 DEBUG nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:46:05 compute-0 nova_compute[243452]: 2026-02-28 10:46:05.446 243456 WARNING nova.compute.manager [req-213a34a2-bd8c-4096-b59b-efa886fd5d68 req-bef64cbd-3da4-4775-8a48-e1c63097ba40 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received unexpected event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with vm_state active and task_state None.
Feb 28 10:46:06 compute-0 ceph-mon[76304]: pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 845 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Feb 28 10:46:06 compute-0 nova_compute[243452]: 2026-02-28 10:46:06.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 28 10:46:08 compute-0 ceph-mon[76304]: pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 28 10:46:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:46:09 compute-0 NetworkManager[49805]: <info>  [1772275569.6759] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Feb 28 10:46:09 compute-0 NetworkManager[49805]: <info>  [1772275569.6773] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Feb 28 10:46:09 compute-0 nova_compute[243452]: 2026-02-28 10:46:09.675 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:09 compute-0 ovn_controller[146846]: 2026-02-28T10:46:09Z|01577|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 10:46:09 compute-0 nova_compute[243452]: 2026-02-28 10:46:09.692 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:09 compute-0 ovn_controller[146846]: 2026-02-28T10:46:09Z|01578|binding|INFO|Releasing lport 74ab365e-66c5-40e6-8916-4199b68ab2e3 from this chassis (sb_readonly=0)
Feb 28 10:46:09 compute-0 nova_compute[243452]: 2026-02-28 10:46:09.698 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:10 compute-0 ceph-mon[76304]: pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG nova.compute.manager [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG nova.compute.manager [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.804 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.805 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:10 compute-0 nova_compute[243452]: 2026-02-28 10:46:10.805 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:46:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 932 KiB/s wr, 87 op/s
Feb 28 10:46:11 compute-0 nova_compute[243452]: 2026-02-28 10:46:11.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:12 compute-0 ceph-mon[76304]: pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 932 KiB/s wr, 87 op/s
Feb 28 10:46:12 compute-0 nova_compute[243452]: 2026-02-28 10:46:12.491 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:46:12 compute-0 nova_compute[243452]: 2026-02-28 10:46:12.492 243456 DEBUG nova.network.neutron [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:12 compute-0 nova_compute[243452]: 2026-02-28 10:46:12.516 243456 DEBUG oslo_concurrency.lockutils [req-999acf5c-1772-4419-99ce-efc267d94232 req-2fde3b12-5623-49d3-ab6d-dec174d3b62a 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:46:13 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 10:46:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:14 compute-0 ceph-mon[76304]: pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:46:15 compute-0 ovn_controller[146846]: 2026-02-28T10:46:15Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:09:7c 10.100.0.4
Feb 28 10:46:15 compute-0 ovn_controller[146846]: 2026-02-28T10:46:15Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:09:7c 10.100.0.4
Feb 28 10:46:15 compute-0 nova_compute[243452]: 2026-02-28 10:46:15.157 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Feb 28 10:46:16 compute-0 ceph-mon[76304]: pgmap v2457: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Feb 28 10:46:16 compute-0 nova_compute[243452]: 2026-02-28 10:46:16.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 10:46:18 compute-0 ceph-mon[76304]: pgmap v2458: 305 pgs: 305 active+clean; 222 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 74 op/s
Feb 28 10:46:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:46:20 compute-0 nova_compute[243452]: 2026-02-28 10:46:20.161 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:20 compute-0 ceph-mon[76304]: pgmap v2459: 305 pgs: 305 active+clean; 230 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Feb 28 10:46:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 28 10:46:21 compute-0 nova_compute[243452]: 2026-02-28 10:46:21.989 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:22 compute-0 ceph-mon[76304]: pgmap v2460: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 530 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 28 10:46:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:46:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:24 compute-0 ceph-mon[76304]: pgmap v2461: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:46:25 compute-0 podman[378970]: 2026-02-28 10:46:25.152923943 +0000 UTC m=+0.081316123 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 28 10:46:25 compute-0 nova_compute[243452]: 2026-02-28 10:46:25.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:25 compute-0 podman[378969]: 2026-02-28 10:46:25.178015774 +0000 UTC m=+0.109396129 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:46:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:46:26 compute-0 ceph-mon[76304]: pgmap v2462: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 28 10:46:26 compute-0 nova_compute[243452]: 2026-02-28 10:46:26.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 906 KiB/s wr, 43 op/s
Feb 28 10:46:28 compute-0 ceph-mon[76304]: pgmap v2463: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 906 KiB/s wr, 43 op/s
Feb 28 10:46:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:46:29
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', 'default.rgw.control', '.mgr', '.rgw.root']
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:46:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 807 KiB/s wr, 32 op/s
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.670 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.671 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.693 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.772 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.773 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.784 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.785 243456 INFO nova.compute.claims [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:46:29 compute-0 nova_compute[243452]: 2026-02-28 10:46:29.889 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:46:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:46:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319888767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.451 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.460 243456 DEBUG nova.compute.provider_tree [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.484 243456 DEBUG nova.scheduler.client.report [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:46:30 compute-0 ceph-mon[76304]: pgmap v2464: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 807 KiB/s wr, 32 op/s
Feb 28 10:46:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/319888767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.515 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.516 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.566 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.567 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.588 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.607 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.699 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.702 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.703 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating image(s)
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.736 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.765 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.791 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.795 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.874 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.875 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.899 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:46:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:46:30 compute-0 nova_compute[243452]: 2026-02-28 10:46:30.903 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a4475d45-f11e-4b8c-a118-5b4a347c2506_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.146 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 a4475d45-f11e-4b8c-a118-5b4a347c2506_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.230 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] resizing rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.286 243456 DEBUG nova.policy [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be6308f3690c465980a359a9628b8847', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:46:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 51 KiB/s wr, 12 op/s
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.337 243456 DEBUG nova.objects.instance [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'migration_context' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.350 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Ensure instance console log exists: /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.351 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:31.966 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:31 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:31.969 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:46:31 compute-0 nova_compute[243452]: 2026-02-28 10:46:31.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:32 compute-0 nova_compute[243452]: 2026-02-28 10:46:32.051 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Successfully created port: 7b3c6111-c4ec-40c0-94eb-725c085f9600 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:46:32 compute-0 ceph-mon[76304]: pgmap v2465: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 51 KiB/s wr, 12 op/s
Feb 28 10:46:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.520 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Successfully updated port: 7b3c6111-c4ec-40c0-94eb-725c085f9600 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.539 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.540 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.540 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.606 243456 DEBUG nova.compute.manager [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.607 243456 DEBUG nova.compute.manager [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:46:33 compute-0 nova_compute[243452]: 2026-02-28 10:46:33.607 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:46:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:34 compute-0 nova_compute[243452]: 2026-02-28 10:46:34.215 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:46:34 compute-0 ceph-mon[76304]: pgmap v2466: 305 pgs: 305 active+clean; 255 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.598 243456 DEBUG nova.network.neutron [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.615 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.615 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance network_info: |[{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.616 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.616 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.619 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start _get_guest_xml network_info=[{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.623 243456 WARNING nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.627 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.628 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.630 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.libvirt.host [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.631 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.632 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.633 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.634 243456 DEBUG nova.virt.hardware [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:46:35 compute-0 nova_compute[243452]: 2026-02-28 10:46:35.637 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625851700' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.186 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.219 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.228 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:36 compute-0 sudo[379225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:46:36 compute-0 sudo[379225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:36 compute-0 sudo[379225]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:36 compute-0 sudo[379270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:46:36 compute-0 sudo[379270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: pgmap v2467: 305 pgs: 305 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Feb 28 10:46:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2625851700' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363406954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:36 compute-0 sudo[379270]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.751 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.753 243456 DEBUG nova.virt.libvirt.vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:46:30Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.753 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.754 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.756 243456 DEBUG nova.objects.instance [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.778 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <uuid>a4475d45-f11e-4b8c-a118-5b4a347c2506</uuid>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <name>instance-00000097</name>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:name>tempest-TestGettingAddress-server-1318971108</nova:name>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:46:35</nova:creationTime>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:user uuid="be6308f3690c465980a359a9628b8847">tempest-TestGettingAddress-573048003-project-member</nova:user>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:project uuid="3eafc20864ad4643a4b4fe6f12c46c36">tempest-TestGettingAddress-573048003</nova:project>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <nova:port uuid="7b3c6111-c4ec-40c0-94eb-725c085f9600">
Feb 28 10:46:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec4:c96d" ipVersion="6"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="serial">a4475d45-f11e-4b8c-a118-5b4a347c2506</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="uuid">a4475d45-f11e-4b8c-a118-5b4a347c2506</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a4475d45-f11e-4b8c-a118-5b4a347c2506_disk">
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config">
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:46:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:c4:c9:6d"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <target dev="tap7b3c6111-c4"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/console.log" append="off"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:46:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:46:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:46:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:46:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:46:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.779 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Preparing to wait for external event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.779 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.780 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.780 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.782 243456 DEBUG nova.virt.libvirt.vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:46:30Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.782 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.783 243456 DEBUG nova.network.os_vif_util [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.784 243456 DEBUG os_vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.785 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.786 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.794 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b3c6111-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.795 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b3c6111-c4, col_values=(('external_ids', {'iface-id': '7b3c6111-c4ec-40c0-94eb-725c085f9600', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:c9:6d', 'vm-uuid': 'a4475d45-f11e-4b8c-a118-5b4a347c2506'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:36 compute-0 NetworkManager[49805]: <info>  [1772275596.7991] manager: (tap7b3c6111-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.800 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.805 243456 INFO os_vif [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4')
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:46:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:46:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.860 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.862 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.862 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] No VIF found with MAC fa:16:3e:c4:c9:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.863 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Using config drive
Feb 28 10:46:36 compute-0 sudo[379349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:46:36 compute-0 sudo[379349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:36 compute-0 sudo[379349]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:36 compute-0 nova_compute[243452]: 2026-02-28 10:46:36.900 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:36 compute-0 sudo[379388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:46:36 compute-0 sudo[379388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.271440048 +0000 UTC m=+0.059645620 container create fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 10:46:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:37 compute-0 systemd[1]: Started libpod-conmon-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope.
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.240293446 +0000 UTC m=+0.028499078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.374923199 +0000 UTC m=+0.163128761 container init fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.381991609 +0000 UTC m=+0.170197171 container start fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:46:37 compute-0 confident_cerf[379446]: 167 167
Feb 28 10:46:37 compute-0 systemd[1]: libpod-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope: Deactivated successfully.
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.387163575 +0000 UTC m=+0.175369147 container attach fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.387661809 +0000 UTC m=+0.175867371 container died fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-01ea78c662de353dca9f27653a330019e30177deeab7d134afa7d0a411802adf-merged.mount: Deactivated successfully.
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.423 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Creating config drive at /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.430 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpk4xonqgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:37 compute-0 podman[379430]: 2026-02-28 10:46:37.461059608 +0000 UTC m=+0.249265160 container remove fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:46:37 compute-0 systemd[1]: libpod-conmon-fcf72b970947a7754d06ba123ed66d2a3f9d23d80643e1fe4fa9bc888fa173a9.scope: Deactivated successfully.
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2363406954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:46:37 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.568 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpk4xonqgk" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.600 243456 DEBUG nova.storage.rbd_utils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] rbd image a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.604 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:37 compute-0 podman[379474]: 2026-02-28 10:46:37.640873429 +0000 UTC m=+0.074151320 container create 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:46:37 compute-0 systemd[1]: Started libpod-conmon-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope.
Feb 28 10:46:37 compute-0 podman[379474]: 2026-02-28 10:46:37.608874913 +0000 UTC m=+0.042152864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:37 compute-0 podman[379474]: 2026-02-28 10:46:37.736840527 +0000 UTC m=+0.170118398 container init 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:37 compute-0 podman[379474]: 2026-02-28 10:46:37.74471714 +0000 UTC m=+0.177994991 container start 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:46:37 compute-0 podman[379474]: 2026-02-28 10:46:37.749262698 +0000 UTC m=+0.182540629 container attach 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.780 243456 DEBUG oslo_concurrency.processutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config a4475d45-f11e-4b8c-a118-5b4a347c2506_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.781 243456 INFO nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deleting local config drive /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506/disk.config because it was imported into RBD.
Feb 28 10:46:37 compute-0 kernel: tap7b3c6111-c4: entered promiscuous mode
Feb 28 10:46:37 compute-0 NetworkManager[49805]: <info>  [1772275597.8313] manager: (tap7b3c6111-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.831 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:37 compute-0 ovn_controller[146846]: 2026-02-28T10:46:37Z|01579|binding|INFO|Claiming lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 for this chassis.
Feb 28 10:46:37 compute-0 ovn_controller[146846]: 2026-02-28T10:46:37Z|01580|binding|INFO|7b3c6111-c4ec-40c0-94eb-725c085f9600: Claiming fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d
Feb 28 10:46:37 compute-0 ovn_controller[146846]: 2026-02-28T10:46:37Z|01581|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 ovn-installed in OVS
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:37 compute-0 nova_compute[243452]: 2026-02-28 10:46:37.844 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:37 compute-0 systemd-machined[209480]: New machine qemu-184-instance-00000097.
Feb 28 10:46:37 compute-0 systemd-udevd[379546]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:46:37 compute-0 NetworkManager[49805]: <info>  [1772275597.8840] device (tap7b3c6111-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:46:37 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Feb 28 10:46:37 compute-0 NetworkManager[49805]: <info>  [1772275597.8851] device (tap7b3c6111-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:46:37 compute-0 ovn_controller[146846]: 2026-02-28T10:46:37Z|01582|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 up in Southbound
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.916 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], port_security=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fec4:c96d/64', 'neutron:device_id': 'a4475d45-f11e-4b8c-a118-5b4a347c2506', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7b3c6111-c4ec-40c0-94eb-725c085f9600) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.917 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3c6111-c4ec-40c0-94eb-725c085f9600 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 bound to our chassis
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.918 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.931 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea84289-8e5f-4e11-b157-c01f4ef5bd68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7c295f13-5cb3-4be9-98fd-8f3b6e694292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.968 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[af288cf9-4f88-4ecb-8876-a2ba81498d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:37 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:37.994 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[17452bed-5ca2-4c2b-ad13-ac3e924da77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.007 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7667c2-8c91-4e17-b1b3-76cf66007c38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379564, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.020 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[79cac967-a252-4d52-b82b-6e1b3a33412f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707061, 'tstamp': 707061}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379566, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707064, 'tstamp': 707064}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379566, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.022 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.025 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:38.026 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:46:38 compute-0 funny_margulis[379510]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:46:38 compute-0 funny_margulis[379510]: --> All data devices are unavailable
Feb 28 10:46:38 compute-0 systemd[1]: libpod-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope: Deactivated successfully.
Feb 28 10:46:38 compute-0 podman[379614]: 2026-02-28 10:46:38.279267426 +0000 UTC m=+0.029532597 container died 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.297 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.298 243456 DEBUG nova.network.neutron [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9552d04dac78a63bd34a2c159527c874de55d5c73274fb51f30867d253b9d4b-merged.mount: Deactivated successfully.
Feb 28 10:46:38 compute-0 podman[379614]: 2026-02-28 10:46:38.332777851 +0000 UTC m=+0.083043022 container remove 3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:38 compute-0 systemd[1]: libpod-conmon-3a4c7b43980aef68d669828aa754d0da7c7062a01bbe7b6d4044e07ca4c2b17d.scope: Deactivated successfully.
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.358 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.356372, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.358 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Started (Lifecycle Event)
Feb 28 10:46:38 compute-0 sudo[379388]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.398 243456 DEBUG oslo_concurrency.lockutils [req-0ad981f9-6fff-4090-85e4-8311dad9d236 req-5dedd6e3-5e27-4476-804f-b1849210ed9d 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.430 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.435 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.3601243, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.436 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Paused (Lifecycle Event)
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG nova.compute.manager [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.442 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.443 243456 DEBUG oslo_concurrency.lockutils [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.443 243456 DEBUG nova.compute.manager [req-4052938e-6343-40cf-92b3-849a569e5b55 req-57dbe0ad-27b4-44a9-b5f8-0e6b93897ba4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Processing event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.444 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.449 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.454 243456 INFO nova.virt.libvirt.driver [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance spawned successfully.
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.454 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.458 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.462 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275598.4476247, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.462 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Resumed (Lifecycle Event)
Feb 28 10:46:38 compute-0 sudo[379635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:46:38 compute-0 sudo[379635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:38 compute-0 sudo[379635]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.475 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.477 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.477 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.478 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.478 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.479 243456 DEBUG nova.virt.libvirt.driver [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.488 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.492 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:46:38 compute-0 sudo[379660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:46:38 compute-0 sudo[379660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.531 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.753 243456 INFO nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 8.05 seconds to spawn the instance on the hypervisor.
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.754 243456 DEBUG nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:46:38 compute-0 ceph-mon[76304]: pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.82754063 +0000 UTC m=+0.038702727 container create 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.851 243456 INFO nova.compute.manager [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 9.12 seconds to build instance.
Feb 28 10:46:38 compute-0 nova_compute[243452]: 2026-02-28 10:46:38.877 243456 DEBUG oslo_concurrency.lockutils [None req-733b3220-0336-4dbc-b78d-d59f3b359127 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:38 compute-0 systemd[1]: Started libpod-conmon-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope.
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.809802968 +0000 UTC m=+0.020965055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.930713002 +0000 UTC m=+0.141875159 container init 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.945877021 +0000 UTC m=+0.157039128 container start 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.95077664 +0000 UTC m=+0.161938817 container attach 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:46:38 compute-0 magical_almeida[379712]: 167 167
Feb 28 10:46:38 compute-0 systemd[1]: libpod-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope: Deactivated successfully.
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.953716503 +0000 UTC m=+0.164878620 container died 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5102d2b7618442f932cd9642af0ddb52a9e35e6554fefadb4bd882727be19cf4-merged.mount: Deactivated successfully.
Feb 28 10:46:38 compute-0 podman[379696]: 2026-02-28 10:46:38.998025858 +0000 UTC m=+0.209187945 container remove 80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_almeida, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:46:39 compute-0 systemd[1]: libpod-conmon-80e9d20887d2627ef0ed6aa2b52a87386c9f51f092dbc13a7b333aaf9c629404.scope: Deactivated successfully.
Feb 28 10:46:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.177286714 +0000 UTC m=+0.038909943 container create 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:46:39 compute-0 systemd[1]: Started libpod-conmon-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope.
Feb 28 10:46:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.162248098 +0000 UTC m=+0.023871347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.263614258 +0000 UTC m=+0.125237507 container init 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.269142185 +0000 UTC m=+0.130765414 container start 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.273377805 +0000 UTC m=+0.135001034 container attach 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:46:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:46:39 compute-0 stupefied_wing[379755]: {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     "0": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "devices": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "/dev/loop3"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             ],
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_name": "ceph_lv0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_size": "21470642176",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "name": "ceph_lv0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "tags": {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_name": "ceph",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.crush_device_class": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.encrypted": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.objectstore": "bluestore",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_id": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.vdo": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.with_tpm": "0"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             },
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "vg_name": "ceph_vg0"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         }
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     ],
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     "1": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "devices": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "/dev/loop4"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             ],
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_name": "ceph_lv1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_size": "21470642176",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "name": "ceph_lv1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "tags": {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_name": "ceph",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.crush_device_class": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.encrypted": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.objectstore": "bluestore",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_id": "1",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.vdo": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.with_tpm": "0"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             },
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "vg_name": "ceph_vg1"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         }
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     ],
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     "2": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "devices": [
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "/dev/loop5"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             ],
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_name": "ceph_lv2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_size": "21470642176",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "name": "ceph_lv2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "tags": {
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.cluster_name": "ceph",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.crush_device_class": "",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.encrypted": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.objectstore": "bluestore",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osd_id": "2",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.vdo": "0",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:                 "ceph.with_tpm": "0"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             },
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "type": "block",
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:             "vg_name": "ceph_vg2"
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:         }
Feb 28 10:46:39 compute-0 stupefied_wing[379755]:     ]
Feb 28 10:46:39 compute-0 stupefied_wing[379755]: }
Feb 28 10:46:39 compute-0 systemd[1]: libpod-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope: Deactivated successfully.
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.570378034 +0000 UTC m=+0.432001263 container died 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c84338f5ed72175e6c08cca3b308d706792ba98bf784fe94b0732b6950be6cc-merged.mount: Deactivated successfully.
Feb 28 10:46:39 compute-0 podman[379738]: 2026-02-28 10:46:39.617803817 +0000 UTC m=+0.479427046 container remove 68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:46:39 compute-0 systemd[1]: libpod-conmon-68d2a51195f120feaed9a40c73c8033e28b70f45f9ef894962976dd592eab312.scope: Deactivated successfully.
Feb 28 10:46:39 compute-0 sudo[379660]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:39 compute-0 sudo[379778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:46:39 compute-0 sudo[379778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:39 compute-0 sudo[379778]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:39 compute-0 sudo[379803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:46:39 compute-0 sudo[379803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.11937667 +0000 UTC m=+0.045253242 container create 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:46:40 compute-0 systemd[1]: Started libpod-conmon-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope.
Feb 28 10:46:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.096754169 +0000 UTC m=+0.022630761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.200870548 +0000 UTC m=+0.126747150 container init 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.21155238 +0000 UTC m=+0.137428942 container start 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.21577214 +0000 UTC m=+0.141648742 container attach 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:46:40 compute-0 systemd[1]: libpod-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope: Deactivated successfully.
Feb 28 10:46:40 compute-0 strange_mccarthy[379856]: 167 167
Feb 28 10:46:40 compute-0 conmon[379856]: conmon 5420984903f8ad618db5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope/container/memory.events
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.252972373 +0000 UTC m=+0.178848945 container died 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.254 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4b4885894f5fb226486b6b915f5c9632aaa01dc6ac2f277e8c0e19bf11df8b1-merged.mount: Deactivated successfully.
Feb 28 10:46:40 compute-0 podman[379840]: 2026-02-28 10:46:40.301346983 +0000 UTC m=+0.227223585 container remove 5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:46:40 compute-0 systemd[1]: libpod-conmon-5420984903f8ad618db5ee3edac67d7253b8422b33b9d34f4d313c4f61b647d7.scope: Deactivated successfully.
Feb 28 10:46:40 compute-0 podman[379881]: 2026-02-28 10:46:40.47183733 +0000 UTC m=+0.046335993 container create e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:46:40 compute-0 systemd[1]: Started libpod-conmon-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope.
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.538 243456 DEBUG nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.538 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG oslo_concurrency.lockutils [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 DEBUG nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:46:40 compute-0 nova_compute[243452]: 2026-02-28 10:46:40.539 243456 WARNING nova.compute.manager [req-6c6bebd0-d300-47a8-98fb-da393fe3ccc8 req-046c0613-db35-4c9e-a181-c51c3602c406 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received unexpected event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with vm_state active and task_state None.
Feb 28 10:46:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:46:40 compute-0 podman[379881]: 2026-02-28 10:46:40.448462138 +0000 UTC m=+0.022960781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:46:40 compute-0 podman[379881]: 2026-02-28 10:46:40.580632951 +0000 UTC m=+0.155131624 container init e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:46:40 compute-0 podman[379881]: 2026-02-28 10:46:40.587056313 +0000 UTC m=+0.161554976 container start e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:46:40 compute-0 podman[379881]: 2026-02-28 10:46:40.591300903 +0000 UTC m=+0.165799546 container attach e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:46:40 compute-0 ceph-mon[76304]: pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 28 10:46:41 compute-0 lvm[379976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:46:41 compute-0 lvm[379976]: VG ceph_vg1 finished
Feb 28 10:46:41 compute-0 lvm[379975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:46:41 compute-0 lvm[379975]: VG ceph_vg0 finished
Feb 28 10:46:41 compute-0 lvm[379978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:46:41 compute-0 lvm[379978]: VG ceph_vg2 finished
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011221662958424221 of space, bias 1.0, pg target 0.33664988875272667 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939187920449838 of space, bias 1.0, pg target 0.7481756376134951 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:46:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:46:41 compute-0 quizzical_bartik[379897]: {}
Feb 28 10:46:41 compute-0 systemd[1]: libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Deactivated successfully.
Feb 28 10:46:41 compute-0 systemd[1]: libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Consumed 1.271s CPU time.
Feb 28 10:46:41 compute-0 conmon[379897]: conmon e544e59f7b77aed0af48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope/container/memory.events
Feb 28 10:46:41 compute-0 podman[379881]: 2026-02-28 10:46:41.468959625 +0000 UTC m=+1.043458248 container died e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-17bbc5cd147fb426a68fe6d2418678497dfcf1b632ef98c0c24a03b701ce197b-merged.mount: Deactivated successfully.
Feb 28 10:46:41 compute-0 podman[379881]: 2026-02-28 10:46:41.512553849 +0000 UTC m=+1.087052472 container remove e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:46:41 compute-0 systemd[1]: libpod-conmon-e544e59f7b77aed0af4816503a443883117ebc80e0f7ae34ee18d67c0dd15550.scope: Deactivated successfully.
Feb 28 10:46:41 compute-0 sudo[379803]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:46:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:46:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:41 compute-0 sudo[379992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:46:41 compute-0 sudo[379992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:46:41 compute-0 sudo[379992]: pam_unix(sudo:session): session closed for user root
Feb 28 10:46:41 compute-0 nova_compute[243452]: 2026-02-28 10:46:41.798 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:41.972 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:46:42 compute-0 ceph-mon[76304]: pgmap v2470: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 860 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 28 10:46:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:46:42 compute-0 nova_compute[243452]: 2026-02-28 10:46:42.631 243456 DEBUG nova.compute.manager [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:46:42 compute-0 nova_compute[243452]: 2026-02-28 10:46:42.631 243456 DEBUG nova.compute.manager [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:46:42 compute-0 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:46:42 compute-0 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:42 compute-0 nova_compute[243452]: 2026-02-28 10:46:42.632 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:46:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:46:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:44 compute-0 ceph-mon[76304]: pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 28 10:46:44 compute-0 nova_compute[243452]: 2026-02-28 10:46:44.662 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:46:44 compute-0 nova_compute[243452]: 2026-02-28 10:46:44.662 243456 DEBUG nova.network.neutron [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:44 compute-0 nova_compute[243452]: 2026-02-28 10:46:44.735 243456 DEBUG oslo_concurrency.lockutils [req-70a43941-66b8-4406-9734-346a85bf6de1 req-bc0c6402-5b3c-4e6f-9131-483d6e1f333c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:45 compute-0 nova_compute[243452]: 2026-02-28 10:46:45.256 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 760 KiB/s wr, 87 op/s
Feb 28 10:46:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:46:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:46:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:46:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:46:46 compute-0 ceph-mon[76304]: pgmap v2472: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 760 KiB/s wr, 87 op/s
Feb 28 10:46:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:46:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1141842466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:46:46 compute-0 nova_compute[243452]: 2026-02-28 10:46:46.801 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:46 compute-0 nova_compute[243452]: 2026-02-28 10:46:46.836 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 603 KiB/s wr, 75 op/s
Feb 28 10:46:48 compute-0 nova_compute[243452]: 2026-02-28 10:46:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:48 compute-0 nova_compute[243452]: 2026-02-28 10:46:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:46:48 compute-0 ceph-mon[76304]: pgmap v2473: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 603 KiB/s wr, 75 op/s
Feb 28 10:46:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 10:46:49 compute-0 nova_compute[243452]: 2026-02-28 10:46:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:50 compute-0 nova_compute[243452]: 2026-02-28 10:46:50.299 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:50 compute-0 nova_compute[243452]: 2026-02-28 10:46:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:50 compute-0 ceph-mon[76304]: pgmap v2474: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 28 10:46:50 compute-0 ovn_controller[146846]: 2026-02-28T10:46:50Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:c9:6d 10.100.0.9
Feb 28 10:46:50 compute-0 ovn_controller[146846]: 2026-02-28T10:46:50Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:c9:6d 10.100.0.9
Feb 28 10:46:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 292 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 891 KiB/s wr, 103 op/s
Feb 28 10:46:51 compute-0 nova_compute[243452]: 2026-02-28 10:46:51.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:52 compute-0 ceph-mon[76304]: pgmap v2475: 305 pgs: 305 active+clean; 292 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 891 KiB/s wr, 103 op/s
Feb 28 10:46:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Feb 28 10:46:53 compute-0 nova_compute[243452]: 2026-02-28 10:46:53.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:53 compute-0 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:53 compute-0 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:46:53 compute-0 nova_compute[243452]: 2026-02-28 10:46:53.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:46:53 compute-0 ceph-mon[76304]: pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Feb 28 10:46:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:54 compute-0 nova_compute[243452]: 2026-02-28 10:46:54.213 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:46:54 compute-0 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:46:54 compute-0 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:46:54 compute-0 nova_compute[243452]: 2026-02-28 10:46:54.214 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:46:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Feb 28 10:46:55 compute-0 nova_compute[243452]: 2026-02-28 10:46:55.341 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:56 compute-0 podman[380019]: 2026-02-28 10:46:56.139829156 +0000 UTC m=+0.070157688 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:46:56 compute-0 podman[380018]: 2026-02-28 10:46:56.18635497 +0000 UTC m=+0.117429013 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:46:56 compute-0 ceph-mon[76304]: pgmap v2477: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.551 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.566 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.567 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.568 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.568 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:56 compute-0 nova_compute[243452]: 2026-02-28 10:46:56.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:46:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.582 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.583 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.583 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.584 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:46:57 compute-0 nova_compute[243452]: 2026-02-28 10:46:57.585 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.888 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.888 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:46:57.889 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:46:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:46:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970508270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.147 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.245 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.246 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.252 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.252 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:46:58 compute-0 ceph-mon[76304]: pgmap v2478: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:46:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2970508270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.494 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3197MB free_disk=59.89634436648339GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.496 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.669 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance b6b6d2e0-12e8-4804-a192-da4e2444f20e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance a4475d45-f11e-4b8c-a118-5b4a347c2506 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.670 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:46:58 compute-0 nova_compute[243452]: 2026-02-28 10:46:58.808 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:46:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:46:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:46:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:46:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873521596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:46:59 compute-0 nova_compute[243452]: 2026-02-28 10:46:59.600 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:46:59 compute-0 nova_compute[243452]: 2026-02-28 10:46:59.605 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG nova.compute.manager [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG nova.compute.manager [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing instance network info cache due to event network-changed-7b3c6111-c4ec-40c0-94eb-725c085f9600. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.240 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.241 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Refreshing network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.242 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:00 compute-0 ceph-mon[76304]: pgmap v2479: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:47:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1873521596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.595 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.595 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.603 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.604 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.604 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.605 243456 INFO nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Terminating instance
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.606 243456 DEBUG nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.609 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 kernel: tap7b3c6111-c4 (unregistering): left promiscuous mode
Feb 28 10:47:00 compute-0 NetworkManager[49805]: <info>  [1772275620.6583] device (tap7b3c6111-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.673 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 ovn_controller[146846]: 2026-02-28T10:47:00Z|01583|binding|INFO|Releasing lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 from this chassis (sb_readonly=0)
Feb 28 10:47:00 compute-0 ovn_controller[146846]: 2026-02-28T10:47:00Z|01584|binding|INFO|Setting lport 7b3c6111-c4ec-40c0-94eb-725c085f9600 down in Southbound
Feb 28 10:47:00 compute-0 ovn_controller[146846]: 2026-02-28T10:47:00Z|01585|binding|INFO|Removing iface tap7b3c6111-c4 ovn-installed in OVS
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.676 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.683 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.681 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], port_security=['fa:16:3e:c4:c9:6d 10.100.0.9 2001:db8::f816:3eff:fec4:c96d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fec4:c96d/64', 'neutron:device_id': 'a4475d45-f11e-4b8c-a118-5b4a347c2506', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=7b3c6111-c4ec-40c0-94eb-725c085f9600) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.685 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3c6111-c4ec-40c0-94eb-725c085f9600 in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 unbound from our chassis
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.687 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daf6d125-3e9a-40be-b7d7-68719005c3b1
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.706 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[24c15f21-eb6b-4f93-bcfb-cb905485410c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Feb 28 10:47:00 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 12.466s CPU time.
Feb 28 10:47:00 compute-0 systemd-machined[209480]: Machine qemu-184-instance-00000097 terminated.
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.736 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6382a3-3984-4602-9443-c80e0ab26295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.740 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[499b19f4-853a-44fb-89df-f0ef4d192e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.769 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[414b9b3b-1a42-4c5e-8a3b-64397e294642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.792 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[667e8e64-880b-443d-941f-07b51a1b3654]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaf6d125-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:48:42'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707050, 'reachable_time': 34156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380118, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.815 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90dd9e86-b8af-49ee-9c96-d2feffd80678]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707061, 'tstamp': 707061}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380119, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaf6d125-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707064, 'tstamp': 707064}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380119, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.821 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 NetworkManager[49805]: <info>  [1772275620.8261] manager: (tap7b3c6111-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.831 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaf6d125-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.832 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.833 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaf6d125-30, col_values=(('external_ids', {'iface-id': '74ab365e-66c5-40e6-8916-4199b68ab2e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:00 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:00.834 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.845 243456 INFO nova.virt.libvirt.driver [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance destroyed successfully.
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.846 243456 DEBUG nova.objects.instance [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid a4475d45-f11e-4b8c-a118-5b4a347c2506 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.860 243456 DEBUG nova.virt.libvirt.vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:46:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1318971108',display_name='tempest-TestGettingAddress-server-1318971108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1318971108',id=151,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:46:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-4j40xw8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:46:38Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=a4475d45-f11e-4b8c-a118-5b4a347c2506,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.861 243456 DEBUG nova.network.os_vif_util [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.862 243456 DEBUG nova.network.os_vif_util [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.862 243456 DEBUG os_vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.865 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b3c6111-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.868 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:00 compute-0 nova_compute[243452]: 2026-02-28 10:47:00.872 243456 INFO os_vif [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:c9:6d,bridge_name='br-int',has_traffic_filtering=True,id=7b3c6111-c4ec-40c0-94eb-725c085f9600,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3c6111-c4')
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.204 243456 INFO nova.virt.libvirt.driver [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deleting instance files /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506_del
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.205 243456 INFO nova.virt.libvirt.driver [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deletion of /var/lib/nova/instances/a4475d45-f11e-4b8c-a118-5b4a347c2506_del complete
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.268 243456 INFO nova.compute.manager [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG oslo.service.loopingcall [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.269 243456 DEBUG nova.network.neutron [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:47:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.434 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.435 243456 DEBUG oslo_concurrency.lockutils [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.435 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.436 243456 DEBUG nova.compute.manager [req-d5732811-e481-43f1-80dd-1687c315c3a8 req-ce63df15-8cb9-4e0b-a833-23fd4860c218 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-unplugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.806 243456 DEBUG nova.network.neutron [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.828 243456 INFO nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Took 0.56 seconds to deallocate network for instance.
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.878 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.878 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:01 compute-0 nova_compute[243452]: 2026-02-28 10:47:01.950 243456 DEBUG oslo_concurrency.processutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.281 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updated VIF entry in instance network info cache for port 7b3c6111-c4ec-40c0-94eb-725c085f9600. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.282 243456 DEBUG nova.network.neutron [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Updating instance_info_cache with network_info: [{"id": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "address": "fa:16:3e:c4:c9:6d", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:c96d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3c6111-c4", "ovs_interfaceid": "7b3c6111-c4ec-40c0-94eb-725c085f9600", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.304 243456 DEBUG oslo_concurrency.lockutils [req-0b361309-8026-4f6c-a8ed-4c79e918365a req-ce87b276-1c14-4f7a-a6c8-8ecedfcff7fe 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-a4475d45-f11e-4b8c-a118-5b4a347c2506" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:47:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:47:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154103194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:02 compute-0 ceph-mon[76304]: pgmap v2480: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Feb 28 10:47:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2154103194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.524 243456 DEBUG oslo_concurrency.processutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.531 243456 DEBUG nova.compute.provider_tree [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.547 243456 DEBUG nova.scheduler.client.report [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.569 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.595 243456 INFO nova.scheduler.client.report [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance a4475d45-f11e-4b8c-a118-5b4a347c2506
Feb 28 10:47:02 compute-0 nova_compute[243452]: 2026-02-28 10:47:02.685 243456 DEBUG oslo_concurrency.lockutils [None req-fb563aab-12a7-4276-82cb-cff9e187b18a be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.3 MiB/s wr, 37 op/s
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.544 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.545 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.545 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.546 243456 DEBUG oslo_concurrency.lockutils [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "a4475d45-f11e-4b8c-a118-5b4a347c2506-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.546 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] No waiting events found dispatching network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 WARNING nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received unexpected event network-vif-plugged-7b3c6111-c4ec-40c0-94eb-725c085f9600 for instance with vm_state deleted and task_state None.
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Received event network-vif-deleted-7b3c6111-c4ec-40c0-94eb-725c085f9600 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.547 243456 INFO nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Neutron deleted interface 7b3c6111-c4ec-40c0-94eb-725c085f9600; detaching it from the instance and deleting it from the info cache
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.548 243456 DEBUG nova.network.neutron [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Feb 28 10:47:03 compute-0 nova_compute[243452]: 2026-02-28 10:47:03.550 243456 DEBUG nova.compute.manager [req-79a80469-88a3-4ca4-a35b-e9dcfcf71143 req-bf4eef27-de6b-46cb-a363-450c30c135a2 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Detach interface failed, port_id=7b3c6111-c4ec-40c0-94eb-725c085f9600, reason: Instance a4475d45-f11e-4b8c-a118-5b4a347c2506 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:47:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:04 compute-0 ceph-mon[76304]: pgmap v2481: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.3 MiB/s wr, 37 op/s
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG nova.compute.manager [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG nova.compute.manager [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing instance network info cache due to event network-changed-194124fb-5288-4359-ab42-cd28d0ec06bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.643 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.644 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Refreshing network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.698 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.700 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.701 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.701 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.702 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.705 243456 INFO nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Terminating instance
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.707 243456 DEBUG nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:47:04 compute-0 kernel: tap194124fb-52 (unregistering): left promiscuous mode
Feb 28 10:47:04 compute-0 NetworkManager[49805]: <info>  [1772275624.7536] device (tap194124fb-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:47:04 compute-0 ovn_controller[146846]: 2026-02-28T10:47:04Z|01586|binding|INFO|Releasing lport 194124fb-5288-4359-ab42-cd28d0ec06bc from this chassis (sb_readonly=0)
Feb 28 10:47:04 compute-0 ovn_controller[146846]: 2026-02-28T10:47:04Z|01587|binding|INFO|Setting lport 194124fb-5288-4359-ab42-cd28d0ec06bc down in Southbound
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.758 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 ovn_controller[146846]: 2026-02-28T10:47:04Z|01588|binding|INFO|Removing iface tap194124fb-52 ovn-installed in OVS
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.760 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.764 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], port_security=['fa:16:3e:e5:09:7c 10.100.0.4 2001:db8::f816:3eff:fee5:97c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fee5:97c/64', 'neutron:device_id': 'b6b6d2e0-12e8-4804-a192-da4e2444f20e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3eafc20864ad4643a4b4fe6f12c46c36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cfc43bea-ddab-4fae-8219-6187bd0feedd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d0752a-b8d1-4ae1-a510-cd602da802b5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=194124fb-5288-4359-ab42-cd28d0ec06bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:47:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.765 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 194124fb-5288-4359-ab42-cd28d0ec06bc in datapath daf6d125-3e9a-40be-b7d7-68719005c3b1 unbound from our chassis
Feb 28 10:47:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.766 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daf6d125-3e9a-40be-b7d7-68719005c3b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:47:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.767 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e0984a0f-bac3-4459-839b-930f09e508c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:04.767 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 namespace which is not needed anymore
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.773 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Deactivated successfully.
Feb 28 10:47:04 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000096.scope: Consumed 14.393s CPU time.
Feb 28 10:47:04 compute-0 systemd-machined[209480]: Machine qemu-183-instance-00000096 terminated.
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.955 243456 INFO nova.virt.libvirt.driver [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Instance destroyed successfully.
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.956 243456 DEBUG nova.objects.instance [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lazy-loading 'resources' on Instance uuid b6b6d2e0-12e8-4804-a192-da4e2444f20e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : haproxy version is 2.8.14-c23fe91
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [NOTICE]   (378956) : path to executable is /usr/sbin/haproxy
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : Exiting Master process...
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : Exiting Master process...
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [ALERT]    (378956) : Current worker (378958) exited with code 143 (Terminated)
Feb 28 10:47:04 compute-0 neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1[378952]: [WARNING]  (378956) : All workers exited. Exiting... (0)
Feb 28 10:47:04 compute-0 systemd[1]: libpod-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope: Deactivated successfully.
Feb 28 10:47:04 compute-0 podman[380194]: 2026-02-28 10:47:04.970081604 +0000 UTC m=+0.072750581 container died 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.974 243456 DEBUG nova.virt.libvirt.vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:45:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-760627914',display_name='tempest-TestGettingAddress-server-760627914',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-760627914',id=150,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCLqeUdnCmsfQtG8FFNYG3SOwWwHCJPtg4jwUf3aliCHpxaY4aOcizIwMIBv7Kr3bQWdfyFCQBO6S7GfxcAqSJauFtY2aqdteGcGPbGDToL8K97rBhkuRxZcIqdxYgu10A==',key_name='tempest-TestGettingAddress-891123315',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:46:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3eafc20864ad4643a4b4fe6f12c46c36',ramdisk_id='',reservation_id='r-7spaod71',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-573048003',owner_user_name='tempest-TestGettingAddress-573048003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:46:03Z,user_data=None,user_id='be6308f3690c465980a359a9628b8847',uuid=b6b6d2e0-12e8-4804-a192-da4e2444f20e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.975 243456 DEBUG nova.network.os_vif_util [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converting VIF {"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.976 243456 DEBUG nova.network.os_vif_util [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.976 243456 DEBUG os_vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.979 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.979 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap194124fb-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.980 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:04 compute-0 nova_compute[243452]: 2026-02-28 10:47:04.984 243456 INFO os_vif [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:09:7c,bridge_name='br-int',has_traffic_filtering=True,id=194124fb-5288-4359-ab42-cd28d0ec06bc,network=Network(daf6d125-3e9a-40be-b7d7-68719005c3b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap194124fb-52')
Feb 28 10:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66-userdata-shm.mount: Deactivated successfully.
Feb 28 10:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fcf45e9da452fe6f31772cee126479285411300a0844b62ff3dcab6d823ade5-merged.mount: Deactivated successfully.
Feb 28 10:47:05 compute-0 podman[380194]: 2026-02-28 10:47:05.02227222 +0000 UTC m=+0.124941177 container cleanup 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:47:05 compute-0 systemd[1]: libpod-conmon-5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66.scope: Deactivated successfully.
Feb 28 10:47:05 compute-0 podman[380247]: 2026-02-28 10:47:05.116126141 +0000 UTC m=+0.069510409 container remove 5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.122 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5b87cf96-0fdb-4c78-bcd7-f1c6d8927c34]: (4, ('Sat Feb 28 10:47:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 (5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66)\n5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66\nSat Feb 28 10:47:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 (5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66)\n5836ed1421a1f350a4385e767bf5ae30ffd656e0f42ec151f0fcd9ff7258ca66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[20a4836d-4e62-42f9-8bf0-d4b06a00d441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.125 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaf6d125-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:05 compute-0 kernel: tapdaf6d125-30: left promiscuous mode
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.138 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[115bb2ba-6b8f-48d2-8a9b-fe660fe60cc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.150 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[36482e39-1a74-431f-8e32-f6217b1f81f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.151 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab0b82b-90eb-447a-a604-0ebb59a9566e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.165 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04867f1d-83cf-4e19-8f00-6f26b119503a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707043, 'reachable_time': 23185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380265, 'error': None, 'target': 'ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.166 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-daf6d125-3e9a-40be-b7d7-68719005c3b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:47:05 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:05.167 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[68b95509-ff82-47e4-b565-a7db2ae87033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:05 compute-0 systemd[1]: run-netns-ovnmeta\x2ddaf6d125\x2d3e9a\x2d40be\x2db7d7\x2d68719005c3b1.mount: Deactivated successfully.
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.303 243456 INFO nova.virt.libvirt.driver [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deleting instance files /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e_del
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.304 243456 INFO nova.virt.libvirt.driver [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deletion of /var/lib/nova/instances/b6b6d2e0-12e8-4804-a192-da4e2444f20e_del complete
Feb 28 10:47:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 258 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 19 KiB/s wr, 29 op/s
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.370 243456 INFO nova.compute.manager [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 0.66 seconds to destroy the instance on the hypervisor.
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.371 243456 DEBUG oslo.service.loopingcall [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.372 243456 DEBUG nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.372 243456 DEBUG nova.network.neutron [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.612 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.636 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.637 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.638 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.638 243456 DEBUG oslo_concurrency.lockutils [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.639 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:47:05 compute-0 nova_compute[243452]: 2026-02-28 10:47:05.640 243456 DEBUG nova.compute.manager [req-c693c3ab-a279-4893-8f40-91a5ee05b2d9 req-758b1e90-f982-4135-8864-613d3fb06520 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-unplugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.489 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updated VIF entry in instance network info cache for port 194124fb-5288-4359-ab42-cd28d0ec06bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.491 243456 DEBUG nova.network.neutron [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [{"id": "194124fb-5288-4359-ab42-cd28d0ec06bc", "address": "fa:16:3e:e5:09:7c", "network": {"id": "daf6d125-3e9a-40be-b7d7-68719005c3b1", "bridge": "br-int", "label": "tempest-network-smoke--1636653433", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:97c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3eafc20864ad4643a4b4fe6f12c46c36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap194124fb-52", "ovs_interfaceid": "194124fb-5288-4359-ab42-cd28d0ec06bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.510 243456 DEBUG oslo_concurrency.lockutils [req-1dce98c8-f165-47c8-a85a-c468b6c64772 req-7149d00d-9336-4a0a-81a4-7c1205033611 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-b6b6d2e0-12e8-4804-a192-da4e2444f20e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:47:06 compute-0 ceph-mon[76304]: pgmap v2482: 305 pgs: 305 active+clean; 258 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 19 KiB/s wr, 29 op/s
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.605 243456 DEBUG nova.network.neutron [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.624 243456 INFO nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Took 1.25 seconds to deallocate network for instance.
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.669 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.669 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:06 compute-0 nova_compute[243452]: 2026-02-28 10:47:06.715 243456 DEBUG oslo_concurrency.processutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:47:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297234911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.322 243456 DEBUG oslo_concurrency.processutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.329 243456 DEBUG nova.compute.provider_tree [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.354 243456 DEBUG nova.scheduler.client.report [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.381 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.403 243456 INFO nova.scheduler.client.report [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Deleted allocations for instance b6b6d2e0-12e8-4804-a192-da4e2444f20e
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.468 243456 DEBUG oslo_concurrency.lockutils [None req-9131f0e1-1b21-4176-86c8-368e00542a35 be6308f3690c465980a359a9628b8847 3eafc20864ad4643a4b4fe6f12c46c36 - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2297234911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.703 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 DEBUG oslo_concurrency.lockutils [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "b6b6d2e0-12e8-4804-a192-da4e2444f20e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] No waiting events found dispatching network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.704 243456 WARNING nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received unexpected event network-vif-plugged-194124fb-5288-4359-ab42-cd28d0ec06bc for instance with vm_state deleted and task_state None.
Feb 28 10:47:07 compute-0 nova_compute[243452]: 2026-02-28 10:47:07.705 243456 DEBUG nova.compute.manager [req-041af1f1-d496-46be-9e60-18e6568b4e89 req-7a775792-efd6-47cd-a111-bc397dd3d320 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Received event network-vif-deleted-194124fb-5288-4359-ab42-cd28d0ec06bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:08 compute-0 ceph-mon[76304]: pgmap v2483: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 28 10:47:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 40 op/s
Feb 28 10:47:09 compute-0 nova_compute[243452]: 2026-02-28 10:47:09.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:10 compute-0 sshd-session[380289]: Invalid user sol from 45.148.10.240 port 42128
Feb 28 10:47:10 compute-0 ceph-mon[76304]: pgmap v2484: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 40 op/s
Feb 28 10:47:10 compute-0 nova_compute[243452]: 2026-02-28 10:47:10.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:10 compute-0 sshd-session[380289]: Connection closed by invalid user sol 45.148.10.240 port 42128 [preauth]
Feb 28 10:47:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 10:47:11 compute-0 nova_compute[243452]: 2026-02-28 10:47:11.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:11 compute-0 nova_compute[243452]: 2026-02-28 10:47:11.945 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:11 compute-0 nova_compute[243452]: 2026-02-28 10:47:11.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:12 compute-0 ceph-mon[76304]: pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 10:47:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 10:47:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:14 compute-0 ceph-mon[76304]: pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Feb 28 10:47:15 compute-0 nova_compute[243452]: 2026-02-28 10:47:15.022 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.7 KiB/s wr, 50 op/s
Feb 28 10:47:15 compute-0 nova_compute[243452]: 2026-02-28 10:47:15.617 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:15 compute-0 nova_compute[243452]: 2026-02-28 10:47:15.844 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275620.842255, a4475d45-f11e-4b8c-a118-5b4a347c2506 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:47:15 compute-0 nova_compute[243452]: 2026-02-28 10:47:15.844 243456 INFO nova.compute.manager [-] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] VM Stopped (Lifecycle Event)
Feb 28 10:47:15 compute-0 nova_compute[243452]: 2026-02-28 10:47:15.866 243456 DEBUG nova.compute.manager [None req-e51bca5d-5e35-477a-9b3a-3d0b3af7b74a - - - - - -] [instance: a4475d45-f11e-4b8c-a118-5b4a347c2506] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:16 compute-0 ceph-mon[76304]: pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.7 KiB/s wr, 50 op/s
Feb 28 10:47:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Feb 28 10:47:18 compute-0 ceph-mon[76304]: pgmap v2488: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Feb 28 10:47:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.6 KiB/s wr, 26 op/s
Feb 28 10:47:19 compute-0 nova_compute[243452]: 2026-02-28 10:47:19.952 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275624.950902, b6b6d2e0-12e8-4804-a192-da4e2444f20e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:47:19 compute-0 nova_compute[243452]: 2026-02-28 10:47:19.953 243456 INFO nova.compute.manager [-] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] VM Stopped (Lifecycle Event)
Feb 28 10:47:19 compute-0 nova_compute[243452]: 2026-02-28 10:47:19.994 243456 DEBUG nova.compute.manager [None req-213b2414-1b8a-49fc-bba5-a1fb71d4a8a3 - - - - - -] [instance: b6b6d2e0-12e8-4804-a192-da4e2444f20e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:20 compute-0 nova_compute[243452]: 2026-02-28 10:47:20.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:20 compute-0 nova_compute[243452]: 2026-02-28 10:47:20.618 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:20 compute-0 ceph-mon[76304]: pgmap v2489: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.6 KiB/s wr, 26 op/s
Feb 28 10:47:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 10:47:22 compute-0 nova_compute[243452]: 2026-02-28 10:47:22.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:22 compute-0 nova_compute[243452]: 2026-02-28 10:47:22.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:47:22 compute-0 ceph-mon[76304]: pgmap v2490: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Feb 28 10:47:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:24 compute-0 nova_compute[243452]: 2026-02-28 10:47:24.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:24 compute-0 nova_compute[243452]: 2026-02-28 10:47:24.335 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:47:24 compute-0 nova_compute[243452]: 2026-02-28 10:47:24.354 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:47:24 compute-0 ceph-mon[76304]: pgmap v2491: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:25 compute-0 nova_compute[243452]: 2026-02-28 10:47:25.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:25 compute-0 nova_compute[243452]: 2026-02-28 10:47:25.622 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:26 compute-0 ceph-mon[76304]: pgmap v2492: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:27 compute-0 podman[380294]: 2026-02-28 10:47:27.141935301 +0000 UTC m=+0.073984867 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:47:27 compute-0 podman[380293]: 2026-02-28 10:47:27.17984791 +0000 UTC m=+0.112595416 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 10:47:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:28 compute-0 ceph-mon[76304]: pgmap v2493: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:47:29
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta']
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:47:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:30 compute-0 nova_compute[243452]: 2026-02-28 10:47:30.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:47:30 compute-0 nova_compute[243452]: 2026-02-28 10:47:30.624 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:30 compute-0 ceph-mon[76304]: pgmap v2494: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:47:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:47:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:32 compute-0 ceph-mon[76304]: pgmap v2495: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:33.258 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:47:33 compute-0 nova_compute[243452]: 2026-02-28 10:47:33.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:33.259 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:47:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:34 compute-0 ceph-mon[76304]: pgmap v2496: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:35 compute-0 nova_compute[243452]: 2026-02-28 10:47:35.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:35 compute-0 nova_compute[243452]: 2026-02-28 10:47:35.626 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:36 compute-0 ceph-mon[76304]: pgmap v2497: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:38 compute-0 ceph-mon[76304]: pgmap v2498: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:40 compute-0 nova_compute[243452]: 2026-02-28 10:47:40.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:40 compute-0 nova_compute[243452]: 2026-02-28 10:47:40.628 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:40 compute-0 ceph-mon[76304]: pgmap v2499: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:41.262 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4568978923368619e-05 of space, bias 1.0, pg target 0.004370693677010586 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939168048353643 of space, bias 1.0, pg target 0.7481750414506093 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.24958909274374e-07 of space, bias 4.0, pg target 0.0008699506911292489 quantized to 16 (current 16)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:47:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.541 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.542 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.560 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.648 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.649 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.662 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.663 243456 INFO nova.compute.claims [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:47:41 compute-0 sudo[380336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:47:41 compute-0 sudo[380336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:41 compute-0 sudo[380336]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:41 compute-0 sudo[380361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:47:41 compute-0 sudo[380361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:41 compute-0 nova_compute[243452]: 2026-02-28 10:47:41.833 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:42 compute-0 sudo[380361]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:47:42 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3095923840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.372 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.382 243456 DEBUG nova.compute.provider_tree [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:47:42 compute-0 sudo[380438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:47:42 compute-0 sudo[380438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:42 compute-0 sudo[380438]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.403 243456 DEBUG nova.scheduler.client.report [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.429 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.431 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:47:42 compute-0 sudo[380464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:47:42 compute-0 sudo[380464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.478 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.478 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.497 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.513 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.597 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.599 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.599 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating image(s)
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.625 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.651 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.680 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.684 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.735895242 +0000 UTC m=+0.057591590 container create 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.750 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.751 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.751 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.752 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:42 compute-0 systemd[1]: Started libpod-conmon-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope.
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.772 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:42 compute-0 nova_compute[243452]: 2026-02-28 10:47:42.776 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d47f4919-0816-4363-b2eb-fa6580859e88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.701941616 +0000 UTC m=+0.023637984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.843560557 +0000 UTC m=+0.165256915 container init 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.851172493 +0000 UTC m=+0.172868831 container start 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:47:42 compute-0 nervous_germain[380592]: 167 167
Feb 28 10:47:42 compute-0 systemd[1]: libpod-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope: Deactivated successfully.
Feb 28 10:47:42 compute-0 ceph-mon[76304]: pgmap v2500: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:47:42 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3095923840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.889869165 +0000 UTC m=+0.211565503 container attach 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:47:42 compute-0 podman[380547]: 2026-02-28 10:47:42.890199974 +0000 UTC m=+0.211896312 container died 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6971a519c789758c877095a2786f7e5f62f7674400f4cf4b7a307866990cc86e-merged.mount: Deactivated successfully.
Feb 28 10:47:43 compute-0 podman[380547]: 2026-02-28 10:47:43.094105038 +0000 UTC m=+0.415801416 container remove 12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 28 10:47:43 compute-0 systemd[1]: libpod-conmon-12fbd25f999fdbc8b1d565480585f52ca1a59b6f204b26da108a325a2a1786d6.scope: Deactivated successfully.
Feb 28 10:47:43 compute-0 podman[380634]: 2026-02-28 10:47:43.230570221 +0000 UTC m=+0.039443653 container create a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.229 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 d47f4919-0816-4363-b2eb-fa6580859e88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:43 compute-0 systemd[1]: Started libpod-conmon-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope.
Feb 28 10:47:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.297 243456 DEBUG nova.policy [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7efc7418904f44aa8c8c9c3e06ac552b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.309 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] resizing rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:47:43 compute-0 podman[380634]: 2026-02-28 10:47:43.311774433 +0000 UTC m=+0.120647885 container init a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 28 10:47:43 compute-0 podman[380634]: 2026-02-28 10:47:43.215272486 +0000 UTC m=+0.024145918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:43 compute-0 podman[380634]: 2026-02-28 10:47:43.321612143 +0000 UTC m=+0.130485555 container start a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:47:43 compute-0 podman[380634]: 2026-02-28 10:47:43.324964328 +0000 UTC m=+0.133837790 container attach a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:47:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.382 243456 DEBUG nova.objects.instance [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'migration_context' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.402 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ensure instance console log exists: /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.403 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:43 compute-0 nova_compute[243452]: 2026-02-28 10:47:43.404 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:43 compute-0 practical_knuth[380668]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:47:43 compute-0 practical_knuth[380668]: --> All data devices are unavailable
Feb 28 10:47:43 compute-0 systemd[1]: libpod-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope: Deactivated successfully.
Feb 28 10:47:43 compute-0 podman[380742]: 2026-02-28 10:47:43.844769354 +0000 UTC m=+0.028349888 container died a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-58c9c2eefec0cad94d3637b580260184eae09496d5d0d26507afee33eb204cb3-merged.mount: Deactivated successfully.
Feb 28 10:47:43 compute-0 podman[380742]: 2026-02-28 10:47:43.894220581 +0000 UTC m=+0.077801065 container remove a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:47:43 compute-0 systemd[1]: libpod-conmon-a75de0ab38cbf79ac174d4c92398a6cb6caa9236c1d4ef2cae5dce7882b4cf8a.scope: Deactivated successfully.
Feb 28 10:47:43 compute-0 sudo[380464]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:43 compute-0 sudo[380757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:47:43 compute-0 sudo[380757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:43 compute-0 sudo[380757]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:44 compute-0 sudo[380782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:47:44 compute-0 sudo[380782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.291417777 +0000 UTC m=+0.032997290 container create 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:47:44 compute-0 systemd[1]: Started libpod-conmon-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope.
Feb 28 10:47:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.364044794 +0000 UTC m=+0.105624327 container init 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.368734348 +0000 UTC m=+0.110313861 container start 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:47:44 compute-0 brave_payne[380837]: 167 167
Feb 28 10:47:44 compute-0 systemd[1]: libpod-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope: Deactivated successfully.
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.373332719 +0000 UTC m=+0.114912232 container attach 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.373544225 +0000 UTC m=+0.115123738 container died 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.27782905 +0000 UTC m=+0.019408563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-82134f1a858f243fe4d866c886a713ca8c8a118d78b88252257b216dfc2c9c33-merged.mount: Deactivated successfully.
Feb 28 10:47:44 compute-0 podman[380820]: 2026-02-28 10:47:44.408552521 +0000 UTC m=+0.150132034 container remove 7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:47:44 compute-0 systemd[1]: libpod-conmon-7bab8cf4441fcb91e355538e3aa8cbdab5acb4881fcb2ece6e6a3193e17e6ec4.scope: Deactivated successfully.
Feb 28 10:47:44 compute-0 podman[380861]: 2026-02-28 10:47:44.523848433 +0000 UTC m=+0.036071668 container create 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:47:44 compute-0 systemd[1]: Started libpod-conmon-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope.
Feb 28 10:47:44 compute-0 podman[380861]: 2026-02-28 10:47:44.508824745 +0000 UTC m=+0.021048000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:44 compute-0 podman[380861]: 2026-02-28 10:47:44.63686914 +0000 UTC m=+0.149092395 container init 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:47:44 compute-0 podman[380861]: 2026-02-28 10:47:44.652762812 +0000 UTC m=+0.164986047 container start 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:47:44 compute-0 podman[380861]: 2026-02-28 10:47:44.657133057 +0000 UTC m=+0.169356322 container attach 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:47:44 compute-0 ceph-mon[76304]: pgmap v2501: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Feb 28 10:47:44 compute-0 nifty_euclid[380879]: {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     "0": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "devices": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "/dev/loop3"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             ],
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_name": "ceph_lv0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_size": "21470642176",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "name": "ceph_lv0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "tags": {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_name": "ceph",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.crush_device_class": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.encrypted": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.objectstore": "bluestore",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_id": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.vdo": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.with_tpm": "0"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             },
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "vg_name": "ceph_vg0"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         }
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     ],
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     "1": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "devices": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "/dev/loop4"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             ],
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_name": "ceph_lv1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_size": "21470642176",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "name": "ceph_lv1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "tags": {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_name": "ceph",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.crush_device_class": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.encrypted": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.objectstore": "bluestore",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_id": "1",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.vdo": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.with_tpm": "0"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             },
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "vg_name": "ceph_vg1"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         }
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     ],
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     "2": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "devices": [
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "/dev/loop5"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             ],
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_name": "ceph_lv2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_size": "21470642176",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "name": "ceph_lv2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "tags": {
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.cluster_name": "ceph",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.crush_device_class": "",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.encrypted": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.objectstore": "bluestore",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osd_id": "2",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.vdo": "0",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:                 "ceph.with_tpm": "0"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             },
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "type": "block",
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:             "vg_name": "ceph_vg2"
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:         }
Feb 28 10:47:44 compute-0 nifty_euclid[380879]:     ]
Feb 28 10:47:44 compute-0 nifty_euclid[380879]: }
Feb 28 10:47:45 compute-0 systemd[1]: libpod-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope: Deactivated successfully.
Feb 28 10:47:45 compute-0 podman[380861]: 2026-02-28 10:47:45.002448416 +0000 UTC m=+0.514671731 container died 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:47:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-89558d877cfea8159f4f6f39a834eb318395116a649029eb870a09309e21dd97-merged.mount: Deactivated successfully.
Feb 28 10:47:45 compute-0 podman[380861]: 2026-02-28 10:47:45.054886058 +0000 UTC m=+0.567109303 container remove 2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:47:45 compute-0 systemd[1]: libpod-conmon-2e17f67e31ec90de083cfbba7d4539aa5f92a1102817379751cf844e40a5b2ae.scope: Deactivated successfully.
Feb 28 10:47:45 compute-0 sudo[380782]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 998 KiB/s wr, 4 op/s
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.299 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Successfully created port: 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:47:45 compute-0 nova_compute[243452]: 2026-02-28 10:47:45.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:45 compute-0 sudo[380901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:47:45 compute-0 sudo[380901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:45 compute-0 sudo[380901]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:45 compute-0 sudo[380926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:47:45 compute-0 sudo[380926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.20588506 +0000 UTC m=+0.038277380 container create 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:47:46 compute-0 systemd[1]: Started libpod-conmon-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope.
Feb 28 10:47:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.265791345 +0000 UTC m=+0.098183675 container init 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.270820578 +0000 UTC m=+0.103212908 container start 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.273566407 +0000 UTC m=+0.105958737 container attach 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 10:47:46 compute-0 determined_cannon[380981]: 167 167
Feb 28 10:47:46 compute-0 systemd[1]: libpod-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope: Deactivated successfully.
Feb 28 10:47:46 compute-0 conmon[380981]: conmon 4dc5d8a17530969eaa99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope/container/memory.events
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.276675805 +0000 UTC m=+0.109068135 container died 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.187231989 +0000 UTC m=+0.019624349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bfa9e997f8fe54862bcaee426d0fdc9e367b56d59afbc63292133e1945d77c8-merged.mount: Deactivated successfully.
Feb 28 10:47:46 compute-0 podman[380964]: 2026-02-28 10:47:46.306656128 +0000 UTC m=+0.139048458 container remove 4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:47:46 compute-0 systemd[1]: libpod-conmon-4dc5d8a17530969eaa996038d8d7e3c6d1c3880825a482372ae73aaaae72550f.scope: Deactivated successfully.
Feb 28 10:47:46 compute-0 podman[381004]: 2026-02-28 10:47:46.493333792 +0000 UTC m=+0.038389664 container create 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:47:46 compute-0 systemd[1]: Started libpod-conmon-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope.
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.546 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Successfully updated port: 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 28 10:47:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:46 compute-0 podman[381004]: 2026-02-28 10:47:46.476912255 +0000 UTC m=+0.021968157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:47:46 compute-0 podman[381004]: 2026-02-28 10:47:46.584741444 +0000 UTC m=+0.129797316 container init 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:47:46 compute-0 podman[381004]: 2026-02-28 10:47:46.589310954 +0000 UTC m=+0.134366866 container start 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:47:46 compute-0 podman[381004]: 2026-02-28 10:47:46.593901595 +0000 UTC m=+0.138957497 container attach 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG nova.compute.manager [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG nova.compute.manager [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.780 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.781 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:47:46 compute-0 nova_compute[243452]: 2026-02-28 10:47:46.781 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:47:46 compute-0 ceph-mon[76304]: pgmap v2502: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 998 KiB/s wr, 4 op/s
Feb 28 10:47:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 MiB/s wr, 5 op/s
Feb 28 10:47:47 compute-0 nova_compute[243452]: 2026-02-28 10:47:47.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:47 compute-0 lvm[381098]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:47:47 compute-0 lvm[381098]: VG ceph_vg0 finished
Feb 28 10:47:47 compute-0 lvm[381099]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:47:47 compute-0 lvm[381099]: VG ceph_vg1 finished
Feb 28 10:47:47 compute-0 lvm[381101]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:47:47 compute-0 lvm[381101]: VG ceph_vg2 finished
Feb 28 10:47:47 compute-0 nova_compute[243452]: 2026-02-28 10:47:47.477 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:47:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:47:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:47:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:47:47 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:47:47 compute-0 elated_golick[381020]: {}
Feb 28 10:47:47 compute-0 systemd[1]: libpod-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Deactivated successfully.
Feb 28 10:47:47 compute-0 systemd[1]: libpod-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Consumed 1.453s CPU time.
Feb 28 10:47:47 compute-0 podman[381004]: 2026-02-28 10:47:47.555187135 +0000 UTC m=+1.100243007 container died 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-30466ed993cdc133fb8c981459176f6c61377be298d3b8e0dc6ca2e4a2b67e37-merged.mount: Deactivated successfully.
Feb 28 10:47:47 compute-0 podman[381004]: 2026-02-28 10:47:47.593876217 +0000 UTC m=+1.138932089 container remove 65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:47:47 compute-0 nova_compute[243452]: 2026-02-28 10:47:47.609 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:47:47 compute-0 systemd[1]: libpod-conmon-65a8f8246a273b56cfe3c31049e63ad8be954f05da47ac7a608588c75d9f77cb.scope: Deactivated successfully.
Feb 28 10:47:47 compute-0 sudo[380926]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:47:47 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:47 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:47:47 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:47 compute-0 sudo[381115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:47:47 compute-0 sudo[381115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:47:47 compute-0 sudo[381115]: pam_unix(sudo:session): session closed for user root
Feb 28 10:47:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:47:47 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2413801720' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:47:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:47:48 compute-0 nova_compute[243452]: 2026-02-28 10:47:48.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:48 compute-0 nova_compute[243452]: 2026-02-28 10:47:48.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:47:48 compute-0 ceph-mon[76304]: pgmap v2503: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 MiB/s wr, 5 op/s
Feb 28 10:47:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:49 compute-0 nova_compute[243452]: 2026-02-28 10:47:49.215 243456 DEBUG nova.network.neutron [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:49 compute-0 nova_compute[243452]: 2026-02-28 10:47:49.229 243456 DEBUG oslo_concurrency.lockutils [req-9dd28a7b-b09a-41dc-9d7b-37e3f5dbd03c req-e765113c-acaa-4d82-82cf-80e8d6c17608 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:47:49 compute-0 nova_compute[243452]: 2026-02-28 10:47:49.229 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:47:49 compute-0 nova_compute[243452]: 2026-02-28 10:47:49.230 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:47:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:49 compute-0 nova_compute[243452]: 2026-02-28 10:47:49.374 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.518 243456 DEBUG nova.network.neutron [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.543 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.544 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance network_info: |[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.548 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start _get_guest_xml network_info=[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.555 243456 WARNING nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.563 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.565 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.569 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.570 243456 DEBUG nova.virt.libvirt.host [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.570 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.571 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.572 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.572 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.573 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.574 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.574 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.575 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.575 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.576 243456 DEBUG nova.virt.hardware [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.580 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:50 compute-0 nova_compute[243452]: 2026-02-28 10:47:50.848 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:50 compute-0 ceph-mon[76304]: pgmap v2504: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:47:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428197361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.162 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.190 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.195 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:47:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361919348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.743 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.745 243456 DEBUG nova.virt.libvirt.vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:47:42Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.745 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.746 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.748 243456 DEBUG nova.objects.instance [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_devices' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.765 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <uuid>d47f4919-0816-4363-b2eb-fa6580859e88</uuid>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <name>instance-00000098</name>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:name>tempest-TestShelveInstance-server-373381639</nova:name>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:47:50</nova:creationTime>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:user uuid="7efc7418904f44aa8c8c9c3e06ac552b">tempest-TestShelveInstance-1186988285-project-member</nova:user>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:project uuid="d7baef4f72e742e8aa7530d7a586ed2b">tempest-TestShelveInstance-1186988285</nova:project>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <nova:port uuid="12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4">
Feb 28 10:47:51 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <system>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="serial">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="uuid">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </system>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <os>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </os>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <features>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </features>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk">
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk.config">
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </source>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:47:51 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b5:24:ce"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <target dev="tap12bbec3d-25"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log" append="off"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <video>
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </video>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:47:51 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:47:51 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:47:51 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:47:51 compute-0 nova_compute[243452]: </domain>
Feb 28 10:47:51 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.766 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Preparing to wait for external event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.767 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.768 243456 DEBUG nova.virt.libvirt.vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:47:42Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.768 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.769 243456 DEBUG nova.network.os_vif_util [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.769 243456 DEBUG os_vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.770 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.770 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.771 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.775 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.775 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12bbec3d-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.776 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12bbec3d-25, col_values=(('external_ids', {'iface-id': '12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:24:ce', 'vm-uuid': 'd47f4919-0816-4363-b2eb-fa6580859e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:51 compute-0 NetworkManager[49805]: <info>  [1772275671.8210] manager: (tap12bbec3d-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/665)
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.823 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.829 243456 INFO os_vif [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.873 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No VIF found with MAC fa:16:3e:b5:24:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.874 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Using config drive
Feb 28 10:47:51 compute-0 nova_compute[243452]: 2026-02-28 10:47:51.895 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2428197361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:47:51 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3361919348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.204 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating config drive at /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.207 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp149tadvv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.339 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp149tadvv" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.362 243456 DEBUG nova.storage.rbd_utils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.365 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.526 243456 DEBUG oslo_concurrency.processutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.528 243456 INFO nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting local config drive /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config because it was imported into RBD.
Feb 28 10:47:52 compute-0 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.5946] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Feb 28 10:47:52 compute-0 ovn_controller[146846]: 2026-02-28T10:47:52Z|01589|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 10:47:52 compute-0 ovn_controller[146846]: 2026-02-28T10:47:52Z|01590|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.598 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.604 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.610 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.612 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab bound to our chassis
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.614 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:47:52 compute-0 systemd-udevd[381275]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.627 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b11eae-8cff-4001-8790-a4d0b61c5644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.628 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap978ebc43-71 in ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:47:52 compute-0 ovn_controller[146846]: 2026-02-28T10:47:52Z|01591|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 10:47:52 compute-0 ovn_controller[146846]: 2026-02-28T10:47:52Z|01592|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.631 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap978ebc43-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.631 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bf64da6f-a892-46f2-b375-031d83101ad2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:52 compute-0 systemd-machined[209480]: New machine qemu-185-instance-00000098.
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.632 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[289e3795-044a-4b01-bce1-3136427791ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.6451] device (tap12bbec3d-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.6460] device (tap12bbec3d-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.646 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0d14eed3-8ae2-4f84-ba5e-1bf6f8e7bc59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000098.
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.660 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec5f052-2e0f-49d0-b7e3-bdffae813bc5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.688 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[8b194180-5284-4495-a6a2-5c1bb0f2dbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 systemd-udevd[381279]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.694 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[04c33958-267c-4841-acb8-5c1d66b9eeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.6956] manager: (tap978ebc43-70): new Veth device (/org/freedesktop/NetworkManager/Devices/667)
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.723 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[93ac0773-da0d-44f4-bda5-5fea566ee2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.725 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[24bea523-9cb4-4b68-9237-2bee2e347861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.7516] device (tap978ebc43-70): carrier: link connected
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.755 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7a375f-f0af-4c2c-ba16-acfb1c94c15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.771 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd41312-028d-414f-bb67-bacd129a0357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718003, 'reachable_time': 15333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381308, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.787 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[69cef214-de04-4b1a-af34-e9ff218fa90c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8571'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718003, 'tstamp': 718003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381309, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.804 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[250c24da-188c-4df4-9376-86ed2e2be69c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718003, 'reachable_time': 15333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381310, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f9030289-804c-40e3-af5d-0aadc580bf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.860 243456 DEBUG nova.compute.manager [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.861 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.861 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.862 243456 DEBUG oslo_concurrency.lockutils [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.862 243456 DEBUG nova.compute.manager [req-163c70fb-e4e1-4a22-b393-fbe47e125498 req-24116c30-c0dc-42f0-9579-26b560e46881 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Processing event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.885 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[e3201b4d-fc26-426c-ab4a-83e07e220482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.886 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.887 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap978ebc43-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:52 compute-0 ceph-mon[76304]: pgmap v2505: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:52 compute-0 kernel: tap978ebc43-70: entered promiscuous mode
Feb 28 10:47:52 compute-0 NetworkManager[49805]: <info>  [1772275672.9299] manager: (tap978ebc43-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.931 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap978ebc43-70, col_values=(('external_ids', {'iface-id': 'a5bdb09f-93f0-411c-9d75-fc368a22a5f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:47:52 compute-0 ovn_controller[146846]: 2026-02-28T10:47:52Z|01593|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.934 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:47:52 compute-0 nova_compute[243452]: 2026-02-28 10:47:52.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.939 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfcd257-50c3-4438-b40f-32c2ea8c6df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.940 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:47:52 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:52.941 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'env', 'PROCESS_TAG=haproxy-978ebc43-7003-4100-92ba-e083df3fe8ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/978ebc43-7003-4100-92ba-e083df3fe8ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:47:53 compute-0 podman[381348]: 2026-02-28 10:47:53.281813286 +0000 UTC m=+0.049159630 container create 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:53 compute-0 systemd[1]: Started libpod-conmon-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope.
Feb 28 10:47:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818e01895170b08d6e90bb8185ddea8a836b8bce4f18e3c5adc25d8b4adae354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:47:53 compute-0 podman[381348]: 2026-02-28 10:47:53.252678397 +0000 UTC m=+0.020024521 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.350 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.350 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3504941, d47f4919-0816-4363-b2eb-fa6580859e88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.351 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Started (Lifecycle Event)
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.354 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:47:53 compute-0 podman[381348]: 2026-02-28 10:47:53.354504355 +0000 UTC m=+0.121850479 container init 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.357 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance spawned successfully.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.357 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:47:53 compute-0 podman[381348]: 2026-02-28 10:47:53.358706635 +0000 UTC m=+0.126052739 container start 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 28 10:47:53 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : New worker (381402) forked
Feb 28 10:47:53 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : Loading success.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.382 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.389 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.394 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.395 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.395 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.396 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.397 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.397 243456 DEBUG nova.virt.libvirt.driver [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.449 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.449 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3530557, d47f4919-0816-4363-b2eb-fa6580859e88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.450 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Paused (Lifecycle Event)
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.473 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.477 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275673.3531463, d47f4919-0816-4363-b2eb-fa6580859e88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.477 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Resumed (Lifecycle Event)
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.483 243456 INFO nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 10.89 seconds to spawn the instance on the hypervisor.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.483 243456 DEBUG nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.494 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.498 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.522 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.558 243456 INFO nova.compute.manager [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 11.95 seconds to build instance.
Feb 28 10:47:53 compute-0 nova_compute[243452]: 2026-02-28 10:47:53.578 243456 DEBUG oslo_concurrency.lockutils [None req-89102b74-9d73-470d-847e-f545b6b05df2 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.356 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.356 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:54 compute-0 ceph-mon[76304]: pgmap v2506: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.951 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 DEBUG oslo_concurrency.lockutils [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 DEBUG nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:47:54 compute-0 nova_compute[243452]: 2026-02-28 10:47:54.952 243456 WARNING nova.compute.manager [req-48453461-b0fc-48db-a9fb-2e24c7de5db1 req-3d744029-b093-4003-a8a7-a5e39760ef9f 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state None.
Feb 28 10:47:55 compute-0 nova_compute[243452]: 2026-02-28 10:47:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:47:55 compute-0 nova_compute[243452]: 2026-02-28 10:47:55.850 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:56 compute-0 ceph-mon[76304]: pgmap v2507: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 28 10:47:56 compute-0 nova_compute[243452]: 2026-02-28 10:47:56.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:47:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 829 KiB/s wr, 79 op/s
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.354 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.355 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.355 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:57 compute-0 ovn_controller[146846]: 2026-02-28T10:47:57Z|01594|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.551 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:57 compute-0 NetworkManager[49805]: <info>  [1772275677.5532] manager: (patch-br-int-to-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.554 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:57 compute-0 NetworkManager[49805]: <info>  [1772275677.5553] manager: (patch-provnet-ee90750e-5e3d-41ed-9e2e-f53060d459fb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Feb 28 10:47:57 compute-0 ovn_controller[146846]: 2026-02-28T10:47:57Z|01595|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.749 243456 DEBUG nova.compute.manager [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.750 243456 DEBUG nova.compute.manager [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.750 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.751 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.751 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:47:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:47:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128345171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.889 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.890 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:47:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.905 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.964 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:47:57 compute-0 nova_compute[243452]: 2026-02-28 10:47:57.965 243456 DEBUG nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 28 10:47:58 compute-0 podman[381436]: 2026-02-28 10:47:58.001374611 +0000 UTC m=+0.059861845 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:47:58 compute-0 podman[381435]: 2026-02-28 10:47:58.020364651 +0000 UTC m=+0.078988069 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.127 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.128 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3369MB free_disk=59.966501395218074GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.128 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.129 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.226 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Instance d47f4919-0816-4363-b2eb-fa6580859e88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.277 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:47:58 compute-0 ceph-mon[76304]: pgmap v2508: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 829 KiB/s wr, 79 op/s
Feb 28 10:47:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2128345171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:47:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100574336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.817 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.822 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.844 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.867 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:47:58 compute-0 nova_compute[243452]: 2026-02-28 10:47:58.867 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:47:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:47:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 95 op/s
Feb 28 10:47:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1100574336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:47:59 compute-0 nova_compute[243452]: 2026-02-28 10:47:59.706 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:47:59 compute-0 nova_compute[243452]: 2026-02-28 10:47:59.707 243456 DEBUG nova.network.neutron [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:47:59 compute-0 nova_compute[243452]: 2026-02-28 10:47:59.727 243456 DEBUG oslo_concurrency.lockutils [req-d9761376-ece6-415e-aa52-48c21c9dcd7a req-54cfc01e-86e6-4958-a056-71cb31094e8c 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:00 compute-0 ceph-mon[76304]: pgmap v2509: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 95 op/s
Feb 28 10:48:00 compute-0 nova_compute[243452]: 2026-02-28 10:48:00.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:48:01 compute-0 nova_compute[243452]: 2026-02-28 10:48:01.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:01 compute-0 nova_compute[243452]: 2026-02-28 10:48:01.863 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:02 compute-0 ceph-mon[76304]: pgmap v2510: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:48:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:48:03 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 28 10:48:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:04 compute-0 ovn_controller[146846]: 2026-02-28T10:48:04Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:48:04 compute-0 ovn_controller[146846]: 2026-02-28T10:48:04Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:48:04 compute-0 ceph-mon[76304]: pgmap v2511: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 28 10:48:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 206 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 533 KiB/s wr, 82 op/s
Feb 28 10:48:05 compute-0 nova_compute[243452]: 2026-02-28 10:48:05.886 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:06 compute-0 ceph-mon[76304]: pgmap v2512: 305 pgs: 305 active+clean; 206 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 533 KiB/s wr, 82 op/s
Feb 28 10:48:06 compute-0 nova_compute[243452]: 2026-02-28 10:48:06.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 71 op/s
Feb 28 10:48:08 compute-0 ceph-mon[76304]: pgmap v2513: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 71 op/s
Feb 28 10:48:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 28 10:48:10 compute-0 ceph-mon[76304]: pgmap v2514: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 28 10:48:10 compute-0 nova_compute[243452]: 2026-02-28 10:48:10.890 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:48:11 compute-0 nova_compute[243452]: 2026-02-28 10:48:11.865 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:12 compute-0 ceph-mon[76304]: pgmap v2515: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:48:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread fragmentation_score=0.004223 took=0.000069s
Feb 28 10:48:12 compute-0 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 10:48:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 10:48:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:48:13 compute-0 sshd-session[381504]: Received disconnect from 103.67.78.132 port 37226:11: Bye Bye [preauth]
Feb 28 10:48:13 compute-0 sshd-session[381504]: Disconnected from authenticating user root 103.67.78.132 port 37226 [preauth]
Feb 28 10:48:13 compute-0 sshd-session[381502]: Received disconnect from 103.217.144.161 port 39666:11: Bye Bye [preauth]
Feb 28 10:48:13 compute-0 sshd-session[381502]: Disconnected from authenticating user root 103.217.144.161 port 39666 [preauth]
Feb 28 10:48:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:14 compute-0 nova_compute[243452]: 2026-02-28 10:48:14.184 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:14 compute-0 nova_compute[243452]: 2026-02-28 10:48:14.184 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:14 compute-0 nova_compute[243452]: 2026-02-28 10:48:14.185 243456 INFO nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shelving
Feb 28 10:48:14 compute-0 nova_compute[243452]: 2026-02-28 10:48:14.205 243456 DEBUG nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 28 10:48:14 compute-0 ceph-mon[76304]: pgmap v2516: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 28 10:48:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 28 10:48:15 compute-0 nova_compute[243452]: 2026-02-28 10:48:15.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 10:48:16 compute-0 NetworkManager[49805]: <info>  [1772275696.4965] device (tap12bbec3d-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:48:16 compute-0 ovn_controller[146846]: 2026-02-28T10:48:16Z|01596|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 10:48:16 compute-0 ovn_controller[146846]: 2026-02-28T10:48:16Z|01597|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.503 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 ovn_controller[146846]: 2026-02-28T10:48:16Z|01598|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 10:48:16 compute-0 ceph-mon[76304]: pgmap v2517: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.524 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.526 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.527 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.528 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.529 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c80f07c6-0eab-4d40-84bc-580f0b0540dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.530 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace which is not needed anymore
Feb 28 10:48:16 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 28 10:48:16 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000098.scope: Consumed 12.411s CPU time.
Feb 28 10:48:16 compute-0 systemd-machined[209480]: Machine qemu-185-instance-00000098 terminated.
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : haproxy version is 2.8.14-c23fe91
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [NOTICE]   (381400) : path to executable is /usr/sbin/haproxy
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : Exiting Master process...
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : Exiting Master process...
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [ALERT]    (381400) : Current worker (381402) exited with code 143 (Terminated)
Feb 28 10:48:16 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[381396]: [WARNING]  (381400) : All workers exited. Exiting... (0)
Feb 28 10:48:16 compute-0 systemd[1]: libpod-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope: Deactivated successfully.
Feb 28 10:48:16 compute-0 conmon[381396]: conmon 579179baa4938721d005 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope/container/memory.events
Feb 28 10:48:16 compute-0 podman[381531]: 2026-02-28 10:48:16.674909733 +0000 UTC m=+0.057006224 container died 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff-userdata-shm.mount: Deactivated successfully.
Feb 28 10:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-818e01895170b08d6e90bb8185ddea8a836b8bce4f18e3c5adc25d8b4adae354-merged.mount: Deactivated successfully.
Feb 28 10:48:16 compute-0 podman[381531]: 2026-02-28 10:48:16.731660258 +0000 UTC m=+0.113756779 container cleanup 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:48:16 compute-0 systemd[1]: libpod-conmon-579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff.scope: Deactivated successfully.
Feb 28 10:48:16 compute-0 podman[381574]: 2026-02-28 10:48:16.808820394 +0000 UTC m=+0.054694808 container remove 579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.814 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[9b283b79-967a-457d-a4a2-e2404ce09197]: (4, ('Sat Feb 28 10:48:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff)\n579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff\nSat Feb 28 10:48:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff)\n579179baa4938721d005d4b856d08a86f9278ba7cccced47ddb6cd99ad63a1ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.816 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ec96db-2688-42f5-b0f5-92a2e76e3f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.818 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.820 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 kernel: tap978ebc43-70: left promiscuous mode
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.828 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.831 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[d3911a2e-802b-40dd-ae98-dc86194e6e08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.844 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8ea97a-a471-4c88-b59f-506119aab627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.846 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[a6de2a6d-ac32-4785-a371-a9b066468086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[17ca40af-1bdb-44a5-8c5d-0da45af96151]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717997, 'reachable_time': 27699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381595, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.863 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:48:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d978ebc43\x2d7003\x2d4100\x2d92ba\x2de083df3fe8ab.mount: Deactivated successfully.
Feb 28 10:48:16 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:16.863 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[638950ad-f2ec-4fc3-9a79-1fe9eb462ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:16 compute-0 nova_compute[243452]: 2026-02-28 10:48:16.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.225 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance shutdown successfully after 3 seconds.
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.232 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.232 243456 DEBUG nova.objects.instance [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'numa_topology' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.397 243456 DEBUG nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.398 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.398 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 DEBUG oslo_concurrency.lockutils [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 DEBUG nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.399 243456 WARNING nova.compute.manager [req-2e7f3fbe-c434-4f49-93f2-153df84948a4 req-22bbd2d7-c763-4f5b-bbf6-0bd5258d3823 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state shelving.
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.487 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Beginning cold snapshot process
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.640 243456 DEBUG nova.virt.libvirt.imagebackend [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No parent info for f3755b31-d5fb-4f1b-9b58-6260fd65fc72; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 28 10:48:17 compute-0 nova_compute[243452]: 2026-02-28 10:48:17.840 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] creating snapshot(b382f622a0414db797cfdbf5ae588102) on rbd image(d47f4919-0816-4363-b2eb-fa6580859e88_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:48:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb 28 10:48:18 compute-0 ceph-mon[76304]: pgmap v2518: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Feb 28 10:48:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb 28 10:48:18 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb 28 10:48:18 compute-0 nova_compute[243452]: 2026-02-28 10:48:18.585 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] cloning vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk@b382f622a0414db797cfdbf5ae588102 to images/4c20447f-84ff-45ca-86c2-e5ad9d598628 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:48:18 compute-0 nova_compute[243452]: 2026-02-28 10:48:18.704 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] flattening images/4c20447f-84ff-45ca-86c2-e5ad9d598628 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:48:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.219 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] removing snapshot(b382f622a0414db797cfdbf5ae588102) on rbd image(d47f4919-0816-4363-b2eb-fa6580859e88_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 28 10:48:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 47 KiB/s wr, 12 op/s
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.494 243456 DEBUG nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.495 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.496 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.496 243456 DEBUG oslo_concurrency.lockutils [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.497 243456 DEBUG nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.497 243456 WARNING nova.compute.manager [req-55ac6c79-e87e-4b34-b99b-814bd8d4c78c req-5a0006d0-74de-4ab4-baf4-22afdcebf2c4 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state shelving_image_uploading.
Feb 28 10:48:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb 28 10:48:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb 28 10:48:19 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb 28 10:48:19 compute-0 ceph-mon[76304]: osdmap e285: 3 total, 3 up, 3 in
Feb 28 10:48:19 compute-0 nova_compute[243452]: 2026-02-28 10:48:19.584 243456 DEBUG nova.storage.rbd_utils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] creating snapshot(snap) on rbd image(4c20447f-84ff-45ca-86c2-e5ad9d598628) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 28 10:48:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb 28 10:48:20 compute-0 ceph-mon[76304]: pgmap v2520: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 47 KiB/s wr, 12 op/s
Feb 28 10:48:20 compute-0 ceph-mon[76304]: osdmap e286: 3 total, 3 up, 3 in
Feb 28 10:48:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb 28 10:48:20 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb 28 10:48:20 compute-0 nova_compute[243452]: 2026-02-28 10:48:20.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 103 op/s
Feb 28 10:48:21 compute-0 ceph-mon[76304]: osdmap e287: 3 total, 3 up, 3 in
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.811 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Snapshot image upload complete
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.812 243456 DEBUG nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.863 243456 INFO nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Shelve offloading
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.869 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.877 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.878 243456 DEBUG nova.compute.manager [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.882 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.883 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:21 compute-0 nova_compute[243452]: 2026-02-28 10:48:21.883 243456 DEBUG nova.network.neutron [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:48:22 compute-0 sshd-session[381738]: Received disconnect from 103.67.78.202 port 42608:11: Bye Bye [preauth]
Feb 28 10:48:22 compute-0 sshd-session[381738]: Disconnected from authenticating user root 103.67.78.202 port 42608 [preauth]
Feb 28 10:48:22 compute-0 ceph-mon[76304]: pgmap v2523: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 103 op/s
Feb 28 10:48:23 compute-0 nova_compute[243452]: 2026-02-28 10:48:23.256 243456 DEBUG nova.network.neutron [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:23 compute-0 nova_compute[243452]: 2026-02-28 10:48:23.278 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 151 op/s
Feb 28 10:48:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.329 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.330 243456 DEBUG nova.objects.instance [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'resources' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.342 243456 DEBUG nova.virt.libvirt.vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:48:17Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.343 243456 DEBUG nova.network.os_vif_util [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.344 243456 DEBUG nova.network.os_vif_util [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.344 243456 DEBUG os_vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.346 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.347 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12bbec3d-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.351 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.354 243456 INFO os_vif [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')
Feb 28 10:48:24 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.421 243456 DEBUG nova.compute.manager [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG nova.compute.manager [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.422 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:48:24 compute-0 ceph-mon[76304]: pgmap v2524: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 151 op/s
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.714 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting instance files /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.715 243456 INFO nova.virt.libvirt.driver [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deletion of /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del complete
Feb 28 10:48:24 compute-0 sshd-session[381740]: Received disconnect from 103.67.78.202 port 60700:11: Bye Bye [preauth]
Feb 28 10:48:24 compute-0 sshd-session[381740]: Disconnected from authenticating user root 103.67.78.202 port 60700 [preauth]
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.813 243456 INFO nova.scheduler.client.report [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Deleted allocations for instance d47f4919-0816-4363-b2eb-fa6580859e88
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.869 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.870 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:24 compute-0 nova_compute[243452]: 2026-02-28 10:48:24.896 243456 DEBUG oslo_concurrency.processutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.9 MiB/s wr, 145 op/s
Feb 28 10:48:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:48:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949162236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.524 243456 DEBUG oslo_concurrency.processutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.531 243456 DEBUG nova.compute.provider_tree [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.554 243456 DEBUG nova.scheduler.client.report [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.581 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.595 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.595 243456 DEBUG nova.network.neutron [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": null, "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap12bbec3d-25", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3949162236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.751 243456 DEBUG oslo_concurrency.lockutils [req-552db8d8-f5c1-445e-9e13-99afbeba442f req-7b40a3d5-9183-4f6a-932b-abbf24020478 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.768 243456 DEBUG oslo_concurrency.lockutils [None req-14c9716e-8313-47b7-b83c-d48c263f52e0 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:25 compute-0 nova_compute[243452]: 2026-02-28 10:48:25.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:26 compute-0 ceph-mon[76304]: pgmap v2525: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 6.9 MiB/s wr, 145 op/s
Feb 28 10:48:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 143 op/s
Feb 28 10:48:28 compute-0 podman[381784]: 2026-02-28 10:48:28.12492181 +0000 UTC m=+0.056458428 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 10:48:28 compute-0 podman[381783]: 2026-02-28 10:48:28.158978429 +0000 UTC m=+0.091277689 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.500 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.500 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.501 243456 INFO nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Unshelving
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.593 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.593 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.598 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_requests' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.614 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'numa_topology' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:28 compute-0 ceph-mon[76304]: pgmap v2526: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 143 op/s
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.634 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.635 243456 INFO nova.compute.claims [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:48:28 compute-0 nova_compute[243452]: 2026-02-28 10:48:28.749 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb 28 10:48:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb 28 10:48:29 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:48:29
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:48:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:48:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985976038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.347 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.349 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 101 op/s
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.355 243456 DEBUG nova.compute.provider_tree [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.378 243456 DEBUG nova.scheduler.client.report [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.406 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:29 compute-0 nova_compute[243452]: 2026-02-28 10:48:29.652 243456 INFO nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Feb 28 10:48:30 compute-0 ceph-mon[76304]: osdmap e288: 3 total, 3 up, 3 in
Feb 28 10:48:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1985976038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:48:30 compute-0 ceph-mon[76304]: pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 101 op/s
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.171 243456 DEBUG nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.265 243456 DEBUG nova.compute.manager [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.265 243456 DEBUG nova.compute.manager [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.266 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:48:30 compute-0 nova_compute[243452]: 2026-02-28 10:48:30.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:48:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:48:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.738 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275696.7374487, d47f4919-0816-4363-b2eb-fa6580859e88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.739 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Stopped (Lifecycle Event)
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.764 243456 DEBUG nova.compute.manager [None req-2e4e9143-6510-4ce0-9570-e6cff86f335f - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.924 243456 DEBUG nova.network.neutron [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.942 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.944 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.945 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating image(s)
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.981 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.987 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'trusted_certs' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.989 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:31 compute-0 nova_compute[243452]: 2026-02-28 10:48:31.990 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.046 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.076 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.081 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.082 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.303 243456 DEBUG nova.virt.libvirt.imagebackend [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image locations are: [{'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.389 243456 DEBUG nova.virt.libvirt.imagebackend [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Selected location: {'url': 'rbd://8f528268-ea2d-5d7b-af45-49b405fed6de/images/4c20447f-84ff-45ca-86c2-e5ad9d598628/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.390 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] cloning images/4c20447f-84ff-45ca-86c2-e5ad9d598628@snap to None/d47f4919-0816-4363-b2eb-fa6580859e88_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 28 10:48:32 compute-0 ceph-mon[76304]: pgmap v2529: 305 pgs: 305 active+clean; 232 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.534 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "6bf3b1a0a7f7c0688ef1a94c5f209b0729c56432" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.688 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'migration_context' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:32 compute-0 nova_compute[243452]: 2026-02-28 10:48:32.752 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] flattening vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.276 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Image rbd:vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.278 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.278 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Ensure instance console log exists: /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.279 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.280 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.280 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.285 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start _get_guest_xml network_info=[{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:48:14Z,direct_url=<?>,disk_format='raw',id=4c20447f-84ff-45ca-86c2-e5ad9d598628,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-373381639-shelved',owner='d7baef4f72e742e8aa7530d7a586ed2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:48:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.291 243456 WARNING nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.296 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.297 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.303 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.304 243456 DEBUG nova.virt.libvirt.host [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.305 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.306 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-28T10:48:14Z,direct_url=<?>,disk_format='raw',id=4c20447f-84ff-45ca-86c2-e5ad9d598628,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-373381639-shelved',owner='d7baef4f72e742e8aa7530d7a586ed2b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-28T10:48:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.307 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.307 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.308 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.308 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.309 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.309 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.310 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.311 243456 DEBUG nova.virt.hardware [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.311 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.337 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 243 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 803 KiB/s wr, 66 op/s
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.400 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.402 243456 DEBUG nova.network.neutron [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:33 compute-0 nova_compute[243452]: 2026-02-28 10:48:33.428 243456 DEBUG oslo_concurrency.lockutils [req-0f91d331-fb2b-4a07-a7ae-518a9ddfba4a req-02b4ba58-7139-4ebe-94ae-ed563da55566 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:48:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/404527361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.010 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.045 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.051 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.352 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:34 compute-0 ceph-mon[76304]: pgmap v2530: 305 pgs: 305 active+clean; 243 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 803 KiB/s wr, 66 op/s
Feb 28 10:48:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/404527361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:48:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:48:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669057543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.652 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.656 243456 DEBUG nova.virt.libvirt.vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='4c20447f-84ff-45ca-86c2-e5ad9d598628',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw
_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:48:28Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.657 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.659 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.663 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'pci_devices' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.691 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <uuid>d47f4919-0816-4363-b2eb-fa6580859e88</uuid>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <name>instance-00000098</name>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:name>tempest-TestShelveInstance-server-373381639</nova:name>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:48:33</nova:creationTime>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:user uuid="7efc7418904f44aa8c8c9c3e06ac552b">tempest-TestShelveInstance-1186988285-project-member</nova:user>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:project uuid="d7baef4f72e742e8aa7530d7a586ed2b">tempest-TestShelveInstance-1186988285</nova:project>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="4c20447f-84ff-45ca-86c2-e5ad9d598628"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <nova:ports>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <nova:port uuid="12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4">
Feb 28 10:48:34 compute-0 nova_compute[243452]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:         </nova:port>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </nova:ports>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <system>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="serial">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="uuid">d47f4919-0816-4363-b2eb-fa6580859e88</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </system>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <os>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </os>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <features>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </features>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk">
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </source>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/d47f4919-0816-4363-b2eb-fa6580859e88_disk.config">
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </source>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:48:34 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <interface type="ethernet">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <mac address="fa:16:3e:b5:24:ce"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <driver name="vhost" rx_queue_size="512"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <mtu size="1442"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <target dev="tap12bbec3d-25"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </interface>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/console.log" append="off"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <video>
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </video>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <input type="keyboard" bus="usb"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:48:34 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:48:34 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:48:34 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:48:34 compute-0 nova_compute[243452]: </domain>
Feb 28 10:48:34 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.693 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Preparing to wait for external event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.694 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.694 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.695 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.695 243456 DEBUG nova.virt.libvirt.vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='4c20447f-84ff-45ca-86c2-e5ad9d598628',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:47:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member',shelved_at='2026-02-28T10:48:21.812417',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4c20447f-84ff-45ca-86c2-e5ad9d598628'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-28T10:48:28Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.696 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.696 243456 DEBUG nova.network.os_vif_util [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.697 243456 DEBUG os_vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.697 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.698 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.698 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.702 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.702 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12bbec3d-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.703 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12bbec3d-25, col_values=(('external_ids', {'iface-id': '12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:24:ce', 'vm-uuid': 'd47f4919-0816-4363-b2eb-fa6580859e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.704 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:34 compute-0 NetworkManager[49805]: <info>  [1772275714.7062] manager: (tap12bbec3d-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.715 243456 INFO os_vif [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.780 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.781 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.781 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] No VIF found with MAC fa:16:3e:b5:24:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.782 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Using config drive
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.817 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.840 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'ec2_ids' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:34 compute-0 nova_compute[243452]: 2026-02-28 10:48:34.885 243456 DEBUG nova.objects.instance [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'keypairs' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 97 op/s
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.373 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Creating config drive at /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.382 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqjj1bdu5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:35 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1669057543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.533 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqjj1bdu5" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.575 243456 DEBUG nova.storage.rbd_utils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] rbd image d47f4919-0816-4363-b2eb-fa6580859e88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.580 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.729 243456 DEBUG oslo_concurrency.processutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config d47f4919-0816-4363-b2eb-fa6580859e88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.730 243456 INFO nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting local config drive /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88/disk.config because it was imported into RBD.
Feb 28 10:48:35 compute-0 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 10:48:35 compute-0 NetworkManager[49805]: <info>  [1772275715.7957] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/672)
Feb 28 10:48:35 compute-0 ovn_controller[146846]: 2026-02-28T10:48:35Z|01599|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 10:48:35 compute-0 ovn_controller[146846]: 2026-02-28T10:48:35Z|01600|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.796 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.804 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:35 compute-0 ovn_controller[146846]: 2026-02-28T10:48:35Z|01601|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 10:48:35 compute-0 ovn_controller[146846]: 2026-02-28T10:48:35Z|01602|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.807 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab bound to our chassis
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.809 156681 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.824 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[db86a416-d541-49a9-957c-0a281c684fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.825 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap978ebc43-71 in ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.828 250787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap978ebc43-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.828 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[caf1ee53-78e4-4be5-84c8-4d96701fcef4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.829 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[3432e102-7d66-4540-84be-8a628e013c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 systemd-udevd[382202]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.839 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb831f8-e143-4cf2-bf12-753a5b0e6143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 NetworkManager[49805]: <info>  [1772275715.8474] device (tap12bbec3d-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 28 10:48:35 compute-0 NetworkManager[49805]: <info>  [1772275715.8484] device (tap12bbec3d-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 28 10:48:35 compute-0 systemd-machined[209480]: New machine qemu-186-instance-00000098.
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.852 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[8e10690a-14d8-4172-9403-59362ef3b9ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.887 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[5a48075c-4353-4b06-98c7-1b71c6becfa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[19e8aceb-0958-4555-920f-1f3d91ea919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 NetworkManager[49805]: <info>  [1772275715.8937] manager: (tap978ebc43-70): new Veth device (/org/freedesktop/NetworkManager/Devices/673)
Feb 28 10:48:35 compute-0 systemd-udevd[382207]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.931 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7f8d0b-37eb-4375-88f5-7a8fb1af1bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.934 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[58855554-8b3b-4207-8011-a4f188607f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 NetworkManager[49805]: <info>  [1772275715.9588] device (tap978ebc43-70): carrier: link connected
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.964 250887 DEBUG oslo.privsep.daemon [-] privsep: reply[d963608b-9b2b-4143-ab03-9017e1a3870d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG nova.compute.manager [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.980 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.981 243456 DEBUG oslo_concurrency.lockutils [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:35 compute-0 nova_compute[243452]: 2026-02-28 10:48:35.981 243456 DEBUG nova.compute.manager [req-dd4f55ee-0eda-4815-b277-dc31102d93a5 req-fe4f7546-7727-4244-9cb7-c8dbef2e5e38 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Processing event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.981 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[90cf72f1-ec84-4b9f-b03c-b40f0e035dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722324, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382235, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:35.998 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[6af9bddd-c1b5-414c-934e-f8447056449e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:8571'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722324, 'tstamp': 722324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382236, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.019 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c4438c-a82b-4946-9965-e2e45e1ce9c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap978ebc43-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:85:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722324, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382237, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.061 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba423f4-63c6-4989-8ae6-349ff8d5927b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.065 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.124 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[f729fd00-51fb-48bd-83c4-77053e28f0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.126 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.126 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.127 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap978ebc43-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:36 compute-0 NetworkManager[49805]: <info>  [1772275716.1296] manager: (tap978ebc43-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Feb 28 10:48:36 compute-0 kernel: tap978ebc43-70: entered promiscuous mode
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.131 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap978ebc43-70, col_values=(('external_ids', {'iface-id': 'a5bdb09f-93f0-411c-9d75-fc368a22a5f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:36 compute-0 ovn_controller[146846]: 2026-02-28T10:48:36Z|01603|binding|INFO|Releasing lport a5bdb09f-93f0-411c-9d75-fc368a22a5f6 from this chassis (sb_readonly=0)
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.144 156681 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.146 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaa7f80-e980-450d-be6d-96d77d4db020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.147 156681 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: global
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     log         /dev/log local0 debug
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     log-tag     haproxy-metadata-proxy-978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     user        root
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     group       root
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     maxconn     1024
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     pidfile     /var/lib/neutron/external/pids/978ebc43-7003-4100-92ba-e083df3fe8ab.pid.haproxy
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     daemon
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: defaults
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     log global
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     mode http
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     option httplog
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     option dontlognull
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     option http-server-close
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     option forwardfor
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     retries                 3
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     timeout http-request    30s
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     timeout connect         30s
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     timeout client          32s
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     timeout server          32s
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     timeout http-keep-alive 30s
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: listen listener
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     bind 169.254.169.254:80
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     server metadata /var/lib/neutron/metadata_proxy
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:     http-request add-header X-OVN-Network-ID 978ebc43-7003-4100-92ba-e083df3fe8ab
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.142 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.147 156681 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'env', 'PROCESS_TAG=haproxy-978ebc43-7003-4100-92ba-e083df3fe8ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/978ebc43-7003-4100-92ba-e083df3fe8ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 28 10:48:36 compute-0 ceph-mon[76304]: pgmap v2531: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 97 op/s
Feb 28 10:48:36 compute-0 podman[382270]: 2026-02-28 10:48:36.537213443 +0000 UTC m=+0.064796415 container create ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 28 10:48:36 compute-0 systemd[1]: Started libpod-conmon-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope.
Feb 28 10:48:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85896b15e407a783697a1b7680c2b0fcf7d8c767aa52c496ab5f36da73752f0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:36 compute-0 podman[382270]: 2026-02-28 10:48:36.497980776 +0000 UTC m=+0.025563758 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 28 10:48:36 compute-0 podman[382270]: 2026-02-28 10:48:36.596351386 +0000 UTC m=+0.123934398 container init ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:48:36 compute-0 podman[382270]: 2026-02-28 10:48:36.603044587 +0000 UTC m=+0.130627549 container start ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:48:36 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : New worker (382332) forked
Feb 28 10:48:36 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : Loading success.
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.647 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.646727, d47f4919-0816-4363-b2eb-fa6580859e88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.648 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Started (Lifecycle Event)
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.651 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.657 243456 DEBUG nova.virt.libvirt.driver [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.662 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance spawned successfully.
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.674 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.679 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:48:36 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:36.684 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.706 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.707 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.646968, d47f4919-0816-4363-b2eb-fa6580859e88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.707 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Paused (Lifecycle Event)
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.727 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.732 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275716.656777, d47f4919-0816-4363-b2eb-fa6580859e88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.732 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Resumed (Lifecycle Event)
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.752 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.757 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:48:36 compute-0 nova_compute[243452]: 2026-02-28 10:48:36.785 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:48:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 304 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 96 op/s
Feb 28 10:48:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb 28 10:48:37 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb 28 10:48:37 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb 28 10:48:37 compute-0 nova_compute[243452]: 2026-02-28 10:48:37.788 243456 DEBUG nova.compute.manager [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:48:37 compute-0 nova_compute[243452]: 2026-02-28 10:48:37.876 243456 DEBUG oslo_concurrency.lockutils [None req-5a774187-f428-4037-8759-cf203bbda1e7 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.077 243456 DEBUG nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.077 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG oslo_concurrency.lockutils [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.078 243456 DEBUG nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:38 compute-0 nova_compute[243452]: 2026-02-28 10:48:38.079 243456 WARNING nova.compute.manager [req-6825f62a-bf45-4522-a425-6486c8a54317 req-8aaedf1f-cf5d-4dff-a6df-10c99247dc76 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state None.
Feb 28 10:48:38 compute-0 ceph-mon[76304]: pgmap v2532: 305 pgs: 305 active+clean; 304 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 96 op/s
Feb 28 10:48:38 compute-0 ceph-mon[76304]: osdmap e289: 3 total, 3 up, 3 in
Feb 28 10:48:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 115 op/s
Feb 28 10:48:39 compute-0 nova_compute[243452]: 2026-02-28 10:48:39.706 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:40 compute-0 ceph-mon[76304]: pgmap v2534: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 115 op/s
Feb 28 10:48:40 compute-0 nova_compute[243452]: 2026-02-28 10:48:40.903 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007758369093071074 of space, bias 1.0, pg target 0.23275107279213222 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493925902529403 of space, bias 1.0, pg target 0.7481777707588209 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:48:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:48:41 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:41.686 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:42 compute-0 ceph-mon[76304]: pgmap v2535: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Feb 28 10:48:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Feb 28 10:48:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb 28 10:48:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb 28 10:48:44 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb 28 10:48:44 compute-0 ceph-mon[76304]: pgmap v2536: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Feb 28 10:48:44 compute-0 ceph-mon[76304]: osdmap e290: 3 total, 3 up, 3 in
Feb 28 10:48:44 compute-0 nova_compute[243452]: 2026-02-28 10:48:44.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 475 KiB/s wr, 158 op/s
Feb 28 10:48:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:48:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:48:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:48:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:48:45 compute-0 nova_compute[243452]: 2026-02-28 10:48:45.904 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:46 compute-0 ceph-mon[76304]: pgmap v2538: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 475 KiB/s wr, 158 op/s
Feb 28 10:48:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:48:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1579559209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:48:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 383 KiB/s wr, 127 op/s
Feb 28 10:48:47 compute-0 sudo[382341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:48:47 compute-0 sudo[382341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:47 compute-0 sudo[382341]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:48 compute-0 sudo[382366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:48:48 compute-0 sudo[382366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:48 compute-0 ovn_controller[146846]: 2026-02-28T10:48:48Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:48:48 compute-0 ceph-mon[76304]: pgmap v2539: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 383 KiB/s wr, 127 op/s
Feb 28 10:48:48 compute-0 sudo[382366]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:48:48 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:48:48 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:48:48 compute-0 sudo[382421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:48:48 compute-0 sudo[382421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:48 compute-0 sudo[382421]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:48 compute-0 sudo[382446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:48:48 compute-0 sudo[382446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.119541601 +0000 UTC m=+0.021680588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.242260274 +0000 UTC m=+0.144399271 container create 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:48:49 compute-0 nova_compute[243452]: 2026-02-28 10:48:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 102 op/s
Feb 28 10:48:49 compute-0 systemd[1]: Started libpod-conmon-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope.
Feb 28 10:48:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.479800625 +0000 UTC m=+0.381939612 container init 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.487877615 +0000 UTC m=+0.390016582 container start 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.492795725 +0000 UTC m=+0.394934702 container attach 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:48:49 compute-0 strange_hellman[382499]: 167 167
Feb 28 10:48:49 compute-0 systemd[1]: libpod-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope: Deactivated successfully.
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.495603535 +0000 UTC m=+0.397742542 container died 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:48:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:48:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4e7d616f5ac3bcfd783729476b04fb58542d2506ade7b604faf8b47e270bd3e-merged.mount: Deactivated successfully.
Feb 28 10:48:49 compute-0 podman[382483]: 2026-02-28 10:48:49.55725013 +0000 UTC m=+0.459389147 container remove 5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:48:49 compute-0 systemd[1]: libpod-conmon-5eaa5c90f33a466c7155172032cb1ad37b42bbdfed8860af5448a36d04347259.scope: Deactivated successfully.
Feb 28 10:48:49 compute-0 nova_compute[243452]: 2026-02-28 10:48:49.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:49 compute-0 podman[382524]: 2026-02-28 10:48:49.814305789 +0000 UTC m=+0.136806838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:49 compute-0 podman[382524]: 2026-02-28 10:48:49.985472125 +0000 UTC m=+0.307973154 container create 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:48:50 compute-0 systemd[1]: Started libpod-conmon-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope.
Feb 28 10:48:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:50 compute-0 podman[382524]: 2026-02-28 10:48:50.16785116 +0000 UTC m=+0.490352239 container init 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 10:48:50 compute-0 podman[382524]: 2026-02-28 10:48:50.17876259 +0000 UTC m=+0.501263679 container start 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:48:50 compute-0 podman[382524]: 2026-02-28 10:48:50.194644322 +0000 UTC m=+0.517145441 container attach 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:48:50 compute-0 nova_compute[243452]: 2026-02-28 10:48:50.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:50 compute-0 nova_compute[243452]: 2026-02-28 10:48:50.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:48:50 compute-0 upbeat_lehmann[382541]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:48:50 compute-0 upbeat_lehmann[382541]: --> All data devices are unavailable
Feb 28 10:48:50 compute-0 systemd[1]: libpod-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope: Deactivated successfully.
Feb 28 10:48:50 compute-0 podman[382524]: 2026-02-28 10:48:50.680935634 +0000 UTC m=+1.003436663 container died 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:48:50 compute-0 ceph-mon[76304]: pgmap v2540: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 102 op/s
Feb 28 10:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-68cf4f8140c70baca7634d253839695840f0bb1c2a53d5a8c15f245de7a7f978-merged.mount: Deactivated successfully.
Feb 28 10:48:50 compute-0 podman[382524]: 2026-02-28 10:48:50.729863997 +0000 UTC m=+1.052365056 container remove 8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_lehmann, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:48:50 compute-0 systemd[1]: libpod-conmon-8cb49c26115b91aa4bec092a6fbe21c5e24c0eb2b64ac5a90d1434372f72abf5.scope: Deactivated successfully.
Feb 28 10:48:50 compute-0 sudo[382446]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:50 compute-0 sudo[382574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:48:50 compute-0 sudo[382574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:50 compute-0 sudo[382574]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:50 compute-0 nova_compute[243452]: 2026-02-28 10:48:50.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:50 compute-0 sudo[382599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:48:50 compute-0 sudo[382599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.261770276 +0000 UTC m=+0.051803826 container create b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:48:51 compute-0 systemd[1]: Started libpod-conmon-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope.
Feb 28 10:48:51 compute-0 nova_compute[243452]: 2026-02-28 10:48:51.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.236235489 +0000 UTC m=+0.026269069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.351331985 +0000 UTC m=+0.141365575 container init b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.358160979 +0000 UTC m=+0.148194499 container start b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.362247226 +0000 UTC m=+0.152280836 container attach b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:48:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 16 KiB/s wr, 47 op/s
Feb 28 10:48:51 compute-0 admiring_wing[382652]: 167 167
Feb 28 10:48:51 compute-0 systemd[1]: libpod-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope: Deactivated successfully.
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.368294298 +0000 UTC m=+0.158327848 container died b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff6934d4aaea5ccbe64d40054ac9883badfd459622af257be30e97683364e2f-merged.mount: Deactivated successfully.
Feb 28 10:48:51 compute-0 podman[382636]: 2026-02-28 10:48:51.415242784 +0000 UTC m=+0.205276304 container remove b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:48:51 compute-0 systemd[1]: libpod-conmon-b979d0e3a9f7e3402f8f726f7b7d8997bc6bf64ba204d10e6949ef3859ae6511.scope: Deactivated successfully.
Feb 28 10:48:51 compute-0 podman[382675]: 2026-02-28 10:48:51.582634059 +0000 UTC m=+0.057406165 container create b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:48:51 compute-0 systemd[1]: Started libpod-conmon-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope.
Feb 28 10:48:51 compute-0 podman[382675]: 2026-02-28 10:48:51.558716178 +0000 UTC m=+0.033488294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:51 compute-0 podman[382675]: 2026-02-28 10:48:51.701834392 +0000 UTC m=+0.176606538 container init b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:48:51 compute-0 podman[382675]: 2026-02-28 10:48:51.717248751 +0000 UTC m=+0.192020837 container start b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:48:51 compute-0 podman[382675]: 2026-02-28 10:48:51.724175508 +0000 UTC m=+0.198947604 container attach b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]: {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     "0": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "devices": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "/dev/loop3"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             ],
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_name": "ceph_lv0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_size": "21470642176",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "name": "ceph_lv0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "tags": {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_name": "ceph",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.crush_device_class": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.encrypted": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.objectstore": "bluestore",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_id": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.vdo": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.with_tpm": "0"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             },
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "vg_name": "ceph_vg0"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         }
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     ],
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     "1": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "devices": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "/dev/loop4"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             ],
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_name": "ceph_lv1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_size": "21470642176",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "name": "ceph_lv1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "tags": {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_name": "ceph",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.crush_device_class": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.encrypted": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.objectstore": "bluestore",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_id": "1",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.vdo": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.with_tpm": "0"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             },
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "vg_name": "ceph_vg1"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         }
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     ],
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     "2": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "devices": [
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "/dev/loop5"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             ],
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_name": "ceph_lv2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_size": "21470642176",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "name": "ceph_lv2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "tags": {
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.cluster_name": "ceph",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.crush_device_class": "",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.encrypted": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.objectstore": "bluestore",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osd_id": "2",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.vdo": "0",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:                 "ceph.with_tpm": "0"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             },
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "type": "block",
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:             "vg_name": "ceph_vg2"
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:         }
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]:     ]
Feb 28 10:48:51 compute-0 adoring_visvesvaraya[382692]: }
Feb 28 10:48:52 compute-0 systemd[1]: libpod-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope: Deactivated successfully.
Feb 28 10:48:52 compute-0 conmon[382692]: conmon b9546f99a6d14cd5816b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope/container/memory.events
Feb 28 10:48:52 compute-0 podman[382675]: 2026-02-28 10:48:52.033963365 +0000 UTC m=+0.508735471 container died b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb57497a274d15ef4a64dbe1085802800822f4f1313841d4c9d1916a361ce34-merged.mount: Deactivated successfully.
Feb 28 10:48:52 compute-0 podman[382675]: 2026-02-28 10:48:52.087274033 +0000 UTC m=+0.562046109 container remove b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:48:52 compute-0 systemd[1]: libpod-conmon-b9546f99a6d14cd5816bc18101ecb1d156e4e293a1c506b83ee4bfc8105370a7.scope: Deactivated successfully.
Feb 28 10:48:52 compute-0 sudo[382599]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:52 compute-0 sudo[382713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:48:52 compute-0 sudo[382713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:52 compute-0 sudo[382713]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:52 compute-0 sudo[382738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:48:52 compute-0 sudo[382738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:52 compute-0 nova_compute[243452]: 2026-02-28 10:48:52.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.617611828 +0000 UTC m=+0.063869869 container create 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:48:52 compute-0 systemd[1]: Started libpod-conmon-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope.
Feb 28 10:48:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.589348904 +0000 UTC m=+0.035607035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.701851906 +0000 UTC m=+0.148109967 container init 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:48:52 compute-0 ceph-mon[76304]: pgmap v2541: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 576 KiB/s rd, 16 KiB/s wr, 47 op/s
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.712281483 +0000 UTC m=+0.158539514 container start 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.715902726 +0000 UTC m=+0.162160797 container attach 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:48:52 compute-0 quizzical_liskov[382793]: 167 167
Feb 28 10:48:52 compute-0 systemd[1]: libpod-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope: Deactivated successfully.
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.721061183 +0000 UTC m=+0.167319264 container died 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f7f7e67096ab62ba36cbc36e6424fb97f6a816be829582fc2705e66c5c6c07-merged.mount: Deactivated successfully.
Feb 28 10:48:52 compute-0 podman[382777]: 2026-02-28 10:48:52.766678801 +0000 UTC m=+0.212936842 container remove 8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:48:52 compute-0 systemd[1]: libpod-conmon-8c37b538c6b67eca7b59396d5fa43322ec1cb3b7a428fbc0b075f48dc9d0fb77.scope: Deactivated successfully.
Feb 28 10:48:52 compute-0 podman[382817]: 2026-02-28 10:48:52.929919008 +0000 UTC m=+0.039486335 container create 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:48:52 compute-0 systemd[1]: Started libpod-conmon-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope.
Feb 28 10:48:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:48:53 compute-0 podman[382817]: 2026-02-28 10:48:52.914761736 +0000 UTC m=+0.024329083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:48:53 compute-0 podman[382817]: 2026-02-28 10:48:53.057739116 +0000 UTC m=+0.167306523 container init 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:48:53 compute-0 podman[382817]: 2026-02-28 10:48:53.066207937 +0000 UTC m=+0.175775304 container start 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:48:53 compute-0 podman[382817]: 2026-02-28 10:48:53.069886722 +0000 UTC m=+0.179454089 container attach 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:48:53 compute-0 nova_compute[243452]: 2026-02-28 10:48:53.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 18 KiB/s wr, 53 op/s
Feb 28 10:48:53 compute-0 lvm[382912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:48:53 compute-0 lvm[382912]: VG ceph_vg1 finished
Feb 28 10:48:53 compute-0 lvm[382911]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:48:53 compute-0 lvm[382911]: VG ceph_vg0 finished
Feb 28 10:48:53 compute-0 lvm[382914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:48:53 compute-0 lvm[382914]: VG ceph_vg2 finished
Feb 28 10:48:53 compute-0 lvm[382915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:48:53 compute-0 lvm[382915]: VG ceph_vg0 finished
Feb 28 10:48:53 compute-0 vigorous_mcnulty[382833]: {}
Feb 28 10:48:53 compute-0 systemd[1]: libpod-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Deactivated successfully.
Feb 28 10:48:53 compute-0 systemd[1]: libpod-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Consumed 1.421s CPU time.
Feb 28 10:48:53 compute-0 podman[382817]: 2026-02-28 10:48:53.960404069 +0000 UTC m=+1.069971396 container died 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcd769865de1b0e9ac2a42bc3858f61e3e42b8526b12c1167b1b46c1c6841164-merged.mount: Deactivated successfully.
Feb 28 10:48:54 compute-0 podman[382817]: 2026-02-28 10:48:54.006255384 +0000 UTC m=+1.115822711 container remove 38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcnulty, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 28 10:48:54 compute-0 systemd[1]: libpod-conmon-38ffbf4bd94c4c2a65af01199e0b7cf3e3a455d1e523e3a96e76b795fc595be6.scope: Deactivated successfully.
Feb 28 10:48:54 compute-0 sudo[382738]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:48:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:48:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.084676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734084752, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2088, "num_deletes": 253, "total_data_size": 3445100, "memory_usage": 3501536, "flush_reason": "Manual Compaction"}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Feb 28 10:48:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734103086, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 3376007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52026, "largest_seqno": 54113, "table_properties": {"data_size": 3366391, "index_size": 6110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19498, "raw_average_key_size": 20, "raw_value_size": 3347259, "raw_average_value_size": 3494, "num_data_blocks": 270, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275517, "oldest_key_time": 1772275517, "file_creation_time": 1772275734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 19164 microseconds, and 8222 cpu microseconds.
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.103131) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 3376007 bytes OK
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.103882) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105753) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105771) EVENT_LOG_v1 {"time_micros": 1772275734105765, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.105798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3436327, prev total WAL file size 3436327, number of live WAL files 2.
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.106776) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(3296KB)], [122(9637KB)]
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734106832, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 13244619, "oldest_snapshot_seqno": -1}
Feb 28 10:48:54 compute-0 sudo[382930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:48:54 compute-0 sudo[382930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:48:54 compute-0 sudo[382930]: pam_unix(sudo:session): session closed for user root
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7615 keys, 11507710 bytes, temperature: kUnknown
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734233491, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 11507710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11455461, "index_size": 32105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 197600, "raw_average_key_size": 25, "raw_value_size": 11318445, "raw_average_value_size": 1486, "num_data_blocks": 1259, "num_entries": 7615, "num_filter_entries": 7615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.234974) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11507710 bytes
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.236578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.6 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8137, records dropped: 522 output_compression: NoCompression
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.236595) EVENT_LOG_v1 {"time_micros": 1772275734236587, "job": 74, "event": "compaction_finished", "compaction_time_micros": 127853, "compaction_time_cpu_micros": 32847, "output_level": 6, "num_output_files": 1, "total_output_size": 11507710, "num_input_records": 8137, "num_output_records": 7615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734237060, "job": 74, "event": "table_file_deletion", "file_number": 124}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275734238039, "job": 74, "event": "table_file_deletion", "file_number": 122}
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.106669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:48:54.238144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:48:54 compute-0 ovn_controller[146846]: 2026-02-28T10:48:54Z|01604|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Feb 28 10:48:54 compute-0 ceph-mon[76304]: pgmap v2542: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 647 KiB/s rd, 18 KiB/s wr, 53 op/s
Feb 28 10:48:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:48:54 compute-0 nova_compute[243452]: 2026-02-28 10:48:54.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:55 compute-0 nova_compute[243452]: 2026-02-28 10:48:55.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 635 KiB/s rd, 16 KiB/s wr, 48 op/s
Feb 28 10:48:55 compute-0 nova_compute[243452]: 2026-02-28 10:48:55.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.382 243456 DEBUG nova.compute.manager [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.383 243456 DEBUG nova.compute.manager [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing instance network info cache due to event network-changed-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.384 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.384 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.385 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Refreshing network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.445 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.446 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.447 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.447 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.448 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.450 243456 INFO nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Terminating instance
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.451 243456 DEBUG nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:48:56 compute-0 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 10:48:56 compute-0 NetworkManager[49805]: <info>  [1772275736.5099] device (tap12bbec3d-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01605|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01606|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01607|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.520 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.524 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.527 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.528 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.529 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.532 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[65dd171d-0ffa-4056-b4be-b0dd2b4ab264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.533 156681 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab namespace which is not needed anymore
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 28 10:48:56 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 13.530s CPU time.
Feb 28 10:48:56 compute-0 systemd-machined[209480]: Machine qemu-186-instance-00000098 terminated.
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : haproxy version is 2.8.14-c23fe91
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [NOTICE]   (382329) : path to executable is /usr/sbin/haproxy
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : Exiting Master process...
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : Exiting Master process...
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [ALERT]    (382329) : Current worker (382332) exited with code 143 (Terminated)
Feb 28 10:48:56 compute-0 neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab[382324]: [WARNING]  (382329) : All workers exited. Exiting... (0)
Feb 28 10:48:56 compute-0 systemd[1]: libpod-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope: Deactivated successfully.
Feb 28 10:48:56 compute-0 podman[382977]: 2026-02-28 10:48:56.671589038 +0000 UTC m=+0.049583932 container died ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 10:48:56 compute-0 NetworkManager[49805]: <info>  [1772275736.6808] manager: (tap12bbec3d-25): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Feb 28 10:48:56 compute-0 kernel: tap12bbec3d-25: entered promiscuous mode
Feb 28 10:48:56 compute-0 systemd-udevd[382910]: Network interface NamePolicy= disabled on kernel command line.
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.682 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01608|binding|INFO|Claiming lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for this chassis.
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01609|binding|INFO|12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4: Claiming fa:16:3e:b5:24:ce 10.100.0.13
Feb 28 10:48:56 compute-0 kernel: tap12bbec3d-25 (unregistering): left promiscuous mode
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.694 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01610|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 ovn-installed in OVS
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01611|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 up in Southbound
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01612|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=1)
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01613|if_status|INFO|Dropped 5 log messages in last 432 seconds (most recently, 432 seconds ago) due to excessive rate
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01614|if_status|INFO|Not setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down as sb is readonly
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01615|binding|INFO|Removing iface tap12bbec3d-25 ovn-installed in OVS
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01616|binding|INFO|Releasing lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 from this chassis (sb_readonly=0)
Feb 28 10:48:56 compute-0 ovn_controller[146846]: 2026-02-28T10:48:56Z|01617|binding|INFO|Setting lport 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 down in Southbound
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.709 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3-userdata-shm.mount: Deactivated successfully.
Feb 28 10:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-85896b15e407a783697a1b7680c2b0fcf7d8c767aa52c496ab5f36da73752f0c-merged.mount: Deactivated successfully.
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.720 243456 INFO nova.virt.libvirt.driver [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Instance destroyed successfully.
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.721 243456 DEBUG nova.objects.instance [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lazy-loading 'resources' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:56 compute-0 ceph-mon[76304]: pgmap v2543: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 635 KiB/s rd, 16 KiB/s wr, 48 op/s
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.723 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:24:ce 10.100.0.13'], port_security=['fa:16:3e:b5:24:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd47f4919-0816-4363-b2eb-fa6580859e88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-978ebc43-7003-4100-92ba-e083df3fe8ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7baef4f72e742e8aa7530d7a586ed2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '97c43427-956a-4c4e-a592-053a957c2802', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2014e61-1555-4c0f-9d39-804f817029ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>], logical_port=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc619611dc0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:48:56 compute-0 podman[382977]: 2026-02-28 10:48:56.724870175 +0000 UTC m=+0.102865049 container cleanup ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.735 243456 DEBUG nova.virt.libvirt.vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-28T10:47:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-373381639',display_name='tempest-TestShelveInstance-server-373381639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-373381639',id=152,image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWK8NgpJq/O/Qf/cy3N8Z6KNy8MplNJlX7P39bqx+h9ho+aFNpLd6ovt9CyKQamqSZ1B8mN5tm9M+MnUX6czgs4K4I+aH3/UX/Gbp6WhW0TY9K8biYJUX8WuIDhtwQ8Cw==',key_name='tempest-TestShelveInstance-1745923104',keypairs=<?>,launch_index=0,launched_at=2026-02-28T10:48:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d7baef4f72e742e8aa7530d7a586ed2b',ramdisk_id='',reservation_id='r-4eojppig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='f3755b31-d5fb-4f1b-9b58-6260fd65fc72',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1186988285',owner_user_name='tempest-TestShelveInstance-1186988285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-28T10:48:37Z,user_data=None,user_id='7efc7418904f44aa8c8c9c3e06ac552b',uuid=d47f4919-0816-4363-b2eb-fa6580859e88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.736 243456 DEBUG nova.network.os_vif_util [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converting VIF {"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.737 243456 DEBUG nova.network.os_vif_util [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.737 243456 DEBUG os_vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 28 10:48:56 compute-0 systemd[1]: libpod-conmon-ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3.scope: Deactivated successfully.
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.739 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.739 243456 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12bbec3d-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.746 243456 INFO os_vif [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:24:ce,bridge_name='br-int',has_traffic_filtering=True,id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4,network=Network(978ebc43-7003-4100-92ba-e083df3fe8ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12bbec3d-25')
Feb 28 10:48:56 compute-0 podman[383011]: 2026-02-28 10:48:56.814500096 +0000 UTC m=+0.060789401 container remove ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.822 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe885648-f48a-4d75-a0c3-27cddebc691c]: (4, ('Sat Feb 28 10:48:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3)\nceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3\nSat Feb 28 10:48:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab (ceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3)\nceb0b2787cf2f0289ae0ed820a5b559d2748baf745b14492b0c6c9b339b030f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.825 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[bb711169-e078-4d5b-9f6b-d47286cdfa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.827 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap978ebc43-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:48:56 compute-0 kernel: tap978ebc43-70: left promiscuous mode
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 nova_compute[243452]: 2026-02-28 10:48:56.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.839 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[42891087-9c31-43be-baf5-664941958fed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.859 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[47b99e47-7961-49d1-88a9-3b454f44ddef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.861 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[93571f51-7bde-49bb-b121-bb93f67b90e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.881 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[58db5f30-56e0-43dc-9833-f847ee195b60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722316, 'reachable_time': 20155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383044, 'error': None, 'target': 'ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d978ebc43\x2d7003\x2d4100\x2d92ba\x2de083df3fe8ab.mount: Deactivated successfully.
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.885 157319 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-978ebc43-7003-4100-92ba-e083df3fe8ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.886 157319 DEBUG oslo.privsep.daemon [-] privsep: reply[0108e0bb-4cbb-4aae-a95f-4f38bd4fc6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.888 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.889 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.890 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d4c63e-a8fd-4322-938e-503175e8ac5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.891 156681 INFO neutron.agent.ovn.metadata.agent [-] Port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 in datapath 978ebc43-7003-4100-92ba-e083df3fe8ab unbound from our chassis
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.892 156681 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 978ebc43-7003-4100-92ba-e083df3fe8ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 28 10:48:56 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:56.892 250787 DEBUG oslo.privsep.daemon [-] privsep: reply[2a091eeb-14bc-4352-94db-0873f14fb216]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.039 243456 INFO nova.virt.libvirt.driver [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deleting instance files /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.040 243456 INFO nova.virt.libvirt.driver [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deletion of /var/lib/nova/instances/d47f4919-0816-4363-b2eb-fa6580859e88_del complete
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 INFO nova.compute.manager [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 0.63 seconds to destroy the instance on the hypervisor.
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 DEBUG oslo.service.loopingcall [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.087 243456 DEBUG nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:48:57 compute-0 nova_compute[243452]: 2026-02-28 10:48:57.088 243456 DEBUG nova.network.neutron [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:48:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 23 KiB/s wr, 49 op/s
Feb 28 10:48:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:48:57.892 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.519 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.519 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-unplugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.520 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 WARNING nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state deleting.
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Acquiring lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.521 243456 DEBUG oslo_concurrency.lockutils [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.522 243456 DEBUG nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] No waiting events found dispatching network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.522 243456 WARNING nova.compute.manager [req-dfc4560d-3d0a-4841-9caf-b1542d496470 req-dfba2381-f3a6-4314-84a2-e62440099465 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received unexpected event network-vif-plugged-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 for instance with vm_state active and task_state deleting.
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.649 243456 DEBUG nova.network.neutron [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.725 243456 DEBUG nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Received event network-vif-deleted-12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.726 243456 INFO nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Neutron deleted interface 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4; detaching it from the instance and deleting it from the info cache
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.726 243456 DEBUG nova.network.neutron [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.730 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Took 1.64 seconds to deallocate network for instance.
Feb 28 10:48:58 compute-0 ceph-mon[76304]: pgmap v2544: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 23 KiB/s wr, 49 op/s
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.746 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated VIF entry in instance network info cache for port 12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.747 243456 DEBUG nova.network.neutron [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [{"id": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "address": "fa:16:3e:b5:24:ce", "network": {"id": "978ebc43-7003-4100-92ba-e083df3fe8ab", "bridge": "br-int", "label": "tempest-TestShelveInstance-1715980458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d7baef4f72e742e8aa7530d7a586ed2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12bbec3d-25", "ovs_interfaceid": "12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.833 243456 DEBUG nova.compute.manager [req-45092eb5-ebcc-47de-bef0-e5861d257da7 req-be5fdd6b-572e-4bc9-aa27-9c6c67a8bc81 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Detach interface failed, port_id=12bbec3d-2577-4d20-8f9c-e93b1b2ae1e4, reason: Instance d47f4919-0816-4363-b2eb-fa6580859e88 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.865 243456 DEBUG oslo_concurrency.lockutils [req-8d090653-12db-4994-a91c-3167c9e5cf7f req-baaee5da-718c-4c4d-a2b7-151796d9fa87 4eea6738cacb4e0eac4facaba05c81fb c82e2e65a98541158f389f0d87d2945f - - default default] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquired lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.867 243456 DEBUG nova.objects.instance [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lazy-loading 'info_cache' on Instance uuid d47f4919-0816-4363-b2eb-fa6580859e88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.869 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:48:58 compute-0 nova_compute[243452]: 2026-02-28 10:48:58.869 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:48:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:48:59 compute-0 podman[383049]: 2026-02-28 10:48:59.130499637 +0000 UTC m=+0.061492771 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 28 10:48:59 compute-0 podman[383048]: 2026-02-28 10:48:59.178278837 +0000 UTC m=+0.114223022 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:48:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 215 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 24 KiB/s wr, 63 op/s
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.604 243456 DEBUG nova.network.neutron [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Releasing lock "refresh_cache-d47f4919-0816-4363-b2eb-fa6580859e88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.619 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:48:59 compute-0 nova_compute[243452]: 2026-02-28 10:48:59.647 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.141 243456 DEBUG oslo_concurrency.processutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:49:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678361634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.719 243456 DEBUG oslo_concurrency.processutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.726 243456 DEBUG nova.compute.provider_tree [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:49:00 compute-0 ceph-mon[76304]: pgmap v2545: 305 pgs: 305 active+clean; 215 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 24 KiB/s wr, 63 op/s
Feb 28 10:49:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1678361634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.743 243456 DEBUG nova.scheduler.client.report [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.764 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.767 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.768 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:00 compute-0 nova_compute[243452]: 2026-02-28 10:49:00.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.178 243456 INFO nova.scheduler.client.report [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Deleted allocations for instance d47f4919-0816-4363-b2eb-fa6580859e88
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.261 243456 DEBUG oslo_concurrency.lockutils [None req-4de55241-62c6-4c30-b5a9-623da7da62f4 7efc7418904f44aa8c8c9c3e06ac552b d7baef4f72e742e8aa7530d7a586ed2b - - default default] Lock "d47f4919-0816-4363-b2eb-fa6580859e88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:49:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226271041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.330 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 24 KiB/s wr, 70 op/s
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.485 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.487 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.95295037794858GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.488 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.489 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.557 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.557 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.586 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:01 compute-0 nova_compute[243452]: 2026-02-28 10:49:01.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4226271041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:49:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441047511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:02 compute-0 nova_compute[243452]: 2026-02-28 10:49:02.113 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:02 compute-0 nova_compute[243452]: 2026-02-28 10:49:02.119 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:49:02 compute-0 nova_compute[243452]: 2026-02-28 10:49:02.142 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:49:02 compute-0 nova_compute[243452]: 2026-02-28 10:49:02.177 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:49:02 compute-0 nova_compute[243452]: 2026-02-28 10:49:02.178 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:02 compute-0 ceph-mon[76304]: pgmap v2546: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 24 KiB/s wr, 70 op/s
Feb 28 10:49:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3441047511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 11 KiB/s wr, 38 op/s
Feb 28 10:49:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.112621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744112697, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 250, "total_data_size": 160667, "memory_usage": 167888, "flush_reason": "Manual Compaction"}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744117808, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 158874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54114, "largest_seqno": 54449, "table_properties": {"data_size": 156734, "index_size": 304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5883, "raw_average_key_size": 20, "raw_value_size": 152535, "raw_average_value_size": 525, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275734, "oldest_key_time": 1772275734, "file_creation_time": 1772275744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 5236 microseconds, and 1487 cpu microseconds.
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.117860) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 158874 bytes OK
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.117882) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121368) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121407) EVENT_LOG_v1 {"time_micros": 1772275744121396, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 158353, prev total WAL file size 158353, number of live WAL files 2.
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.122043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303032' seq:72057594037927935, type:22 .. '6D6772737461740032323533' seq:0, type:0; will stop at (end)
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(155KB)], [125(10MB)]
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744122138, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11666584, "oldest_snapshot_seqno": -1}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7398 keys, 8336840 bytes, temperature: kUnknown
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744187776, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8336840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290871, "index_size": 26400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 193298, "raw_average_key_size": 26, "raw_value_size": 8162404, "raw_average_value_size": 1103, "num_data_blocks": 1021, "num_entries": 7398, "num_filter_entries": 7398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.188117) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8336840 bytes
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.191033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.5 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(125.9) write-amplify(52.5) OK, records in: 7905, records dropped: 507 output_compression: NoCompression
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.191083) EVENT_LOG_v1 {"time_micros": 1772275744191050, "job": 76, "event": "compaction_finished", "compaction_time_micros": 65727, "compaction_time_cpu_micros": 31470, "output_level": 6, "num_output_files": 1, "total_output_size": 8336840, "num_input_records": 7905, "num_output_records": 7398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744191256, "job": 76, "event": "table_file_deletion", "file_number": 127}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275744192706, "job": 76, "event": "table_file_deletion", "file_number": 125}
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.121939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:49:04.192906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:49:04 compute-0 ceph-mon[76304]: pgmap v2547: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 11 KiB/s wr, 38 op/s
Feb 28 10:49:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 9.8 KiB/s wr, 33 op/s
Feb 28 10:49:05 compute-0 nova_compute[243452]: 2026-02-28 10:49:05.952 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:06 compute-0 nova_compute[243452]: 2026-02-28 10:49:06.049 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:06 compute-0 nova_compute[243452]: 2026-02-28 10:49:06.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:06 compute-0 nova_compute[243452]: 2026-02-28 10:49:06.744 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:06 compute-0 ceph-mon[76304]: pgmap v2548: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 9.8 KiB/s wr, 33 op/s
Feb 28 10:49:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 9.8 KiB/s wr, 32 op/s
Feb 28 10:49:08 compute-0 ceph-mon[76304]: pgmap v2549: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 9.8 KiB/s wr, 32 op/s
Feb 28 10:49:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 28 10:49:10 compute-0 ceph-mon[76304]: pgmap v2550: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 28 10:49:11 compute-0 nova_compute[243452]: 2026-02-28 10:49:11.005 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 852 B/s wr, 14 op/s
Feb 28 10:49:11 compute-0 nova_compute[243452]: 2026-02-28 10:49:11.715 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275736.7144413, d47f4919-0816-4363-b2eb-fa6580859e88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:49:11 compute-0 nova_compute[243452]: 2026-02-28 10:49:11.716 243456 INFO nova.compute.manager [-] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] VM Stopped (Lifecycle Event)
Feb 28 10:49:11 compute-0 nova_compute[243452]: 2026-02-28 10:49:11.741 243456 DEBUG nova.compute.manager [None req-68313c09-2d81-41ba-a071-e48edcffaa1f - - - - - -] [instance: d47f4919-0816-4363-b2eb-fa6580859e88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:49:11 compute-0 nova_compute[243452]: 2026-02-28 10:49:11.745 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:12 compute-0 ceph-mon[76304]: pgmap v2551: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 852 B/s wr, 14 op/s
Feb 28 10:49:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:14 compute-0 ceph-mon[76304]: pgmap v2552: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:16 compute-0 nova_compute[243452]: 2026-02-28 10:49:16.007 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:16 compute-0 nova_compute[243452]: 2026-02-28 10:49:16.747 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:16 compute-0 ceph-mon[76304]: pgmap v2553: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:18 compute-0 ceph-mon[76304]: pgmap v2554: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:20 compute-0 ceph-mon[76304]: pgmap v2555: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:21 compute-0 nova_compute[243452]: 2026-02-28 10:49:21.010 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:21 compute-0 nova_compute[243452]: 2026-02-28 10:49:21.749 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:22 compute-0 ceph-mon[76304]: pgmap v2556: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:24 compute-0 ceph-mon[76304]: pgmap v2557: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:26 compute-0 nova_compute[243452]: 2026-02-28 10:49:26.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:26 compute-0 nova_compute[243452]: 2026-02-28 10:49:26.751 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:26 compute-0 ceph-mon[76304]: pgmap v2558: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:28 compute-0 ceph-mon[76304]: pgmap v2559: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:49:29
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr']
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:49:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:30 compute-0 podman[383161]: 2026-02-28 10:49:30.124584055 +0000 UTC m=+0.060388010 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 10:49:30 compute-0 podman[383160]: 2026-02-28 10:49:30.154089555 +0000 UTC m=+0.092128034 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:49:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:49:30 compute-0 ceph-mon[76304]: pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:31 compute-0 nova_compute[243452]: 2026-02-28 10:49:31.051 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:31 compute-0 nova_compute[243452]: 2026-02-28 10:49:31.754 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:32 compute-0 ceph-mon[76304]: pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.281 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.282 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.557 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.673 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.674 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.683 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.683 243456 INFO nova.compute.claims [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Claim successful on node compute-0.ctlplane.example.com
Feb 28 10:49:33 compute-0 nova_compute[243452]: 2026-02-28 10:49:33.792 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:49:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.363 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.370 243456 DEBUG nova.compute.provider_tree [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.389 243456 DEBUG nova.scheduler.client.report [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.408 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.409 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.475 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.475 243456 DEBUG nova.network.neutron [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.507 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.532 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.632 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.634 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.635 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating image(s)
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.672 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.701 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.732 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.737 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.778 243456 DEBUG nova.network.neutron [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.779 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.818 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.819 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.820 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.820 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.847 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:34 compute-0 nova_compute[243452]: 2026-02-28 10:49:34.853 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:34 compute-0 ceph-mon[76304]: pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:49:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2246775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.194 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.288 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] resizing rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 28 10:49:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.392 243456 DEBUG nova.objects.instance [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'migration_context' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.410 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.411 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Ensure instance console log exists: /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.412 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.413 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': 'f3755b31-d5fb-4f1b-9b58-6260fd65fc72'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.418 243456 WARNING nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.423 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.424 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.host [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.427 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-28T09:58:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2cc4a448-5c95-491f-964b-6bd23dfa5d7f',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-28T09:58:16Z,direct_url=<?>,disk_format='qcow2',id=f3755b31-d5fb-4f1b-9b58-6260fd65fc72,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb0c6377c53644999dac7645ece0c360',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-28T09:58:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.428 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.429 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.430 243456 DEBUG nova.virt.hardware [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 28 10:49:35 compute-0 nova_compute[243452]: 2026-02-28 10:49:35.432 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:49:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921002510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.007 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.040 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.046 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 28 10:49:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685980158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.658 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.660 243456 DEBUG nova.objects.instance [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.678 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] End _get_guest_xml xml=<domain type="kvm">
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <uuid>4daf2a6e-e18a-481f-8174-d36bb3f79438</uuid>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <name>instance-00000099</name>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <memory>131072</memory>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <vcpu>1</vcpu>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <metadata>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:name>tempest-AggregatesAdminTestJSON-server-345626621</nova:name>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:creationTime>2026-02-28 10:49:35</nova:creationTime>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:flavor name="m1.nano">
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:memory>128</nova:memory>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:disk>1</nova:disk>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:swap>0</nova:swap>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:ephemeral>0</nova:ephemeral>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:vcpus>1</nova:vcpus>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </nova:flavor>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:owner>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:user uuid="3b61ee2651c648abacf025ebb79ec599">tempest-AggregatesAdminTestJSON-1550774056-project-member</nova:user>
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <nova:project uuid="10cb603dbd5b46848d82ec2c2fad1311">tempest-AggregatesAdminTestJSON-1550774056</nova:project>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </nova:owner>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:root type="image" uuid="f3755b31-d5fb-4f1b-9b58-6260fd65fc72"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <nova:ports/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </nova:instance>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </metadata>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <sysinfo type="smbios">
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <system>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="manufacturer">RDO</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="product">OpenStack Compute</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="serial">4daf2a6e-e18a-481f-8174-d36bb3f79438</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="uuid">4daf2a6e-e18a-481f-8174-d36bb3f79438</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <entry name="family">Virtual Machine</entry>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </system>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </sysinfo>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <os>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <boot dev="hd"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <smbios mode="sysinfo"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </os>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <features>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <acpi/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <apic/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <vmcoreinfo/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </features>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <clock offset="utc">
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <timer name="pit" tickpolicy="delay"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <timer name="hpet" present="no"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </clock>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <cpu mode="host-model" match="exact">
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <topology sockets="1" cores="1" threads="1"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </cpu>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   <devices>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <disk type="network" device="disk">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4daf2a6e-e18a-481f-8174-d36bb3f79438_disk">
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <target dev="vda" bus="virtio"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <disk type="network" device="cdrom">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <driver type="raw" cache="none"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <source protocol="rbd" name="vms/4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config">
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <host name="192.168.122.100" port="6789"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </source>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <auth username="openstack">
Feb 28 10:49:36 compute-0 nova_compute[243452]:         <secret type="ceph" uuid="8f528268-ea2d-5d7b-af45-49b405fed6de"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       </auth>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <target dev="sda" bus="sata"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </disk>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <serial type="pty">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <log file="/var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/console.log" append="off"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </serial>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <video>
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <model type="virtio"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </video>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <input type="tablet" bus="usb"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <rng model="virtio">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <backend model="random">/dev/urandom</backend>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </rng>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="pci" model="pcie-root-port"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <controller type="usb" index="0"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     <memballoon model="virtio">
Feb 28 10:49:36 compute-0 nova_compute[243452]:       <stats period="10"/>
Feb 28 10:49:36 compute-0 nova_compute[243452]:     </memballoon>
Feb 28 10:49:36 compute-0 nova_compute[243452]:   </devices>
Feb 28 10:49:36 compute-0 nova_compute[243452]: </domain>
Feb 28 10:49:36 compute-0 nova_compute[243452]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.734 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.735 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.736 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Using config drive
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.772 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:36 compute-0 nova_compute[243452]: 2026-02-28 10:49:36.781 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:36 compute-0 ceph-mon[76304]: pgmap v2563: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 10:49:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1921002510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:49:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1685980158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.281 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Creating config drive at /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.289 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp68cth1hn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.443 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp68cth1hn" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.477 243456 DEBUG nova.storage.rbd_utils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] rbd image 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.483 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.672 243456 DEBUG oslo_concurrency.processutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config 4daf2a6e-e18a-481f-8174-d36bb3f79438_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:37 compute-0 nova_compute[243452]: 2026-02-28 10:49:37.674 243456 INFO nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deleting local config drive /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438/disk.config because it was imported into RBD.
Feb 28 10:49:37 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:49:37 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 10:49:37 compute-0 systemd-machined[209480]: New machine qemu-187-instance-00000099.
Feb 28 10:49:37 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.361 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275778.3605375, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.364 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Resumed (Lifecycle Event)
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.369 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.369 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.376 243456 INFO nova.virt.libvirt.driver [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance spawned successfully.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.377 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.432 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.442 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.443 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.444 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.444 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.445 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.446 243456 DEBUG nova.virt.libvirt.driver [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.454 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.512 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.513 243456 DEBUG nova.virt.driver [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] Emitting event <LifecycleEvent: 1772275778.362056, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.513 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Started (Lifecycle Event)
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.534 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.540 243456 DEBUG nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.550 243456 INFO nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 3.92 seconds to spawn the instance on the hypervisor.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.550 243456 DEBUG nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.575 243456 INFO nova.compute.manager [None req-c37a49fe-917c-499c-888d-591af3cdb252 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.648 243456 INFO nova.compute.manager [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 5.02 seconds to build instance.
Feb 28 10:49:38 compute-0 nova_compute[243452]: 2026-02-28 10:49:38.712 243456 DEBUG oslo_concurrency.lockutils [None req-6f1b82b8-75a8-4dba-b405-7edfb5c89a1c 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:38 compute-0 ceph-mon[76304]: pgmap v2564: 305 pgs: 305 active+clean; 163 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 728 KiB/s wr, 10 op/s
Feb 28 10:49:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 943 KiB/s wr, 24 op/s
Feb 28 10:49:40 compute-0 ceph-mon[76304]: pgmap v2565: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 943 KiB/s wr, 24 op/s
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003633719601574964 of space, bias 1.0, pg target 0.10901158804724892 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024940314264652076 of space, bias 1.0, pg target 0.7482094279395622 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:49:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.784 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.824 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.825 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.825 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.826 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.826 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.829 243456 INFO nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Terminating instance
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.831 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.832 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquired lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.832 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 28 10:49:41 compute-0 nova_compute[243452]: 2026-02-28 10:49:41.990 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.238 243456 DEBUG nova.network.neutron [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.253 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Releasing lock "refresh_cache-4daf2a6e-e18a-481f-8174-d36bb3f79438" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.254 243456 DEBUG nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 28 10:49:42 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Feb 28 10:49:42 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 4.619s CPU time.
Feb 28 10:49:42 compute-0 systemd-machined[209480]: Machine qemu-187-instance-00000099 terminated.
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.476 243456 INFO nova.virt.libvirt.driver [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance destroyed successfully.
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.477 243456 DEBUG nova.objects.instance [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lazy-loading 'resources' on Instance uuid 4daf2a6e-e18a-481f-8174-d36bb3f79438 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.826 243456 INFO nova.virt.libvirt.driver [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deleting instance files /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438_del
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.828 243456 INFO nova.virt.libvirt.driver [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deletion of /var/lib/nova/instances/4daf2a6e-e18a-481f-8174-d36bb3f79438_del complete
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.908 243456 INFO nova.compute.manager [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 0.65 seconds to destroy the instance on the hypervisor.
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG oslo.service.loopingcall [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 28 10:49:42 compute-0 nova_compute[243452]: 2026-02-28 10:49:42.909 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 28 10:49:42 compute-0 ceph-mon[76304]: pgmap v2566: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.274 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.295 243456 DEBUG nova.network.neutron [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.313 243456 INFO nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Took 0.40 seconds to deallocate network for instance.
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.351 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.351 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.399 243456 DEBUG oslo_concurrency.processutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:49:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:49:43 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242019216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.901 243456 DEBUG oslo_concurrency.processutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.910 243456 DEBUG nova.compute.provider_tree [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.939 243456 DEBUG nova.scheduler.client.report [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:49:43 compute-0 nova_compute[243452]: 2026-02-28 10:49:43.989 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:44 compute-0 ceph-mon[76304]: pgmap v2567: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Feb 28 10:49:44 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1242019216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:49:44 compute-0 nova_compute[243452]: 2026-02-28 10:49:44.044 243456 INFO nova.scheduler.client.report [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Deleted allocations for instance 4daf2a6e-e18a-481f-8174-d36bb3f79438
Feb 28 10:49:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:44 compute-0 nova_compute[243452]: 2026-02-28 10:49:44.118 243456 DEBUG oslo_concurrency.lockutils [None req-89505608-3879-46d7-99df-e92690a81709 3b61ee2651c648abacf025ebb79ec599 10cb603dbd5b46848d82ec2c2fad1311 - - default default] Lock "4daf2a6e-e18a-481f-8174-d36bb3f79438" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 10:49:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:49:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:49:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:49:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:49:46 compute-0 nova_compute[243452]: 2026-02-28 10:49:46.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:46 compute-0 ceph-mon[76304]: pgmap v2568: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Feb 28 10:49:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:49:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2095587030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:49:46 compute-0 nova_compute[243452]: 2026-02-28 10:49:46.787 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 28 10:49:48 compute-0 ceph-mon[76304]: pgmap v2569: 305 pgs: 305 active+clean; 171 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 28 10:49:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:49 compute-0 ovn_controller[146846]: 2026-02-28T10:49:49Z|01618|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Feb 28 10:49:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Feb 28 10:49:49 compute-0 sshd-session[383614]: Invalid user funded from 45.148.10.240 port 45768
Feb 28 10:49:49 compute-0 sshd-session[383614]: Connection closed by invalid user funded 45.148.10.240 port 45768 [preauth]
Feb 28 10:49:50 compute-0 ceph-mon[76304]: pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 885 KiB/s wr, 102 op/s
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.789 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.874 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:51 compute-0 nova_compute[243452]: 2026-02-28 10:49:51.875 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:49:52 compute-0 ceph-mon[76304]: pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 885 KiB/s wr, 102 op/s
Feb 28 10:49:53 compute-0 nova_compute[243452]: 2026-02-28 10:49:53.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 43 op/s
Feb 28 10:49:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:54 compute-0 sudo[383616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:49:54 compute-0 sudo[383616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:54 compute-0 sudo[383616]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:54 compute-0 sudo[383641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 10:49:54 compute-0 sudo[383641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:54 compute-0 ceph-mon[76304]: pgmap v2572: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 1.2 KiB/s wr, 43 op/s
Feb 28 10:49:54 compute-0 sudo[383641]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:49:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:49:54 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:54 compute-0 sudo[383686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:49:54 compute-0 sudo[383686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:54 compute-0 sudo[383686]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:54 compute-0 sudo[383711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:49:54 compute-0 sudo[383711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:55 compute-0 nova_compute[243452]: 2026-02-28 10:49:55.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Feb 28 10:49:55 compute-0 sudo[383711]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:49:55 compute-0 sudo[383767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:49:55 compute-0 sudo[383767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:55 compute-0 sudo[383767]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:55 compute-0 sudo[383792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:49:55 compute-0 sudo[383792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:49:55 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:49:55 compute-0 podman[383831]: 2026-02-28 10:49:55.931668136 +0000 UTC m=+0.049972024 container create 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:49:55 compute-0 systemd[1]: Started libpod-conmon-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope.
Feb 28 10:49:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:55.908375903 +0000 UTC m=+0.026679801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:56.02385435 +0000 UTC m=+0.142158228 container init 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:56.035145771 +0000 UTC m=+0.153449619 container start 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:56.040142533 +0000 UTC m=+0.158446381 container attach 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:49:56 compute-0 nifty_euler[383848]: 167 167
Feb 28 10:49:56 compute-0 systemd[1]: libpod-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope: Deactivated successfully.
Feb 28 10:49:56 compute-0 conmon[383848]: conmon 2573cd9d5674167a96f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope/container/memory.events
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:56.043654613 +0000 UTC m=+0.161958461 container died 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-79bb8f52362afa53bf30ae83f3562ef894ec4f50627471436486492575dc89e7-merged.mount: Deactivated successfully.
Feb 28 10:49:56 compute-0 podman[383831]: 2026-02-28 10:49:56.092615337 +0000 UTC m=+0.210919215 container remove 2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:49:56 compute-0 systemd[1]: libpod-conmon-2573cd9d5674167a96f2c3312aa58db0da002ba66e27a5e4dcb1de05ceb32435.scope: Deactivated successfully.
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.200 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.316911861 +0000 UTC m=+0.057530978 container create 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:56 compute-0 systemd[1]: Started libpod-conmon-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope.
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.296971754 +0000 UTC m=+0.037590891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.435328202 +0000 UTC m=+0.175947389 container init 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.44196021 +0000 UTC m=+0.182579317 container start 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.447444997 +0000 UTC m=+0.188064254 container attach 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:49:56 compute-0 ceph-mon[76304]: pgmap v2573: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Feb 28 10:49:56 compute-0 nova_compute[243452]: 2026-02-28 10:49:56.791 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:49:56 compute-0 awesome_carson[383891]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:49:56 compute-0 awesome_carson[383891]: --> All data devices are unavailable
Feb 28 10:49:56 compute-0 systemd[1]: libpod-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope: Deactivated successfully.
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.926831272 +0000 UTC m=+0.667450389 container died 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe27c403bcd1808fa533a3d09d54e30adc6ac7a53b88fb62ba231d7c0288bc2b-merged.mount: Deactivated successfully.
Feb 28 10:49:56 compute-0 podman[383874]: 2026-02-28 10:49:56.979329146 +0000 UTC m=+0.719948253 container remove 26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:49:56 compute-0 systemd[1]: libpod-conmon-26483610557f4fb1b4337e652b05b2e0a8bb7c84a6fa3dd5adddec52d48cce27.scope: Deactivated successfully.
Feb 28 10:49:57 compute-0 sudo[383792]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:57 compute-0 sudo[383923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:49:57 compute-0 sudo[383923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:57 compute-0 sudo[383923]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:57 compute-0 sudo[383948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:49:57 compute-0 sudo[383948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:57 compute-0 nova_compute[243452]: 2026-02-28 10:49:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:49:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.449500779 +0000 UTC m=+0.038085395 container create c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:49:57 compute-0 nova_compute[243452]: 2026-02-28 10:49:57.475 243456 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772275782.4739864, 4daf2a6e-e18a-481f-8174-d36bb3f79438 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 28 10:49:57 compute-0 nova_compute[243452]: 2026-02-28 10:49:57.476 243456 INFO nova.compute.manager [-] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] VM Stopped (Lifecycle Event)
Feb 28 10:49:57 compute-0 systemd[1]: Started libpod-conmon-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope.
Feb 28 10:49:57 compute-0 nova_compute[243452]: 2026-02-28 10:49:57.499 243456 DEBUG nova.compute.manager [None req-2be41c91-7702-4c9b-96f3-20fd2069abb5 - - - - - -] [instance: 4daf2a6e-e18a-481f-8174-d36bb3f79438] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 28 10:49:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.522940619 +0000 UTC m=+0.111525275 container init c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.433507724 +0000 UTC m=+0.022092390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.532144741 +0000 UTC m=+0.120729377 container start c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.535428735 +0000 UTC m=+0.124013411 container attach c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 10:49:57 compute-0 focused_lalande[384001]: 167 167
Feb 28 10:49:57 compute-0 systemd[1]: libpod-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope: Deactivated successfully.
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.539884152 +0000 UTC m=+0.128468778 container died c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3199dff106b29b9333eb2ffb44987002493b8a1229cf6f8d45a2ca69bac686a3-merged.mount: Deactivated successfully.
Feb 28 10:49:57 compute-0 podman[383985]: 2026-02-28 10:49:57.584248245 +0000 UTC m=+0.172832861 container remove c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:49:57 compute-0 systemd[1]: libpod-conmon-c3c68a9b28ed041b75f5222b92ed255ff777b5a20aef6f77be2f88e1ae191f1f.scope: Deactivated successfully.
Feb 28 10:49:57 compute-0 podman[384026]: 2026-02-28 10:49:57.775861729 +0000 UTC m=+0.056944522 container create 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:49:57 compute-0 systemd[1]: Started libpod-conmon-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope.
Feb 28 10:49:57 compute-0 podman[384026]: 2026-02-28 10:49:57.751034822 +0000 UTC m=+0.032117615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:57 compute-0 podman[384026]: 2026-02-28 10:49:57.887442545 +0000 UTC m=+0.168525398 container init 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:49:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.891 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:49:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:49:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:49:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:49:57 compute-0 podman[384026]: 2026-02-28 10:49:57.897784979 +0000 UTC m=+0.178867732 container start 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:49:57 compute-0 podman[384026]: 2026-02-28 10:49:57.901461884 +0000 UTC m=+0.182544677 container attach 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]: {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     "0": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "devices": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "/dev/loop3"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             ],
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_name": "ceph_lv0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_size": "21470642176",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "name": "ceph_lv0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "tags": {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_name": "ceph",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.crush_device_class": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.encrypted": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.objectstore": "bluestore",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_id": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.vdo": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.with_tpm": "0"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             },
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "vg_name": "ceph_vg0"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         }
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     ],
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     "1": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "devices": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "/dev/loop4"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             ],
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_name": "ceph_lv1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_size": "21470642176",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "name": "ceph_lv1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "tags": {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_name": "ceph",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.crush_device_class": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.encrypted": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.objectstore": "bluestore",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_id": "1",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.vdo": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.with_tpm": "0"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             },
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "vg_name": "ceph_vg1"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         }
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     ],
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     "2": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "devices": [
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "/dev/loop5"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             ],
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_name": "ceph_lv2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_size": "21470642176",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "name": "ceph_lv2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "tags": {
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.cluster_name": "ceph",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.crush_device_class": "",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.encrypted": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.objectstore": "bluestore",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osd_id": "2",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.vdo": "0",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:                 "ceph.with_tpm": "0"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             },
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "type": "block",
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:             "vg_name": "ceph_vg2"
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:         }
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]:     ]
Feb 28 10:49:58 compute-0 peaceful_dhawan[384042]: }
Feb 28 10:49:58 compute-0 systemd[1]: libpod-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope: Deactivated successfully.
Feb 28 10:49:58 compute-0 podman[384026]: 2026-02-28 10:49:58.223599713 +0000 UTC m=+0.504682476 container died 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5234b73bd0525111e52aa290403a1658e3731779ef09b3b49c37ac4833f4eb9b-merged.mount: Deactivated successfully.
Feb 28 10:49:58 compute-0 podman[384026]: 2026-02-28 10:49:58.283091366 +0000 UTC m=+0.564174119 container remove 74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:49:58 compute-0 systemd[1]: libpod-conmon-74397ac73190812c3db69eaf160d35710dd7f4d12ac180e0484bfc3e57ae78f3.scope: Deactivated successfully.
Feb 28 10:49:58 compute-0 sudo[383948]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:58 compute-0 sudo[384062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:49:58 compute-0 sudo[384062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:58 compute-0 sudo[384062]: pam_unix(sudo:session): session closed for user root
Feb 28 10:49:58 compute-0 sudo[384087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:49:58 compute-0 sudo[384087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:49:58 compute-0 ceph-mon[76304]: pgmap v2574: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.717685466 +0000 UTC m=+0.055310965 container create f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:49:58 compute-0 systemd[1]: Started libpod-conmon-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope.
Feb 28 10:49:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.694179277 +0000 UTC m=+0.031804796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.810817187 +0000 UTC m=+0.148442706 container init f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.819138044 +0000 UTC m=+0.156763553 container start f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.828150801 +0000 UTC m=+0.165776290 container attach f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:49:58 compute-0 systemd[1]: libpod-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope: Deactivated successfully.
Feb 28 10:49:58 compute-0 frosty_murdock[384140]: 167 167
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.830020364 +0000 UTC m=+0.167645833 container died f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:49:58 compute-0 conmon[384140]: conmon f64a58eaa696dc2a43a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope/container/memory.events
Feb 28 10:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8504375ece0231ec830d1f9392e599e43a804b8bd28c9b12b04a52d52c0d756c-merged.mount: Deactivated successfully.
Feb 28 10:49:58 compute-0 podman[384124]: 2026-02-28 10:49:58.8723786 +0000 UTC m=+0.210004089 container remove f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:49:58 compute-0 systemd[1]: libpod-conmon-f64a58eaa696dc2a43a6529f2c58d6302bdfac7f5cb41fb0ad14b0a23459b1f0.scope: Deactivated successfully.
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:59.016491422 +0000 UTC m=+0.040437662 container create 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:49:59 compute-0 systemd[1]: Started libpod-conmon-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope.
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:58.996885704 +0000 UTC m=+0.020831984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:49:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:49:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:59.132289548 +0000 UTC m=+0.156235838 container init 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:59.140411339 +0000 UTC m=+0.164357579 container start 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:59.143694242 +0000 UTC m=+0.167640482 container attach 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:49:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:49:59 compute-0 lvm[384260]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:49:59 compute-0 lvm[384261]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:49:59 compute-0 lvm[384261]: VG ceph_vg0 finished
Feb 28 10:49:59 compute-0 lvm[384260]: VG ceph_vg1 finished
Feb 28 10:49:59 compute-0 lvm[384263]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:49:59 compute-0 lvm[384263]: VG ceph_vg2 finished
Feb 28 10:49:59 compute-0 confident_einstein[384181]: {}
Feb 28 10:49:59 compute-0 systemd[1]: libpod-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Deactivated successfully.
Feb 28 10:49:59 compute-0 systemd[1]: libpod-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Consumed 1.201s CPU time.
Feb 28 10:49:59 compute-0 podman[384164]: 2026-02-28 10:49:59.984947767 +0000 UTC m=+1.008894007 container died 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 10:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-71da97c2ecef03159393e5698518c5cc1b12fa667dc45de26d1f9b07305067dd-merged.mount: Deactivated successfully.
Feb 28 10:50:00 compute-0 podman[384164]: 2026-02-28 10:50:00.028211478 +0000 UTC m=+1.052157728 container remove 41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_einstein, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 10:50:00 compute-0 systemd[1]: libpod-conmon-41854b6b0e20fac10b8fe6bfd919afc7a96447126fa590cab036d9f3418f076d.scope: Deactivated successfully.
Feb 28 10:50:00 compute-0 sudo[384087]: pam_unix(sudo:session): session closed for user root
Feb 28 10:50:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:50:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:50:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:50:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:50:00 compute-0 sudo[384277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:50:00 compute-0 sudo[384277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:50:00 compute-0 sudo[384277]: pam_unix(sudo:session): session closed for user root
Feb 28 10:50:00 compute-0 podman[384301]: 2026-02-28 10:50:00.248679103 +0000 UTC m=+0.060791881 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 10:50:00 compute-0 podman[384302]: 2026-02-28 10:50:00.310001799 +0000 UTC m=+0.112196165 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:00 compute-0 ceph-mon[76304]: pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:50:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:50:00 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:50:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:50:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991673712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:50:00 compute-0 nova_compute[243452]: 2026-02-28 10:50:00.909 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.106 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.108 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.98738141171634GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.108 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.109 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.171 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.172 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.186 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.203 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.204 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.218 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.233 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.243 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.262 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:50:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3991673712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:50:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:01.776 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.777 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:01 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:01.779 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:50:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:50:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267053798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.793 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.808 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.815 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.841 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.865 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:50:01 compute-0 nova_compute[243452]: 2026-02-28 10:50:01.866 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:50:02 compute-0 ceph-mon[76304]: pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2267053798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:50:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:04 compute-0 ceph-mon[76304]: pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:04 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:04.781 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:50:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:06 compute-0 nova_compute[243452]: 2026-02-28 10:50:06.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:06 compute-0 ceph-mon[76304]: pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:06 compute-0 nova_compute[243452]: 2026-02-28 10:50:06.795 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:06 compute-0 nova_compute[243452]: 2026-02-28 10:50:06.861 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:08 compute-0 ceph-mon[76304]: pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:10 compute-0 ceph-mon[76304]: pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:11 compute-0 nova_compute[243452]: 2026-02-28 10:50:11.353 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:11 compute-0 nova_compute[243452]: 2026-02-28 10:50:11.797 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:12 compute-0 ceph-mon[76304]: pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:14 compute-0 ceph-mon[76304]: pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 10:50:16 compute-0 nova_compute[243452]: 2026-02-28 10:50:16.388 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:16 compute-0 nova_compute[243452]: 2026-02-28 10:50:16.799 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:16 compute-0 ceph-mon[76304]: pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 10:50:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 10:50:18 compute-0 ceph-mon[76304]: pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 28 10:50:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:20 compute-0 ceph-mon[76304]: pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:21 compute-0 nova_compute[243452]: 2026-02-28 10:50:21.433 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:21 compute-0 nova_compute[243452]: 2026-02-28 10:50:21.802 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:22 compute-0 ceph-mon[76304]: pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:24 compute-0 ceph-mon[76304]: pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:26 compute-0 nova_compute[243452]: 2026-02-28 10:50:26.435 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:26 compute-0 nova_compute[243452]: 2026-02-28 10:50:26.804 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:26 compute-0 ceph-mon[76304]: pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 10:50:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 10:50:28 compute-0 ceph-mon[76304]: pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 10:50:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:50:29
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups']
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:50:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:50:30 compute-0 ceph-mon[76304]: pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:50:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:50:31 compute-0 podman[384395]: 2026-02-28 10:50:31.130384218 +0000 UTC m=+0.052151676 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:50:31 compute-0 podman[384394]: 2026-02-28 10:50:31.261811238 +0000 UTC m=+0.194437405 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 28 10:50:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:31 compute-0 nova_compute[243452]: 2026-02-28 10:50:31.437 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:31 compute-0 nova_compute[243452]: 2026-02-28 10:50:31.806 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:32 compute-0 ceph-mon[76304]: pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:34 compute-0 ceph-mon[76304]: pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:36 compute-0 nova_compute[243452]: 2026-02-28 10:50:36.465 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:36 compute-0 nova_compute[243452]: 2026-02-28 10:50:36.809 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:36 compute-0 ceph-mon[76304]: pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:38 compute-0 ceph-mon[76304]: pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:40 compute-0 ceph-mon[76304]: pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:41 compute-0 nova_compute[243452]: 2026-02-28 10:50:41.468 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5000234460926945e-05 of space, bias 1.0, pg target 0.004500070338278084 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494020776263654 of space, bias 1.0, pg target 0.7482062328790963 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:50:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:50:41 compute-0 nova_compute[243452]: 2026-02-28 10:50:41.811 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:42 compute-0 ceph-mon[76304]: pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb 28 10:50:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb 28 10:50:44 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb 28 10:50:44 compute-0 ceph-mon[76304]: pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:50:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:50:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:50:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:50:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:50:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:50:45 compute-0 ceph-mon[76304]: osdmap e291: 3 total, 3 up, 3 in
Feb 28 10:50:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:50:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4085572625' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:50:46 compute-0 nova_compute[243452]: 2026-02-28 10:50:46.493 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:46 compute-0 nova_compute[243452]: 2026-02-28 10:50:46.813 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:47 compute-0 ceph-mon[76304]: pgmap v2599: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:50:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:49 compute-0 ceph-mon[76304]: pgmap v2600: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:49 compute-0 nova_compute[243452]: 2026-02-28 10:50:49.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:51 compute-0 ceph-mon[76304]: pgmap v2601: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:50:51 compute-0 nova_compute[243452]: 2026-02-28 10:50:51.548 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:51 compute-0 nova_compute[243452]: 2026-02-28 10:50:51.815 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:52 compute-0 ceph-mon[76304]: pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:50:52 compute-0 nova_compute[243452]: 2026-02-28 10:50:52.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:52 compute-0 nova_compute[243452]: 2026-02-28 10:50:52.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:52 compute-0 nova_compute[243452]: 2026-02-28 10:50:52.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:50:53 compute-0 nova_compute[243452]: 2026-02-28 10:50:53.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:50:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb 28 10:50:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb 28 10:50:54 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb 28 10:50:54 compute-0 ceph-mon[76304]: pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:50:54 compute-0 ceph-mon[76304]: osdmap e292: 3 total, 3 up, 3 in
Feb 28 10:50:55 compute-0 nova_compute[243452]: 2026-02-28 10:50:55.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:56 compute-0 nova_compute[243452]: 2026-02-28 10:50:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:56 compute-0 ceph-mon[76304]: pgmap v2605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Feb 28 10:50:56 compute-0 nova_compute[243452]: 2026-02-28 10:50:56.550 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:56 compute-0 nova_compute[243452]: 2026-02-28 10:50:56.817 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:50:57 compute-0 nova_compute[243452]: 2026-02-28 10:50:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:57 compute-0 nova_compute[243452]: 2026-02-28 10:50:57.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:50:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.892 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:50:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:50:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:50:57.893 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:50:58 compute-0 nova_compute[243452]: 2026-02-28 10:50:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:50:58 compute-0 nova_compute[243452]: 2026-02-28 10:50:58.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:50:58 compute-0 nova_compute[243452]: 2026-02-28 10:50:58.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:50:58 compute-0 nova_compute[243452]: 2026-02-28 10:50:58.420 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:50:58 compute-0 ceph-mon[76304]: pgmap v2606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:50:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:50:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:51:00 compute-0 sudo[384440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:51:00 compute-0 sudo[384440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:00 compute-0 sudo[384440]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:00 compute-0 sudo[384465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:51:00 compute-0 sudo[384465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.383 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.385 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:51:00 compute-0 ceph-mon[76304]: pgmap v2607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 0 B/s wr, 1 op/s
Feb 28 10:51:00 compute-0 sudo[384465]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:51:00 compute-0 sudo[384541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:51:00 compute-0 sudo[384541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:00 compute-0 sudo[384541]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:51:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130511990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:51:00 compute-0 nova_compute[243452]: 2026-02-28 10:51:00.990 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:51:01 compute-0 sudo[384566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:51:01 compute-0 sudo[384566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.223 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.224 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.98738014232367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.224 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.225 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.360325914 +0000 UTC m=+0.056060437 container create d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.384 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:51:01 compute-0 systemd[1]: Started libpod-conmon-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope.
Feb 28 10:51:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.410 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.337837434 +0000 UTC m=+0.033571987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.437316245 +0000 UTC m=+0.133050818 container init d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:51:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.451481949 +0000 UTC m=+0.147216482 container start d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.457097918 +0000 UTC m=+0.152832451 container attach d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:51:01 compute-0 pedantic_goodall[384633]: 167 167
Feb 28 10:51:01 compute-0 systemd[1]: libpod-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope: Deactivated successfully.
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.458624472 +0000 UTC m=+0.154359015 container died d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:51:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-70744b1ea8b0bc9bc71b9766eb8c8300c7b9d4ed2adc96cc690e67523bcbaba8-merged.mount: Deactivated successfully.
Feb 28 10:51:01 compute-0 podman[384624]: 2026-02-28 10:51:01.501398919 +0000 UTC m=+0.098479094 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 28 10:51:01 compute-0 podman[384606]: 2026-02-28 10:51:01.515858001 +0000 UTC m=+0.211592524 container remove d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_goodall, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:51:01 compute-0 podman[384621]: 2026-02-28 10:51:01.516418427 +0000 UTC m=+0.111603958 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:51:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2130511990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:51:01 compute-0 systemd[1]: libpod-conmon-d230e5df6860d36cbe035a7355c56a6404d53ccd273ad74b11ba56dc6448940d.scope: Deactivated successfully.
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.556 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:01 compute-0 podman[384710]: 2026-02-28 10:51:01.68972003 +0000 UTC m=+0.049877401 container create 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:51:01 compute-0 systemd[1]: Started libpod-conmon-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope.
Feb 28 10:51:01 compute-0 podman[384710]: 2026-02-28 10:51:01.665400348 +0000 UTC m=+0.025557709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:01 compute-0 podman[384710]: 2026-02-28 10:51:01.784767245 +0000 UTC m=+0.144924626 container init 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:51:01 compute-0 podman[384710]: 2026-02-28 10:51:01.796628693 +0000 UTC m=+0.156786034 container start 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:51:01 compute-0 podman[384710]: 2026-02-28 10:51:01.801365788 +0000 UTC m=+0.161523209 container attach 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.819 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:51:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266926937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.990 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:51:01 compute-0 nova_compute[243452]: 2026-02-28 10:51:01.997 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:51:02 compute-0 nova_compute[243452]: 2026-02-28 10:51:02.067 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:51:02 compute-0 nova_compute[243452]: 2026-02-28 10:51:02.068 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:51:02 compute-0 nova_compute[243452]: 2026-02-28 10:51:02.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:51:02 compute-0 dazzling_hamilton[384727]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:51:02 compute-0 dazzling_hamilton[384727]: --> All data devices are unavailable
Feb 28 10:51:02 compute-0 systemd[1]: libpod-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope: Deactivated successfully.
Feb 28 10:51:02 compute-0 podman[384710]: 2026-02-28 10:51:02.26414977 +0000 UTC m=+0.624307121 container died 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-608ec62e99c7794d8464232817eece5920f5cb57c8adb7e90c62489fb3fe9f22-merged.mount: Deactivated successfully.
Feb 28 10:51:02 compute-0 podman[384710]: 2026-02-28 10:51:02.310603493 +0000 UTC m=+0.670760854 container remove 9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:51:02 compute-0 systemd[1]: libpod-conmon-9a4e1fffcba293f03ecde61abea714df9201157382bccce718023f7b71a619ed.scope: Deactivated successfully.
Feb 28 10:51:02 compute-0 sudo[384566]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:02 compute-0 sudo[384760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:51:02 compute-0 sudo[384760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:02 compute-0 sudo[384760]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:02 compute-0 sudo[384785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:51:02 compute-0 sudo[384785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.805778987 +0000 UTC m=+0.045221388 container create 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 10:51:02 compute-0 systemd[1]: Started libpod-conmon-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope.
Feb 28 10:51:02 compute-0 ceph-mon[76304]: pgmap v2608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/266926937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.791215793 +0000 UTC m=+0.030658204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.905936748 +0000 UTC m=+0.145379169 container init 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.914487631 +0000 UTC m=+0.153930032 container start 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.918415183 +0000 UTC m=+0.157857584 container attach 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:51:02 compute-0 stoic_wiles[384839]: 167 167
Feb 28 10:51:02 compute-0 systemd[1]: libpod-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope: Deactivated successfully.
Feb 28 10:51:02 compute-0 conmon[384839]: conmon 441766aeb07440128770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope/container/memory.events
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.921258844 +0000 UTC m=+0.160701265 container died 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f78a3bdc11e5f1927c896a98679f55a83a367e4370f29917b3d2e88d988ecb98-merged.mount: Deactivated successfully.
Feb 28 10:51:02 compute-0 podman[384822]: 2026-02-28 10:51:02.96360987 +0000 UTC m=+0.203052261 container remove 441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:51:02 compute-0 systemd[1]: libpod-conmon-441766aeb074401287704747df29232bd38ea799c3eb0f41a950578bff3496f9.scope: Deactivated successfully.
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.099384234 +0000 UTC m=+0.044143297 container create e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:51:03 compute-0 systemd[1]: Started libpod-conmon-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope.
Feb 28 10:51:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.081790953 +0000 UTC m=+0.026550026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.188280425 +0000 UTC m=+0.133039498 container init e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.194935044 +0000 UTC m=+0.139694087 container start e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.198565107 +0000 UTC m=+0.143324150 container attach e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:51:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:03 compute-0 sharp_borg[384880]: {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     "0": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "devices": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "/dev/loop3"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             ],
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_name": "ceph_lv0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_size": "21470642176",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "name": "ceph_lv0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "tags": {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_name": "ceph",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.crush_device_class": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.encrypted": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.objectstore": "bluestore",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_id": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.vdo": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.with_tpm": "0"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             },
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "vg_name": "ceph_vg0"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         }
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     ],
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     "1": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "devices": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "/dev/loop4"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             ],
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_name": "ceph_lv1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_size": "21470642176",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "name": "ceph_lv1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "tags": {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_name": "ceph",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.crush_device_class": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.encrypted": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.objectstore": "bluestore",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_id": "1",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.vdo": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.with_tpm": "0"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             },
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "vg_name": "ceph_vg1"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         }
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     ],
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     "2": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "devices": [
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "/dev/loop5"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             ],
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_name": "ceph_lv2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_size": "21470642176",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "name": "ceph_lv2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "tags": {
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.cluster_name": "ceph",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.crush_device_class": "",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.encrypted": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.objectstore": "bluestore",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osd_id": "2",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.vdo": "0",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:                 "ceph.with_tpm": "0"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             },
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "type": "block",
Feb 28 10:51:03 compute-0 sharp_borg[384880]:             "vg_name": "ceph_vg2"
Feb 28 10:51:03 compute-0 sharp_borg[384880]:         }
Feb 28 10:51:03 compute-0 sharp_borg[384880]:     ]
Feb 28 10:51:03 compute-0 sharp_borg[384880]: }
Feb 28 10:51:03 compute-0 systemd[1]: libpod-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope: Deactivated successfully.
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.510692492 +0000 UTC m=+0.455451585 container died e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 10:51:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-908be1e1ed284efe27dfdbc4051cf4f3e0b3cc780eea24ca123606da099ab0b5-merged.mount: Deactivated successfully.
Feb 28 10:51:03 compute-0 podman[384863]: 2026-02-28 10:51:03.552885683 +0000 UTC m=+0.497644726 container remove e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:51:03 compute-0 systemd[1]: libpod-conmon-e2c5bf247abe4bcda3d2b99ddd46b0ee7929f11059bd2a93fe5a2a31f9c916bc.scope: Deactivated successfully.
Feb 28 10:51:03 compute-0 sudo[384785]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:03 compute-0 sudo[384902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:51:03 compute-0 sudo[384902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:03 compute-0 sudo[384902]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:03 compute-0 sudo[384927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:51:03 compute-0 sudo[384927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.045575205 +0000 UTC m=+0.064709183 container create 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 10:51:04 compute-0 systemd[1]: Started libpod-conmon-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope.
Feb 28 10:51:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.01835018 +0000 UTC m=+0.037484178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.122134184 +0000 UTC m=+0.141268182 container init 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.128346241 +0000 UTC m=+0.147480179 container start 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.131679806 +0000 UTC m=+0.150813764 container attach 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 28 10:51:04 compute-0 serene_lamport[384980]: 167 167
Feb 28 10:51:04 compute-0 systemd[1]: libpod-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope: Deactivated successfully.
Feb 28 10:51:04 compute-0 conmon[384980]: conmon 5779fd0bca8c95411368 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope/container/memory.events
Feb 28 10:51:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.135770282 +0000 UTC m=+0.154904340 container died 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeddb8d30aa43e29dce9ac7cebd27422e9e453fd4c104c9142a1c2cfd8697cd8-merged.mount: Deactivated successfully.
Feb 28 10:51:04 compute-0 podman[384964]: 2026-02-28 10:51:04.181550115 +0000 UTC m=+0.200684053 container remove 5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lamport, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:51:04 compute-0 systemd[1]: libpod-conmon-5779fd0bca8c9541136806bcc0a1df9531a18dd8f91dc03a779f155e79465740.scope: Deactivated successfully.
Feb 28 10:51:04 compute-0 podman[385004]: 2026-02-28 10:51:04.386422606 +0000 UTC m=+0.065016291 container create 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:51:04 compute-0 systemd[1]: Started libpod-conmon-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope.
Feb 28 10:51:04 compute-0 podman[385004]: 2026-02-28 10:51:04.359187091 +0000 UTC m=+0.037780836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:51:04 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:51:04 compute-0 podman[385004]: 2026-02-28 10:51:04.495578843 +0000 UTC m=+0.174172578 container init 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:51:04 compute-0 podman[385004]: 2026-02-28 10:51:04.504776545 +0000 UTC m=+0.183370200 container start 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:51:04 compute-0 podman[385004]: 2026-02-28 10:51:04.511742733 +0000 UTC m=+0.190336438 container attach 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:51:04 compute-0 ceph-mon[76304]: pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:05 compute-0 lvm[385100]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:51:05 compute-0 lvm[385100]: VG ceph_vg1 finished
Feb 28 10:51:05 compute-0 lvm[385097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:51:05 compute-0 lvm[385097]: VG ceph_vg0 finished
Feb 28 10:51:05 compute-0 lvm[385102]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:51:05 compute-0 lvm[385102]: VG ceph_vg2 finished
Feb 28 10:51:05 compute-0 flamboyant_keldysh[385021]: {}
Feb 28 10:51:05 compute-0 systemd[1]: libpod-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Deactivated successfully.
Feb 28 10:51:05 compute-0 systemd[1]: libpod-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Consumed 1.332s CPU time.
Feb 28 10:51:05 compute-0 podman[385004]: 2026-02-28 10:51:05.380979815 +0000 UTC m=+1.059573470 container died 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-de802af4d2efc19574e0dc9c1dd219158ffe48d4055b430328ae5a826a9e1419-merged.mount: Deactivated successfully.
Feb 28 10:51:05 compute-0 podman[385004]: 2026-02-28 10:51:05.421538529 +0000 UTC m=+1.100132204 container remove 90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:51:05 compute-0 systemd[1]: libpod-conmon-90f42fa80a77ab2527c90f42a2e544bf24522c609f73c46437977dd1c7e776e5.scope: Deactivated successfully.
Feb 28 10:51:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:05 compute-0 sudo[384927]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:51:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:51:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:05 compute-0 sudo[385116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:51:05 compute-0 sudo[385116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:51:05 compute-0 sudo[385116]: pam_unix(sudo:session): session closed for user root
Feb 28 10:51:06 compute-0 ceph-mon[76304]: pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:51:06 compute-0 nova_compute[243452]: 2026-02-28 10:51:06.558 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:06 compute-0 nova_compute[243452]: 2026-02-28 10:51:06.822 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:08 compute-0 ceph-mon[76304]: pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:10 compute-0 ceph-mon[76304]: pgmap v2612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:11 compute-0 nova_compute[243452]: 2026-02-28 10:51:11.559 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:11 compute-0 nova_compute[243452]: 2026-02-28 10:51:11.824 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:12 compute-0 ceph-mon[76304]: pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:14 compute-0 ceph-mon[76304]: pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:15 compute-0 sshd-session[385141]: Received disconnect from 103.67.78.202 port 51790:11: Bye Bye [preauth]
Feb 28 10:51:15 compute-0 sshd-session[385141]: Disconnected from authenticating user root 103.67.78.202 port 51790 [preauth]
Feb 28 10:51:16 compute-0 nova_compute[243452]: 2026-02-28 10:51:16.562 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:16 compute-0 ceph-mon[76304]: pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:16 compute-0 nova_compute[243452]: 2026-02-28 10:51:16.827 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:17 compute-0 sshd-session[385143]: Received disconnect from 103.67.78.202 port 51804:11: Bye Bye [preauth]
Feb 28 10:51:17 compute-0 sshd-session[385143]: Disconnected from authenticating user root 103.67.78.202 port 51804 [preauth]
Feb 28 10:51:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:18 compute-0 ceph-mon[76304]: pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:20 compute-0 ceph-mon[76304]: pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:21 compute-0 nova_compute[243452]: 2026-02-28 10:51:21.564 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:21 compute-0 nova_compute[243452]: 2026-02-28 10:51:21.829 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:22 compute-0 ceph-mon[76304]: pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:24 compute-0 ceph-mon[76304]: pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:26 compute-0 nova_compute[243452]: 2026-02-28 10:51:26.565 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:26 compute-0 ceph-mon[76304]: pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:26 compute-0 nova_compute[243452]: 2026-02-28 10:51:26.830 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:28 compute-0 ceph-mon[76304]: pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:51:29
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:51:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:51:30 compute-0 ceph-mon[76304]: pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:51:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:51:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:31 compute-0 nova_compute[243452]: 2026-02-28 10:51:31.567 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:31 compute-0 nova_compute[243452]: 2026-02-28 10:51:31.832 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:32 compute-0 podman[385146]: 2026-02-28 10:51:32.141879123 +0000 UTC m=+0.076721285 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 28 10:51:32 compute-0 podman[385145]: 2026-02-28 10:51:32.201129579 +0000 UTC m=+0.136104765 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:51:32 compute-0 ceph-mon[76304]: pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:34 compute-0 ceph-mon[76304]: pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:36 compute-0 nova_compute[243452]: 2026-02-28 10:51:36.570 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:36 compute-0 ceph-mon[76304]: pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:36 compute-0 nova_compute[243452]: 2026-02-28 10:51:36.834 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:37 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 28 10:51:38 compute-0 ceph-mon[76304]: pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:40 compute-0 ceph-mon[76304]: pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5021395138358437e-05 of space, bias 1.0, pg target 0.004506418541507531 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000670922907129849 of space, bias 1.0, pg target 0.2012768721389547 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:51:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:51:41 compute-0 nova_compute[243452]: 2026-02-28 10:51:41.572 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:41 compute-0 nova_compute[243452]: 2026-02-28 10:51:41.836 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:42 compute-0 ceph-mon[76304]: pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:44 compute-0 ceph-mon[76304]: pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:51:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:51:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:51:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:51:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:51:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3915553886' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:51:46 compute-0 nova_compute[243452]: 2026-02-28 10:51:46.573 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:46 compute-0 ceph-mon[76304]: pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:46 compute-0 nova_compute[243452]: 2026-02-28 10:51:46.839 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:48 compute-0 ceph-mon[76304]: pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:50 compute-0 ceph-mon[76304]: pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:51 compute-0 nova_compute[243452]: 2026-02-28 10:51:51.576 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:51 compute-0 nova_compute[243452]: 2026-02-28 10:51:51.841 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:52 compute-0 nova_compute[243452]: 2026-02-28 10:51:52.070 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:52 compute-0 nova_compute[243452]: 2026-02-28 10:51:52.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:52 compute-0 nova_compute[243452]: 2026-02-28 10:51:52.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:52 compute-0 nova_compute[243452]: 2026-02-28 10:51:52.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:51:53 compute-0 ceph-mon[76304]: pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:54 compute-0 ceph-mon[76304]: pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:54 compute-0 nova_compute[243452]: 2026-02-28 10:51:54.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:56 compute-0 ceph-mon[76304]: pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:56 compute-0 nova_compute[243452]: 2026-02-28 10:51:56.577 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:56 compute-0 nova_compute[243452]: 2026-02-28 10:51:56.843 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:51:57 compute-0 nova_compute[243452]: 2026-02-28 10:51:57.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:57 compute-0 nova_compute[243452]: 2026-02-28 10:51:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:57 compute-0 nova_compute[243452]: 2026-02-28 10:51:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.894 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:51:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.894 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:51:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:51:57.895 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:51:58 compute-0 ceph-mon[76304]: pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:51:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:51:59 compute-0 nova_compute[243452]: 2026-02-28 10:51:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:51:59 compute-0 nova_compute[243452]: 2026-02-28 10:51:59.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:51:59 compute-0 nova_compute[243452]: 2026-02-28 10:51:59.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:51:59 compute-0 nova_compute[243452]: 2026-02-28 10:51:59.334 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:51:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:00 compute-0 ceph-mon[76304]: pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:01 compute-0 nova_compute[243452]: 2026-02-28 10:52:01.580 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:01 compute-0 nova_compute[243452]: 2026-02-28 10:52:01.845 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:52:02 compute-0 ceph-mon[76304]: pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:52:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151074578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:52:02 compute-0 nova_compute[243452]: 2026-02-28 10:52:02.867 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.012 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.013 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.98738014232367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.014 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.014 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.092 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.093 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.118 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:52:03 compute-0 podman[385218]: 2026-02-28 10:52:03.123205629 +0000 UTC m=+0.057268331 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 28 10:52:03 compute-0 podman[385217]: 2026-02-28 10:52:03.145580546 +0000 UTC m=+0.081853981 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:52:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3151074578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:52:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:52:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075238627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.635 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.642 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.662 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.665 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:52:03 compute-0 nova_compute[243452]: 2026-02-28 10:52:03.665 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:52:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:04 compute-0 ceph-mon[76304]: pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1075238627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:52:05 compute-0 sshd-session[385280]: Received disconnect from 103.217.144.161 port 58066:11: Bye Bye [preauth]
Feb 28 10:52:05 compute-0 sshd-session[385280]: Disconnected from authenticating user root 103.217.144.161 port 58066 [preauth]
Feb 28 10:52:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:05 compute-0 sudo[385282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:52:05 compute-0 sudo[385282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:05 compute-0 sudo[385282]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:05 compute-0 sudo[385307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 10:52:05 compute-0 sudo[385307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:06 compute-0 podman[385376]: 2026-02-28 10:52:06.208682632 +0000 UTC m=+0.088108708 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:52:06 compute-0 podman[385376]: 2026-02-28 10:52:06.318123318 +0000 UTC m=+0.197549344 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:52:06 compute-0 ceph-mon[76304]: pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:06 compute-0 nova_compute[243452]: 2026-02-28 10:52:06.582 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:06 compute-0 nova_compute[243452]: 2026-02-28 10:52:06.847 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:07 compute-0 sudo[385307]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:07 compute-0 sudo[385563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:52:07 compute-0 sudo[385563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:07 compute-0 sudo[385563]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:07 compute-0 sudo[385588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:52:07 compute-0 sudo[385588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:07 compute-0 sudo[385588]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:52:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:52:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:52:07 compute-0 sudo[385644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:52:07 compute-0 sudo[385644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:07 compute-0 sudo[385644]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:07 compute-0 sudo[385669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:52:07 compute-0 sudo[385669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:52:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.165900691 +0000 UTC m=+0.050191509 container create 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:52:08 compute-0 systemd[1]: Started libpod-conmon-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope.
Feb 28 10:52:08 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.237101488 +0000 UTC m=+0.121392296 container init 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.142444794 +0000 UTC m=+0.026735702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.242390799 +0000 UTC m=+0.126681647 container start 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.245931729 +0000 UTC m=+0.130222557 container attach 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:52:08 compute-0 cranky_curran[385724]: 167 167
Feb 28 10:52:08 compute-0 systemd[1]: libpod-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope: Deactivated successfully.
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.24769139 +0000 UTC m=+0.131982238 container died 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 10:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-821d3bad9b9f80400923a10234d51b979d7ff0c54b40cbb3c5c9572aa0b78188-merged.mount: Deactivated successfully.
Feb 28 10:52:08 compute-0 podman[385708]: 2026-02-28 10:52:08.293194405 +0000 UTC m=+0.177485253 container remove 15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:52:08 compute-0 systemd[1]: libpod-conmon-15ebd7e4cfb7e480baf08dd59e4b8c72dd133784b301f256c297168af17bdeae.scope: Deactivated successfully.
Feb 28 10:52:08 compute-0 podman[385748]: 2026-02-28 10:52:08.463672977 +0000 UTC m=+0.059867765 container create 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 10:52:08 compute-0 systemd[1]: Started libpod-conmon-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope.
Feb 28 10:52:08 compute-0 podman[385748]: 2026-02-28 10:52:08.439600122 +0000 UTC m=+0.035794960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:08 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:08 compute-0 podman[385748]: 2026-02-28 10:52:08.574711978 +0000 UTC m=+0.170906766 container init 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 10:52:08 compute-0 podman[385748]: 2026-02-28 10:52:08.589577531 +0000 UTC m=+0.185772319 container start 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:52:08 compute-0 podman[385748]: 2026-02-28 10:52:08.594297045 +0000 UTC m=+0.190491843 container attach 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:52:08 compute-0 nova_compute[243452]: 2026-02-28 10:52:08.663 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:09 compute-0 boring_wescoff[385765]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:52:09 compute-0 boring_wescoff[385765]: --> All data devices are unavailable
Feb 28 10:52:09 compute-0 ceph-mon[76304]: pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:09 compute-0 systemd[1]: libpod-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope: Deactivated successfully.
Feb 28 10:52:09 compute-0 podman[385748]: 2026-02-28 10:52:09.069410549 +0000 UTC m=+0.665605307 container died 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-25c8280b2dd8966153c24d0c57dadede76aa13aa8a9e4c9f1c3d6e1c16397e7d-merged.mount: Deactivated successfully.
Feb 28 10:52:09 compute-0 podman[385748]: 2026-02-28 10:52:09.116320764 +0000 UTC m=+0.712515522 container remove 3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 28 10:52:09 compute-0 systemd[1]: libpod-conmon-3ab36f7198b3103336d3b7651d9d75073c35caa1228681a8d58cc46fc496c046.scope: Deactivated successfully.
Feb 28 10:52:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:09 compute-0 sudo[385669]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:09 compute-0 sudo[385795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:52:09 compute-0 sudo[385795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:09 compute-0 sudo[385795]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:09 compute-0 sudo[385820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:52:09 compute-0 sudo[385820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.61257912 +0000 UTC m=+0.057783006 container create b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:52:09 compute-0 systemd[1]: Started libpod-conmon-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope.
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.583598625 +0000 UTC m=+0.028802541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.688812299 +0000 UTC m=+0.134016165 container init b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.697035834 +0000 UTC m=+0.142239720 container start b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:52:09 compute-0 festive_lewin[385873]: 167 167
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.700968255 +0000 UTC m=+0.146172151 container attach b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.701611694 +0000 UTC m=+0.146815560 container died b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:52:09 compute-0 systemd[1]: libpod-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope: Deactivated successfully.
Feb 28 10:52:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-554477ad0fff0bd02d12df5dbd787b68ae9a429f10ccf7dfca645d3b5766b3f0-merged.mount: Deactivated successfully.
Feb 28 10:52:09 compute-0 podman[385857]: 2026-02-28 10:52:09.745568005 +0000 UTC m=+0.190771861 container remove b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:52:09 compute-0 systemd[1]: libpod-conmon-b8e110c08f89ae5c2ddb1e9280be35c5f4ae05f4b73d10700f556be676ea5655.scope: Deactivated successfully.
Feb 28 10:52:09 compute-0 podman[385896]: 2026-02-28 10:52:09.902034619 +0000 UTC m=+0.043305564 container create da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:52:09 compute-0 systemd[1]: Started libpod-conmon-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope.
Feb 28 10:52:09 compute-0 podman[385896]: 2026-02-28 10:52:09.884393206 +0000 UTC m=+0.025664161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:10 compute-0 podman[385896]: 2026-02-28 10:52:10.021726575 +0000 UTC m=+0.162997540 container init da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:52:10 compute-0 podman[385896]: 2026-02-28 10:52:10.033728167 +0000 UTC m=+0.174999142 container start da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:52:10 compute-0 podman[385896]: 2026-02-28 10:52:10.037809423 +0000 UTC m=+0.179080408 container attach da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:52:10 compute-0 jovial_hoover[385913]: {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     "0": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "devices": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "/dev/loop3"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             ],
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_name": "ceph_lv0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_size": "21470642176",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "name": "ceph_lv0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "tags": {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_name": "ceph",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.crush_device_class": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.encrypted": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.objectstore": "bluestore",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_id": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.vdo": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.with_tpm": "0"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             },
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "vg_name": "ceph_vg0"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         }
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     ],
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     "1": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "devices": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "/dev/loop4"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             ],
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_name": "ceph_lv1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_size": "21470642176",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "name": "ceph_lv1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "tags": {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_name": "ceph",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.crush_device_class": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.encrypted": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.objectstore": "bluestore",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_id": "1",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.vdo": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.with_tpm": "0"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             },
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "vg_name": "ceph_vg1"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         }
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     ],
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     "2": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "devices": [
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "/dev/loop5"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             ],
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_name": "ceph_lv2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_size": "21470642176",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "name": "ceph_lv2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "tags": {
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.cluster_name": "ceph",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.crush_device_class": "",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.encrypted": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.objectstore": "bluestore",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osd_id": "2",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.vdo": "0",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:                 "ceph.with_tpm": "0"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             },
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "type": "block",
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:             "vg_name": "ceph_vg2"
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:         }
Feb 28 10:52:10 compute-0 jovial_hoover[385913]:     ]
Feb 28 10:52:10 compute-0 jovial_hoover[385913]: }
Feb 28 10:52:10 compute-0 systemd[1]: libpod-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope: Deactivated successfully.
Feb 28 10:52:10 compute-0 podman[385896]: 2026-02-28 10:52:10.369218496 +0000 UTC m=+0.510489481 container died da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-46fbaffdb1ab8d32c6d56416f7a74118de7941e09e00b9ce45de2029ea35bfc1-merged.mount: Deactivated successfully.
Feb 28 10:52:10 compute-0 podman[385896]: 2026-02-28 10:52:10.42343783 +0000 UTC m=+0.564708795 container remove da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:52:10 compute-0 systemd[1]: libpod-conmon-da217ae8830a4e6706841ee74e3297d84b3733e72befce1244f7481048872966.scope: Deactivated successfully.
Feb 28 10:52:10 compute-0 sudo[385820]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:10 compute-0 sudo[385936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:52:10 compute-0 sudo[385936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:10 compute-0 sudo[385936]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:10 compute-0 sudo[385961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:52:10 compute-0 sudo[385961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:10 compute-0 podman[386000]: 2026-02-28 10:52:10.970485981 +0000 UTC m=+0.058623870 container create b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:52:11 compute-0 systemd[1]: Started libpod-conmon-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope.
Feb 28 10:52:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:10.948113744 +0000 UTC m=+0.036251733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:11 compute-0 ceph-mon[76304]: pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:11.05968136 +0000 UTC m=+0.147819289 container init b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:11.066144334 +0000 UTC m=+0.154282223 container start b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:11.070641092 +0000 UTC m=+0.158779021 container attach b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:52:11 compute-0 charming_lichterman[386016]: 167 167
Feb 28 10:52:11 compute-0 systemd[1]: libpod-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope: Deactivated successfully.
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:11.07374165 +0000 UTC m=+0.161879539 container died b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-71a6cfd8f9c614f0b635e1582e15089515240a3e763b606f33304ad3e5e8e714-merged.mount: Deactivated successfully.
Feb 28 10:52:11 compute-0 podman[386000]: 2026-02-28 10:52:11.113298106 +0000 UTC m=+0.201436035 container remove b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:52:11 compute-0 systemd[1]: libpod-conmon-b9fbbaae5eaa49fec9bab6db24a6c3b18ac2af29353400f557d8e2b01f43d2ef.scope: Deactivated successfully.
Feb 28 10:52:11 compute-0 podman[386040]: 2026-02-28 10:52:11.293216907 +0000 UTC m=+0.059439933 container create 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 28 10:52:11 compute-0 nova_compute[243452]: 2026-02-28 10:52:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:11 compute-0 systemd[1]: Started libpod-conmon-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope.
Feb 28 10:52:11 compute-0 podman[386040]: 2026-02-28 10:52:11.269846132 +0000 UTC m=+0.036069238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:52:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:52:11 compute-0 podman[386040]: 2026-02-28 10:52:11.395494928 +0000 UTC m=+0.161718044 container init 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:52:11 compute-0 podman[386040]: 2026-02-28 10:52:11.405915995 +0000 UTC m=+0.172139061 container start 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:52:11 compute-0 podman[386040]: 2026-02-28 10:52:11.409701733 +0000 UTC m=+0.175924859 container attach 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 10:52:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:11 compute-0 nova_compute[243452]: 2026-02-28 10:52:11.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:11 compute-0 nova_compute[243452]: 2026-02-28 10:52:11.849 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:12 compute-0 lvm[386135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:52:12 compute-0 lvm[386135]: VG ceph_vg0 finished
Feb 28 10:52:12 compute-0 lvm[386136]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:52:12 compute-0 lvm[386136]: VG ceph_vg1 finished
Feb 28 10:52:12 compute-0 lvm[386138]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:52:12 compute-0 lvm[386138]: VG ceph_vg2 finished
Feb 28 10:52:12 compute-0 priceless_black[386057]: {}
Feb 28 10:52:12 compute-0 systemd[1]: libpod-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Deactivated successfully.
Feb 28 10:52:12 compute-0 podman[386040]: 2026-02-28 10:52:12.260957192 +0000 UTC m=+1.027180258 container died 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:52:12 compute-0 systemd[1]: libpod-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Consumed 1.214s CPU time.
Feb 28 10:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb81be24424f2e8e61a4c8c0b074940ace46f008c2aabcf50760ba305679c9a-merged.mount: Deactivated successfully.
Feb 28 10:52:12 compute-0 podman[386040]: 2026-02-28 10:52:12.313844167 +0000 UTC m=+1.080067203 container remove 2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:52:12 compute-0 systemd[1]: libpod-conmon-2dbea2cad8d18a26838741a370d1d91c2dab3b9314fe3ad94c11fdaecd92d044.scope: Deactivated successfully.
Feb 28 10:52:12 compute-0 sudo[385961]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:52:12 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:52:12 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:12 compute-0 sudo[386156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:52:12 compute-0 sudo[386156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:52:12 compute-0 sudo[386156]: pam_unix(sudo:session): session closed for user root
Feb 28 10:52:13 compute-0 ceph-mon[76304]: pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:52:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:15 compute-0 ceph-mon[76304]: pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:16 compute-0 nova_compute[243452]: 2026-02-28 10:52:16.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:16 compute-0 nova_compute[243452]: 2026-02-28 10:52:16.851 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:17 compute-0 ceph-mon[76304]: pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:19 compute-0 ceph-mon[76304]: pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.157730) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939157815, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1832, "num_deletes": 256, "total_data_size": 3081956, "memory_usage": 3119680, "flush_reason": "Manual Compaction"}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939172806, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3006658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54450, "largest_seqno": 56281, "table_properties": {"data_size": 2998183, "index_size": 5224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17062, "raw_average_key_size": 19, "raw_value_size": 2981230, "raw_average_value_size": 3474, "num_data_blocks": 233, "num_entries": 858, "num_filter_entries": 858, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275744, "oldest_key_time": 1772275744, "file_creation_time": 1772275939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15164 microseconds, and 7499 cpu microseconds.
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.172895) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3006658 bytes OK
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.172935) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174585) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174608) EVENT_LOG_v1 {"time_micros": 1772275939174600, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.174639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3074132, prev total WAL file size 3074132, number of live WAL files 2.
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.175524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323536' seq:72057594037927935, type:22 .. '6C6F676D0032353037' seq:0, type:0; will stop at (end)
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2936KB)], [128(8141KB)]
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939175595, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11343498, "oldest_snapshot_seqno": -1}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7728 keys, 11229095 bytes, temperature: kUnknown
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939237988, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11229095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11177467, "index_size": 31227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 201048, "raw_average_key_size": 26, "raw_value_size": 11039720, "raw_average_value_size": 1428, "num_data_blocks": 1228, "num_entries": 7728, "num_filter_entries": 7728, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.238586) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11229095 bytes
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.240155) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.7 rd, 178.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.7) OK, records in: 8256, records dropped: 528 output_compression: NoCompression
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.240186) EVENT_LOG_v1 {"time_micros": 1772275939240171, "job": 78, "event": "compaction_finished", "compaction_time_micros": 62762, "compaction_time_cpu_micros": 35832, "output_level": 6, "num_output_files": 1, "total_output_size": 11229095, "num_input_records": 8256, "num_output_records": 7728, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939241295, "job": 78, "event": "table_file_deletion", "file_number": 130}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275939242818, "job": 78, "event": "table_file_deletion", "file_number": 128}
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.175417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:19.243119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:20 compute-0 ceph-mon[76304]: pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:21 compute-0 nova_compute[243452]: 2026-02-28 10:52:21.095 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:21 compute-0 nova_compute[243452]: 2026-02-28 10:52:21.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:21 compute-0 nova_compute[243452]: 2026-02-28 10:52:21.853 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:22 compute-0 ceph-mon[76304]: pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:24 compute-0 sshd-session[386181]: Invalid user sol from 45.148.10.240 port 59266
Feb 28 10:52:24 compute-0 ceph-mon[76304]: pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:24 compute-0 sshd-session[386181]: Connection closed by invalid user sol 45.148.10.240 port 59266 [preauth]
Feb 28 10:52:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:26 compute-0 ceph-mon[76304]: pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:26 compute-0 nova_compute[243452]: 2026-02-28 10:52:26.593 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:26 compute-0 nova_compute[243452]: 2026-02-28 10:52:26.855 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:28 compute-0 ceph-mon[76304]: pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:52:29
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'images']
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:52:29 compute-0 nova_compute[243452]: 2026-02-28 10:52:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:29 compute-0 nova_compute[243452]: 2026-02-28 10:52:29.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:52:29 compute-0 nova_compute[243452]: 2026-02-28 10:52:29.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:52:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:30 compute-0 nova_compute[243452]: 2026-02-28 10:52:30.269 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:52:30 compute-0 ceph-mon[76304]: pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:52:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:52:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:31 compute-0 nova_compute[243452]: 2026-02-28 10:52:31.596 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:31 compute-0 nova_compute[243452]: 2026-02-28 10:52:31.859 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:32 compute-0 ceph-mon[76304]: pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:33 compute-0 nova_compute[243452]: 2026-02-28 10:52:33.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:33 compute-0 nova_compute[243452]: 2026-02-28 10:52:33.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:52:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:34 compute-0 podman[386184]: 2026-02-28 10:52:34.164360479 +0000 UTC m=+0.093567525 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 28 10:52:34 compute-0 podman[386183]: 2026-02-28 10:52:34.19532807 +0000 UTC m=+0.124574477 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:52:34 compute-0 ceph-mon[76304]: pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:36 compute-0 nova_compute[243452]: 2026-02-28 10:52:36.597 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:36 compute-0 ceph-mon[76304]: pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:36 compute-0 nova_compute[243452]: 2026-02-28 10:52:36.862 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:38 compute-0 ceph-mon[76304]: pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.623705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958623780, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 398, "num_deletes": 251, "total_data_size": 296459, "memory_usage": 303792, "flush_reason": "Manual Compaction"}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958628610, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 294051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56282, "largest_seqno": 56679, "table_properties": {"data_size": 291613, "index_size": 537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5785, "raw_average_key_size": 18, "raw_value_size": 286901, "raw_average_value_size": 922, "num_data_blocks": 24, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275940, "oldest_key_time": 1772275940, "file_creation_time": 1772275958, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 4939 microseconds, and 2084 cpu microseconds.
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.628656) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 294051 bytes OK
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.628678) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630367) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630391) EVENT_LOG_v1 {"time_micros": 1772275958630384, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 293926, prev total WAL file size 293926, number of live WAL files 2.
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630970) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(287KB)], [131(10MB)]
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958631026, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11523146, "oldest_snapshot_seqno": -1}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7529 keys, 9824285 bytes, temperature: kUnknown
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958675552, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9824285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9775221, "index_size": 29111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 197590, "raw_average_key_size": 26, "raw_value_size": 9642145, "raw_average_value_size": 1280, "num_data_blocks": 1130, "num_entries": 7529, "num_filter_entries": 7529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772275958, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.675954) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9824285 bytes
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.678149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.8 rd, 219.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(72.6) write-amplify(33.4) OK, records in: 8039, records dropped: 510 output_compression: NoCompression
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.678174) EVENT_LOG_v1 {"time_micros": 1772275958678162, "job": 80, "event": "compaction_finished", "compaction_time_micros": 44703, "compaction_time_cpu_micros": 22845, "output_level": 6, "num_output_files": 1, "total_output_size": 9824285, "num_input_records": 8039, "num_output_records": 7529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958678361, "job": 80, "event": "table_file_deletion", "file_number": 133}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772275958679445, "job": 80, "event": "table_file_deletion", "file_number": 131}
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.630802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:38 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:52:38.679490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:52:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:40 compute-0 ceph-mon[76304]: pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5021395138358437e-05 of space, bias 1.0, pg target 0.004506418541507531 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000670922907129849 of space, bias 1.0, pg target 0.2012768721389547 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:52:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:52:41 compute-0 nova_compute[243452]: 2026-02-28 10:52:41.600 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:41 compute-0 nova_compute[243452]: 2026-02-28 10:52:41.864 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:42 compute-0 ceph-mon[76304]: pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:44 compute-0 ceph-mon[76304]: pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:52:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:52:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:52:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:52:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:52:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:52:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3149900555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:52:46 compute-0 nova_compute[243452]: 2026-02-28 10:52:46.603 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:46 compute-0 ceph-mon[76304]: pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:46 compute-0 nova_compute[243452]: 2026-02-28 10:52:46.866 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:48 compute-0 ceph-mon[76304]: pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:50 compute-0 ceph-mon[76304]: pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:51 compute-0 nova_compute[243452]: 2026-02-28 10:52:51.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:51 compute-0 nova_compute[243452]: 2026-02-28 10:52:51.605 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:51 compute-0 nova_compute[243452]: 2026-02-28 10:52:51.870 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb 28 10:52:53 compute-0 ceph-mon[76304]: pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 28 10:52:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb 28 10:52:53 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb 28 10:52:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 28 10:52:54 compute-0 ceph-mon[76304]: osdmap e293: 3 total, 3 up, 3 in
Feb 28 10:52:54 compute-0 ceph-mon[76304]: pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 28 10:52:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:54 compute-0 nova_compute[243452]: 2026-02-28 10:52:54.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:54 compute-0 nova_compute[243452]: 2026-02-28 10:52:54.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:54 compute-0 nova_compute[243452]: 2026-02-28 10:52:54.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:52:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 716 B/s wr, 20 op/s
Feb 28 10:52:56 compute-0 nova_compute[243452]: 2026-02-28 10:52:56.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb 28 10:52:56 compute-0 ceph-mon[76304]: pgmap v2666: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 716 B/s wr, 20 op/s
Feb 28 10:52:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb 28 10:52:56 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb 28 10:52:56 compute-0 nova_compute[243452]: 2026-02-28 10:52:56.608 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:56 compute-0 nova_compute[243452]: 2026-02-28 10:52:56.872 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:52:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 895 B/s wr, 25 op/s
Feb 28 10:52:57 compute-0 ceph-mon[76304]: osdmap e294: 3 total, 3 up, 3 in
Feb 28 10:52:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.895 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:52:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.896 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:52:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:52:57.896 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:52:58 compute-0 nova_compute[243452]: 2026-02-28 10:52:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:58 compute-0 ceph-mon[76304]: pgmap v2668: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 895 B/s wr, 25 op/s
Feb 28 10:52:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:52:59 compute-0 nova_compute[243452]: 2026-02-28 10:52:59.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:59 compute-0 nova_compute[243452]: 2026-02-28 10:52:59.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:52:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 54 op/s
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:00 compute-0 ceph-mon[76304]: pgmap v2669: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 54 op/s
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.331 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:53:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.610 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:01 compute-0 nova_compute[243452]: 2026-02-28 10:53:01.874 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:02 compute-0 ceph-mon[76304]: pgmap v2670: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 58 op/s
Feb 28 10:53:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 49 op/s
Feb 28 10:53:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb 28 10:53:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb 28 10:53:04 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:53:04 compute-0 ceph-mon[76304]: pgmap v2671: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 49 op/s
Feb 28 10:53:04 compute-0 ceph-mon[76304]: osdmap e295: 3 total, 3 up, 3 in
Feb 28 10:53:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:53:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649519656' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:53:04 compute-0 nova_compute[243452]: 2026-02-28 10:53:04.908 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.147 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:53:05 compute-0 podman[386250]: 2026-02-28 10:53:05.14862071 +0000 UTC m=+0.079817003 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98737560398877GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.149 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:53:05 compute-0 podman[386249]: 2026-02-28 10:53:05.180852207 +0000 UTC m=+0.115058056 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.217 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.217 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.240 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:53:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 10:53:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2649519656' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:53:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:53:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462219130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.790 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.796 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.817 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.818 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:53:05 compute-0 nova_compute[243452]: 2026-02-28 10:53:05.819 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:53:06 compute-0 nova_compute[243452]: 2026-02-28 10:53:06.614 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:06 compute-0 ceph-mon[76304]: pgmap v2673: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 10:53:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1462219130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:53:06 compute-0 nova_compute[243452]: 2026-02-28 10:53:06.876 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Feb 28 10:53:08 compute-0 ceph-mon[76304]: pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Feb 28 10:53:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 6 op/s
Feb 28 10:53:10 compute-0 ceph-mon[76304]: pgmap v2675: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 6 op/s
Feb 28 10:53:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:53:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1377 writes, 6469 keys, 1377 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s
                                           Interval WAL: 1377 writes, 1377 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     81.0      0.85              0.18        40    0.021       0      0       0.0       0.0
                                             L6      1/0    9.37 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    163.0    138.8      2.45              0.93        39    0.063    244K    21K       0.0       0.0
                                            Sum      1/0    9.37 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9    121.1    123.9      3.30              1.11        79    0.042    244K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    142.8    145.7      0.47              0.22        12    0.039     47K   3100       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    163.0    138.8      2.45              0.93        39    0.063    244K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     81.5      0.84              0.18        39    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.067, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.40 GB write, 0.09 MB/s write, 0.39 GB read, 0.08 MB/s read, 3.3 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 44.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000477 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2812,42.95 MB,14.1284%) FilterBlock(80,687.30 KB,0.220786%) IndexBlock(80,1.13 MB,0.370151%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 10:53:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 64 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 6.4 MiB/s wr, 21 op/s
Feb 28 10:53:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb 28 10:53:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb 28 10:53:11 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb 28 10:53:11 compute-0 nova_compute[243452]: 2026-02-28 10:53:11.878 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:53:11 compute-0 nova_compute[243452]: 2026-02-28 10:53:11.879 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:53:11 compute-0 nova_compute[243452]: 2026-02-28 10:53:11.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:53:11 compute-0 nova_compute[243452]: 2026-02-28 10:53:11.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:53:12 compute-0 nova_compute[243452]: 2026-02-28 10:53:12.892 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:12 compute-0 nova_compute[243452]: 2026-02-28 10:53:12.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:53:12 compute-0 ceph-mon[76304]: pgmap v2676: 305 pgs: 305 active+clean; 64 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 6.4 MiB/s wr, 21 op/s
Feb 28 10:53:12 compute-0 ceph-mon[76304]: osdmap e296: 3 total, 3 up, 3 in
Feb 28 10:53:12 compute-0 sudo[386319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:53:12 compute-0 sudo[386319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:12 compute-0 sudo[386319]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:13 compute-0 sudo[386344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:53:13 compute-0 sudo[386344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 MiB/s wr, 24 op/s
Feb 28 10:53:13 compute-0 sudo[386344]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:53:13 compute-0 sudo[386403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:53:13 compute-0 sudo[386403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:13 compute-0 sudo[386403]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:13 compute-0 sudo[386428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:53:13 compute-0 sudo[386428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:53:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.009244534 +0000 UTC m=+0.060794851 container create 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:53:14 compute-0 systemd[1]: Started libpod-conmon-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope.
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:13.986110466 +0000 UTC m=+0.037660833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.101288364 +0000 UTC m=+0.152838711 container init 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.112174354 +0000 UTC m=+0.163724681 container start 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.115597282 +0000 UTC m=+0.167147629 container attach 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:53:14 compute-0 distracted_easley[386481]: 167 167
Feb 28 10:53:14 compute-0 systemd[1]: libpod-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope: Deactivated successfully.
Feb 28 10:53:14 compute-0 conmon[386481]: conmon 12bb7dea7785adb32ea6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope/container/memory.events
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.123807205 +0000 UTC m=+0.175357552 container died 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-37b085b6d3c3d884053068a0a68a5fa90f04f1c8e24717b306301de02e209303-merged.mount: Deactivated successfully.
Feb 28 10:53:14 compute-0 podman[386465]: 2026-02-28 10:53:14.160191811 +0000 UTC m=+0.211742128 container remove 12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:53:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:14 compute-0 systemd[1]: libpod-conmon-12bb7dea7785adb32ea6c472e5d98f8e17ddccd633dc14f9937921c1ada2281b.scope: Deactivated successfully.
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.362234882 +0000 UTC m=+0.071739513 container create 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:53:14 compute-0 systemd[1]: Started libpod-conmon-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope.
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.33653824 +0000 UTC m=+0.046042871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.473271162 +0000 UTC m=+0.182775773 container init 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.482739572 +0000 UTC m=+0.192244163 container start 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.486810308 +0000 UTC m=+0.196314959 container attach 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:53:14 compute-0 ceph-mon[76304]: pgmap v2678: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 MiB/s wr, 24 op/s
Feb 28 10:53:14 compute-0 busy_lewin[386523]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:53:14 compute-0 busy_lewin[386523]: --> All data devices are unavailable
Feb 28 10:53:14 compute-0 systemd[1]: libpod-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope: Deactivated successfully.
Feb 28 10:53:14 compute-0 podman[386507]: 2026-02-28 10:53:14.993212042 +0000 UTC m=+0.702716653 container died 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae327da7b264ddc702053ddf0e15e24783dfd8331462655001809ecc59ea3023-merged.mount: Deactivated successfully.
Feb 28 10:53:15 compute-0 podman[386507]: 2026-02-28 10:53:15.042051782 +0000 UTC m=+0.751556383 container remove 8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:53:15 compute-0 systemd[1]: libpod-conmon-8983e2e3cd1f4117697f630fdebc06ca66da91e8dbe438350a8ad73f0a99e05a.scope: Deactivated successfully.
Feb 28 10:53:15 compute-0 sudo[386428]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:15 compute-0 sudo[386555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:53:15 compute-0 sudo[386555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:15 compute-0 sudo[386555]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:15 compute-0 sudo[386580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:53:15 compute-0 sudo[386580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.550940157 +0000 UTC m=+0.051736704 container create 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 10:53:15 compute-0 systemd[1]: Started libpod-conmon-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope.
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.526039438 +0000 UTC m=+0.026836065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.638525 +0000 UTC m=+0.139321577 container init 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.647138765 +0000 UTC m=+0.147935312 container start 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.651018716 +0000 UTC m=+0.151815263 container attach 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:53:15 compute-0 dazzling_curran[386633]: 167 167
Feb 28 10:53:15 compute-0 systemd[1]: libpod-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope: Deactivated successfully.
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.653326261 +0000 UTC m=+0.154122808 container died 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a434642f36f9b64fa643f5b8471722acd448b277ef86cffa5f59bc1a7b7c17-merged.mount: Deactivated successfully.
Feb 28 10:53:15 compute-0 podman[386617]: 2026-02-28 10:53:15.703257023 +0000 UTC m=+0.204053600 container remove 62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:53:15 compute-0 systemd[1]: libpod-conmon-62b18ee212d0cf8fb28646de5dc5d9e69d1fa1d1a60503c7b98927f5b9cd31e0.scope: Deactivated successfully.
Feb 28 10:53:15 compute-0 podman[386657]: 2026-02-28 10:53:15.90235863 +0000 UTC m=+0.066937996 container create 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:53:15 compute-0 systemd[1]: Started libpod-conmon-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope.
Feb 28 10:53:15 compute-0 podman[386657]: 2026-02-28 10:53:15.873429246 +0000 UTC m=+0.038008682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:15 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:16 compute-0 podman[386657]: 2026-02-28 10:53:16.013655088 +0000 UTC m=+0.178234454 container init 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:53:16 compute-0 podman[386657]: 2026-02-28 10:53:16.028097518 +0000 UTC m=+0.192676884 container start 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:53:16 compute-0 podman[386657]: 2026-02-28 10:53:16.031961208 +0000 UTC m=+0.196540584 container attach 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]: {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     "0": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "devices": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "/dev/loop3"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             ],
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_name": "ceph_lv0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_size": "21470642176",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "name": "ceph_lv0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "tags": {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_name": "ceph",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.crush_device_class": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.encrypted": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.objectstore": "bluestore",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_id": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.vdo": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.with_tpm": "0"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             },
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "vg_name": "ceph_vg0"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         }
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     ],
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     "1": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "devices": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "/dev/loop4"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             ],
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_name": "ceph_lv1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_size": "21470642176",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "name": "ceph_lv1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "tags": {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_name": "ceph",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.crush_device_class": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.encrypted": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.objectstore": "bluestore",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_id": "1",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.vdo": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.with_tpm": "0"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             },
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "vg_name": "ceph_vg1"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         }
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     ],
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     "2": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "devices": [
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "/dev/loop5"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             ],
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_name": "ceph_lv2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_size": "21470642176",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "name": "ceph_lv2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "tags": {
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.cluster_name": "ceph",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.crush_device_class": "",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.encrypted": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.objectstore": "bluestore",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osd_id": "2",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.vdo": "0",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:                 "ceph.with_tpm": "0"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             },
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "type": "block",
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:             "vg_name": "ceph_vg2"
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:         }
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]:     ]
Feb 28 10:53:16 compute-0 confident_hodgkin[386673]: }
Feb 28 10:53:16 compute-0 systemd[1]: libpod-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope: Deactivated successfully.
Feb 28 10:53:16 compute-0 podman[386657]: 2026-02-28 10:53:16.363617408 +0000 UTC m=+0.528196784 container died 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1499730b5e5e4440e3434e1e42349046e0128589af7aefc1f811e8cc2707ca04-merged.mount: Deactivated successfully.
Feb 28 10:53:16 compute-0 podman[386657]: 2026-02-28 10:53:16.428053082 +0000 UTC m=+0.592632458 container remove 8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:53:16 compute-0 systemd[1]: libpod-conmon-8ba3724585dd8e94c8a640c5ab8f157b5bc0ed99eb93856e91a7f8fa52fa03ad.scope: Deactivated successfully.
Feb 28 10:53:16 compute-0 sudo[386580]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:16 compute-0 sudo[386694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:53:16 compute-0 sudo[386694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:16 compute-0 sudo[386694]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:16 compute-0 sudo[386719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:53:16 compute-0 sudo[386719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:16 compute-0 ceph-mon[76304]: pgmap v2679: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:16 compute-0 podman[386755]: 2026-02-28 10:53:16.940670443 +0000 UTC m=+0.067022909 container create 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:53:16 compute-0 systemd[1]: Started libpod-conmon-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope.
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:16.913246313 +0000 UTC m=+0.039598829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:17.026566538 +0000 UTC m=+0.152919044 container init 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:17.036180082 +0000 UTC m=+0.162532508 container start 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 10:53:17 compute-0 gallant_maxwell[386772]: 167 167
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:17.040904366 +0000 UTC m=+0.167256882 container attach 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 10:53:17 compute-0 systemd[1]: libpod-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope: Deactivated successfully.
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:17.041604036 +0000 UTC m=+0.167956472 container died 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca182b08646cfea93be199e60fd2ecfe981d31640e3ec949c030a119d3e0bfd5-merged.mount: Deactivated successfully.
Feb 28 10:53:17 compute-0 podman[386755]: 2026-02-28 10:53:17.081630755 +0000 UTC m=+0.207983211 container remove 5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:53:17 compute-0 systemd[1]: libpod-conmon-5caf54900070496f74d14fd34755540e39fe4c69ddc27aca0a0c4f040816356a.scope: Deactivated successfully.
Feb 28 10:53:17 compute-0 podman[386796]: 2026-02-28 10:53:17.263456281 +0000 UTC m=+0.050187050 container create f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:53:17 compute-0 systemd[1]: Started libpod-conmon-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope.
Feb 28 10:53:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:53:17 compute-0 podman[386796]: 2026-02-28 10:53:17.243516963 +0000 UTC m=+0.030247762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:53:17 compute-0 podman[386796]: 2026-02-28 10:53:17.360600896 +0000 UTC m=+0.147331695 container init f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:53:17 compute-0 podman[386796]: 2026-02-28 10:53:17.370795986 +0000 UTC m=+0.157526785 container start f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 10:53:17 compute-0 podman[386796]: 2026-02-28 10:53:17.375243053 +0000 UTC m=+0.161973852 container attach f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:53:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:17 compute-0 nova_compute[243452]: 2026-02-28 10:53:17.893 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:18 compute-0 lvm[386893]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:53:18 compute-0 lvm[386890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:53:18 compute-0 lvm[386893]: VG ceph_vg1 finished
Feb 28 10:53:18 compute-0 lvm[386890]: VG ceph_vg0 finished
Feb 28 10:53:18 compute-0 lvm[386895]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:53:18 compute-0 lvm[386895]: VG ceph_vg2 finished
Feb 28 10:53:18 compute-0 lvm[386896]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:53:18 compute-0 lvm[386896]: VG ceph_vg1 finished
Feb 28 10:53:18 compute-0 lvm[386898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:53:18 compute-0 lvm[386898]: VG ceph_vg1 finished
Feb 28 10:53:18 compute-0 cranky_easley[386813]: {}
Feb 28 10:53:18 compute-0 systemd[1]: libpod-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Deactivated successfully.
Feb 28 10:53:18 compute-0 systemd[1]: libpod-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Consumed 1.336s CPU time.
Feb 28 10:53:18 compute-0 podman[386796]: 2026-02-28 10:53:18.304906065 +0000 UTC m=+1.091636834 container died f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd353851c18f294195996993b6181df0850ecaf87e6c66a17f97ab92bf0fbf3d-merged.mount: Deactivated successfully.
Feb 28 10:53:18 compute-0 podman[386796]: 2026-02-28 10:53:18.3627096 +0000 UTC m=+1.149440389 container remove f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:53:18 compute-0 systemd[1]: libpod-conmon-f3839e435d31606cc2038b5e2e6caef49e0ec95376ab598271c1b9beb45d2ea1.scope: Deactivated successfully.
Feb 28 10:53:18 compute-0 sudo[386719]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:53:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:53:18 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:18 compute-0 sudo[386913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:53:18 compute-0 sudo[386913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:53:18 compute-0 sudo[386913]: pam_unix(sudo:session): session closed for user root
Feb 28 10:53:18 compute-0 ceph-mon[76304]: pgmap v2680: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:53:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:20 compute-0 ceph-mon[76304]: pgmap v2681: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 28 10:53:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 4.8 MiB/s wr, 11 op/s
Feb 28 10:53:22 compute-0 nova_compute[243452]: 2026-02-28 10:53:22.895 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:22 compute-0 nova_compute[243452]: 2026-02-28 10:53:22.897 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:22 compute-0 ceph-mon[76304]: pgmap v2682: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 4.8 MiB/s wr, 11 op/s
Feb 28 10:53:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 4.1 MiB/s wr, 10 op/s
Feb 28 10:53:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:24 compute-0 ceph-mon[76304]: pgmap v2683: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 4.1 MiB/s wr, 10 op/s
Feb 28 10:53:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 MiB/s wr, 8 op/s
Feb 28 10:53:26 compute-0 ceph-mon[76304]: pgmap v2684: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 MiB/s wr, 8 op/s
Feb 28 10:53:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:27 compute-0 nova_compute[243452]: 2026-02-28 10:53:27.898 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:53:28 compute-0 ceph-mon[76304]: pgmap v2685: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:53:29
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta']
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:53:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:53:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:53:30 compute-0 ceph-mon[76304]: pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:32 compute-0 nova_compute[243452]: 2026-02-28 10:53:32.899 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:32 compute-0 ceph-mon[76304]: pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:35 compute-0 ceph-mon[76304]: pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:36 compute-0 podman[386939]: 2026-02-28 10:53:36.171944844 +0000 UTC m=+0.100157992 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 10:53:36 compute-0 podman[386938]: 2026-02-28 10:53:36.1960434 +0000 UTC m=+0.123614109 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:53:37 compute-0 ceph-mon[76304]: pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:37 compute-0 nova_compute[243452]: 2026-02-28 10:53:37.901 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:39 compute-0 ceph-mon[76304]: pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:41 compute-0 ceph-mon[76304]: pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5158900728975258e-05 of space, bias 1.0, pg target 0.004547670218692577 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018285931247344293 of space, bias 1.0, pg target 0.5485779374203288 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.268684935180083e-07 of space, bias 4.0, pg target 0.00087224219222161 quantized to 16 (current 16)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:53:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.904 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.905 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.906 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:53:42 compute-0 nova_compute[243452]: 2026-02-28 10:53:42.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:53:43 compute-0 ceph-mon[76304]: pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:45 compute-0 ceph-mon[76304]: pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:53:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:53:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:53:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:53:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:53:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2965535958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:53:47 compute-0 ceph-mon[76304]: pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:47 compute-0 nova_compute[243452]: 2026-02-28 10:53:47.908 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:49 compute-0 ceph-mon[76304]: pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:51 compute-0 ceph-mon[76304]: pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:52 compute-0 nova_compute[243452]: 2026-02-28 10:53:52.909 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:53 compute-0 ceph-mon[76304]: pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:54 compute-0 nova_compute[243452]: 2026-02-28 10:53:54.819 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:55 compute-0 ceph-mon[76304]: pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:55 compute-0 nova_compute[243452]: 2026-02-28 10:53:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:55 compute-0 nova_compute[243452]: 2026-02-28 10:53:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:55 compute-0 nova_compute[243452]: 2026-02-28 10:53:55.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:53:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:57 compute-0 ceph-mon[76304]: pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:53:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:53:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:53:57.898 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:53:57 compute-0 nova_compute[243452]: 2026-02-28 10:53:57.910 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:57 compute-0 nova_compute[243452]: 2026-02-28 10:53:57.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:53:58 compute-0 nova_compute[243452]: 2026-02-28 10:53:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:59 compute-0 ceph-mon[76304]: pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:53:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:53:59 compute-0 nova_compute[243452]: 2026-02-28 10:53:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:53:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:00 compute-0 nova_compute[243452]: 2026-02-28 10:54:00.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:00 compute-0 nova_compute[243452]: 2026-02-28 10:54:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:01 compute-0 ceph-mon[76304]: pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:02 compute-0 nova_compute[243452]: 2026-02-28 10:54:02.911 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:02 compute-0 nova_compute[243452]: 2026-02-28 10:54:02.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:03 compute-0 ceph-mon[76304]: pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:03 compute-0 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:03 compute-0 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:54:03 compute-0 nova_compute[243452]: 2026-02-28 10:54:03.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:54:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.272 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.360 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.361 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.362 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:54:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:54:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464655689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:54:04 compute-0 nova_compute[243452]: 2026-02-28 10:54:04.875 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.048 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.98737189359963GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.050 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:54:05 compute-0 ceph-mon[76304]: pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1464655689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.319 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.319 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.386 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:54:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:54:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003571345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.984 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:54:05 compute-0 nova_compute[243452]: 2026-02-28 10:54:05.991 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:54:06 compute-0 nova_compute[243452]: 2026-02-28 10:54:06.026 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:54:06 compute-0 nova_compute[243452]: 2026-02-28 10:54:06.028 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:54:06 compute-0 nova_compute[243452]: 2026-02-28 10:54:06.028 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:54:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2003571345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:54:07 compute-0 podman[387030]: 2026-02-28 10:54:07.126392236 +0000 UTC m=+0.060682048 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:54:07 compute-0 ceph-mon[76304]: pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:07 compute-0 podman[387029]: 2026-02-28 10:54:07.153043725 +0000 UTC m=+0.092085213 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 28 10:54:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:07 compute-0 sshd-session[387027]: Received disconnect from 103.67.78.202 port 44674:11: Bye Bye [preauth]
Feb 28 10:54:07 compute-0 sshd-session[387027]: Disconnected from authenticating user root 103.67.78.202 port 44674 [preauth]
Feb 28 10:54:07 compute-0 nova_compute[243452]: 2026-02-28 10:54:07.912 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:07 compute-0 nova_compute[243452]: 2026-02-28 10:54:07.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:08 compute-0 ceph-mon[76304]: pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:09 compute-0 sshd-session[387071]: Received disconnect from 103.67.78.202 port 44682:11: Bye Bye [preauth]
Feb 28 10:54:09 compute-0 sshd-session[387071]: Disconnected from authenticating user root 103.67.78.202 port 44682 [preauth]
Feb 28 10:54:10 compute-0 ceph-mon[76304]: pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:12 compute-0 ceph-mon[76304]: pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:12 compute-0 nova_compute[243452]: 2026-02-28 10:54:12.913 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:12 compute-0 nova_compute[243452]: 2026-02-28 10:54:12.916 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:13 compute-0 nova_compute[243452]: 2026-02-28 10:54:13.025 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:14 compute-0 ceph-mon[76304]: pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:16 compute-0 ceph-mon[76304]: pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:54:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2352 writes, 8930 keys, 2352 commit groups, 1.0 writes per commit group, ingest: 9.76 MB, 0.02 MB/s
                                           Interval WAL: 2353 writes, 958 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:54:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:17 compute-0 nova_compute[243452]: 2026-02-28 10:54:17.915 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:18 compute-0 sudo[387073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:54:18 compute-0 sudo[387073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:18 compute-0 sudo[387073]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:18 compute-0 ceph-mon[76304]: pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:18 compute-0 sudo[387098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:54:18 compute-0 sudo[387098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:19 compute-0 sudo[387098]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:54:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:54:19 compute-0 sudo[387155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:54:19 compute-0 sudo[387155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:19 compute-0 sudo[387155]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:19 compute-0 sudo[387180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:54:19 compute-0 sudo[387180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:54:19 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.741102674 +0000 UTC m=+0.057609820 container create b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:54:19 compute-0 systemd[1]: Started libpod-conmon-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope.
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.711318217 +0000 UTC m=+0.027825413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.828484012 +0000 UTC m=+0.144991158 container init b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.838641051 +0000 UTC m=+0.155148197 container start b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.842704636 +0000 UTC m=+0.159211782 container attach b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:54:19 compute-0 charming_bouman[387235]: 167 167
Feb 28 10:54:19 compute-0 systemd[1]: libpod-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope: Deactivated successfully.
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.845327701 +0000 UTC m=+0.161834837 container died b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-293d51b4e8aa71b7b63902f8d278219eb8e47c851260dbee0731f6b86670217f-merged.mount: Deactivated successfully.
Feb 28 10:54:19 compute-0 podman[387218]: 2026-02-28 10:54:19.885946447 +0000 UTC m=+0.202453553 container remove b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:54:19 compute-0 systemd[1]: libpod-conmon-b9b3d78ee872f40fc52af82fa892f141d89f465a5168dfca790998d20bfea8e1.scope: Deactivated successfully.
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.075571945 +0000 UTC m=+0.057702234 container create f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:54:20 compute-0 systemd[1]: Started libpod-conmon-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope.
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.055271987 +0000 UTC m=+0.037402246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:20 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.181790698 +0000 UTC m=+0.163920997 container init f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.188700155 +0000 UTC m=+0.170830414 container start f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.191644068 +0000 UTC m=+0.173774327 container attach f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:54:20 compute-0 serene_shaw[387278]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:54:20 compute-0 serene_shaw[387278]: --> All data devices are unavailable
Feb 28 10:54:20 compute-0 ceph-mon[76304]: pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:20 compute-0 systemd[1]: libpod-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope: Deactivated successfully.
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.680547194 +0000 UTC m=+0.662677453 container died f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d7e64524252e0f512bfcaf65d905a93c097d5d3d36a8d58c39414c3c1e3da19-merged.mount: Deactivated successfully.
Feb 28 10:54:20 compute-0 podman[387261]: 2026-02-28 10:54:20.734307214 +0000 UTC m=+0.716437503 container remove f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:54:20 compute-0 systemd[1]: libpod-conmon-f311049750c0080ed5464b26b45cf94b28c2edf4719ae3f1acaec6da03b685f4.scope: Deactivated successfully.
Feb 28 10:54:20 compute-0 sudo[387180]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:20 compute-0 sudo[387310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:54:20 compute-0 sudo[387310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:20 compute-0 sudo[387310]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:20 compute-0 sudo[387335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:54:20 compute-0 sudo[387335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.212639689 +0000 UTC m=+0.030107118 container create 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:54:21 compute-0 systemd[1]: Started libpod-conmon-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope.
Feb 28 10:54:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.292520783 +0000 UTC m=+0.109988242 container init 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.198903638 +0000 UTC m=+0.016371087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.30192723 +0000 UTC m=+0.119394659 container start 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.305359628 +0000 UTC m=+0.122827057 container attach 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:54:21 compute-0 happy_swartz[387389]: 167 167
Feb 28 10:54:21 compute-0 systemd[1]: libpod-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope: Deactivated successfully.
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.306737887 +0000 UTC m=+0.124205356 container died 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 10:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ddce51e65ad4867c45a79dc40035f4d7116f7563491c6d2bc3de5cea330eb3a-merged.mount: Deactivated successfully.
Feb 28 10:54:21 compute-0 podman[387372]: 2026-02-28 10:54:21.345957074 +0000 UTC m=+0.163424503 container remove 8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 28 10:54:21 compute-0 systemd[1]: libpod-conmon-8584ee59b998ecfc4325f53a11ddf18e8f926a44acfd288ebf4301f0372e3d6d.scope: Deactivated successfully.
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.525859844 +0000 UTC m=+0.056270842 container create aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:54:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:21 compute-0 systemd[1]: Started libpod-conmon-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope.
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.503767266 +0000 UTC m=+0.034178264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:21 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.644010077 +0000 UTC m=+0.174421075 container init aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.65076359 +0000 UTC m=+0.181174568 container start aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.655187466 +0000 UTC m=+0.185598524 container attach aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:54:21 compute-0 distracted_pike[387428]: {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     "0": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "devices": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "/dev/loop3"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             ],
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_name": "ceph_lv0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_size": "21470642176",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "name": "ceph_lv0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "tags": {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_name": "ceph",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.crush_device_class": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.encrypted": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.objectstore": "bluestore",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_id": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.vdo": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.with_tpm": "0"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             },
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "vg_name": "ceph_vg0"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         }
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     ],
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     "1": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "devices": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "/dev/loop4"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             ],
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_name": "ceph_lv1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_size": "21470642176",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "name": "ceph_lv1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "tags": {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_name": "ceph",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.crush_device_class": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.encrypted": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.objectstore": "bluestore",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_id": "1",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.vdo": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.with_tpm": "0"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             },
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "vg_name": "ceph_vg1"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         }
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     ],
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     "2": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "devices": [
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "/dev/loop5"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             ],
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_name": "ceph_lv2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_size": "21470642176",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "name": "ceph_lv2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "tags": {
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.cluster_name": "ceph",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.crush_device_class": "",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.encrypted": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.objectstore": "bluestore",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osd_id": "2",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.vdo": "0",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:                 "ceph.with_tpm": "0"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             },
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "type": "block",
Feb 28 10:54:21 compute-0 distracted_pike[387428]:             "vg_name": "ceph_vg2"
Feb 28 10:54:21 compute-0 distracted_pike[387428]:         }
Feb 28 10:54:21 compute-0 distracted_pike[387428]:     ]
Feb 28 10:54:21 compute-0 distracted_pike[387428]: }
Feb 28 10:54:21 compute-0 systemd[1]: libpod-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope: Deactivated successfully.
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.950690757 +0000 UTC m=+0.481101755 container died aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ade6bd5dda0745f93572527b10928f9895de20071549c9e443adbe98b0b3f51d-merged.mount: Deactivated successfully.
Feb 28 10:54:21 compute-0 podman[387412]: 2026-02-28 10:54:21.996022077 +0000 UTC m=+0.526433035 container remove aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_pike, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:54:22 compute-0 systemd[1]: libpod-conmon-aa9101d2f076f541861ceb7ac4e4a7e4138aabf53fc46b14cb713c6aea35185a.scope: Deactivated successfully.
Feb 28 10:54:22 compute-0 sudo[387335]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:22 compute-0 sudo[387449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:54:22 compute-0 sudo[387449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:22 compute-0 sudo[387449]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:22 compute-0 sudo[387474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:54:22 compute-0 sudo[387474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.514997999 +0000 UTC m=+0.057511638 container create badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:54:22 compute-0 systemd[1]: Started libpod-conmon-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope.
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.49043537 +0000 UTC m=+0.032949049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.61234795 +0000 UTC m=+0.154861619 container init badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.617659761 +0000 UTC m=+0.160173400 container start badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.620601165 +0000 UTC m=+0.163114804 container attach badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:54:22 compute-0 systemd[1]: libpod-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope: Deactivated successfully.
Feb 28 10:54:22 compute-0 distracted_leavitt[387528]: 167 167
Feb 28 10:54:22 compute-0 conmon[387528]: conmon badf7ffe13c9b2e8e0f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope/container/memory.events
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.623477727 +0000 UTC m=+0.165991356 container died badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3220366f4075fd2beea108d08eedf8cf702a041f1c136951a6b7348d2f1fb95-merged.mount: Deactivated successfully.
Feb 28 10:54:22 compute-0 podman[387511]: 2026-02-28 10:54:22.659712798 +0000 UTC m=+0.202226437 container remove badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:54:22 compute-0 ceph-mon[76304]: pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:22 compute-0 systemd[1]: libpod-conmon-badf7ffe13c9b2e8e0f6406d39cca8cfadbeeff2233ae5bd2434c5b9838ea949.scope: Deactivated successfully.
Feb 28 10:54:22 compute-0 podman[387552]: 2026-02-28 10:54:22.823382647 +0000 UTC m=+0.041811101 container create 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 10:54:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:54:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.6 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                           Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:54:22 compute-0 systemd[1]: Started libpod-conmon-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope.
Feb 28 10:54:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:54:22 compute-0 podman[387552]: 2026-02-28 10:54:22.80663823 +0000 UTC m=+0.025066734 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:54:22 compute-0 podman[387552]: 2026-02-28 10:54:22.914307875 +0000 UTC m=+0.132736359 container init 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 10:54:22 compute-0 nova_compute[243452]: 2026-02-28 10:54:22.918 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:22 compute-0 podman[387552]: 2026-02-28 10:54:22.926140872 +0000 UTC m=+0.144569346 container start 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:54:22 compute-0 podman[387552]: 2026-02-28 10:54:22.932148933 +0000 UTC m=+0.150577467 container attach 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:54:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:23 compute-0 lvm[387644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:54:23 compute-0 lvm[387644]: VG ceph_vg0 finished
Feb 28 10:54:23 compute-0 lvm[387647]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:54:23 compute-0 lvm[387647]: VG ceph_vg1 finished
Feb 28 10:54:23 compute-0 lvm[387649]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:54:23 compute-0 lvm[387649]: VG ceph_vg2 finished
Feb 28 10:54:23 compute-0 lvm[387650]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:54:23 compute-0 lvm[387650]: VG ceph_vg1 finished
Feb 28 10:54:23 compute-0 dreamy_thompson[387568]: {}
Feb 28 10:54:23 compute-0 systemd[1]: libpod-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Deactivated successfully.
Feb 28 10:54:23 compute-0 systemd[1]: libpod-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Consumed 1.260s CPU time.
Feb 28 10:54:23 compute-0 podman[387653]: 2026-02-28 10:54:23.845041807 +0000 UTC m=+0.020063752 container died 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e8a68d3fa5ba030da4bd92188087dab359ef51da214132850dfd270b8bab7f7-merged.mount: Deactivated successfully.
Feb 28 10:54:23 compute-0 podman[387653]: 2026-02-28 10:54:23.887231778 +0000 UTC m=+0.062253723 container remove 1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:54:23 compute-0 systemd[1]: libpod-conmon-1dd75f9dd98c2b4126b7c6f75e289e3ef16fe07130e8d273eca556cc4d904352.scope: Deactivated successfully.
Feb 28 10:54:23 compute-0 sudo[387474]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:54:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:54:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:24 compute-0 sudo[387668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:54:24 compute-0 sudo[387668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:54:24 compute-0 sudo[387668]: pam_unix(sudo:session): session closed for user root
Feb 28 10:54:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:24 compute-0 ceph-mon[76304]: pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:54:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:26 compute-0 ceph-mon[76304]: pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:27 compute-0 nova_compute[243452]: 2026-02-28 10:54:27.921 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:27 compute-0 nova_compute[243452]: 2026-02-28 10:54:27.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 10:54:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s
                                           Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 10:54:28 compute-0 ceph-mon[76304]: pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:54:29
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:54:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:54:30 compute-0 ceph-mon[76304]: pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:54:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:54:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 10:54:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:32 compute-0 nova_compute[243452]: 2026-02-28 10:54:32.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:32 compute-0 nova_compute[243452]: 2026-02-28 10:54:32.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:32 compute-0 ceph-mon[76304]: pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:34 compute-0 ceph-mon[76304]: pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:37 compute-0 ceph-mon[76304]: pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:37 compute-0 nova_compute[243452]: 2026-02-28 10:54:37.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:38 compute-0 podman[387694]: 2026-02-28 10:54:38.137518015 +0000 UTC m=+0.070610071 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:54:38 compute-0 podman[387693]: 2026-02-28 10:54:38.165710227 +0000 UTC m=+0.099506763 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:54:39 compute-0 ceph-mon[76304]: pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:41 compute-0 ceph-mon[76304]: pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5158900728975258e-05 of space, bias 1.0, pg target 0.004547670218692577 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018285931247344293 of space, bias 1.0, pg target 0.5485779374203288 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.268684935180083e-07 of space, bias 4.0, pg target 0.00087224219222161 quantized to 16 (current 16)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:54:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.927 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:54:42 compute-0 nova_compute[243452]: 2026-02-28 10:54:42.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:43 compute-0 ceph-mon[76304]: pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:45 compute-0 ceph-mon[76304]: pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:54:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:54:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:54:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:54:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:54:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2809831218' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:54:47 compute-0 ceph-mon[76304]: pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:47 compute-0 nova_compute[243452]: 2026-02-28 10:54:47.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:47 compute-0 nova_compute[243452]: 2026-02-28 10:54:47.930 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:49 compute-0 ceph-mon[76304]: pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:51 compute-0 ceph-mon[76304]: pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:52 compute-0 ceph-mon[76304]: pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:52 compute-0 nova_compute[243452]: 2026-02-28 10:54:52.931 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:53 compute-0 nova_compute[243452]: 2026-02-28 10:54:53.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:54 compute-0 ceph-mon[76304]: pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:55 compute-0 nova_compute[243452]: 2026-02-28 10:54:55.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:55 compute-0 nova_compute[243452]: 2026-02-28 10:54:55.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:54:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:56 compute-0 ceph-mon[76304]: pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:57 compute-0 nova_compute[243452]: 2026-02-28 10:54:57.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:54:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:54:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:54:57.897 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:54:57 compute-0 nova_compute[243452]: 2026-02-28 10:54:57.932 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:54:58 compute-0 nova_compute[243452]: 2026-02-28 10:54:58.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:54:58 compute-0 ceph-mon[76304]: pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:54:58 compute-0 sshd-session[387736]: Invalid user sol from 45.148.10.240 port 46262
Feb 28 10:54:59 compute-0 sshd-session[387736]: Connection closed by invalid user sol 45.148.10.240 port 46262 [preauth]
Feb 28 10:54:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:54:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:00 compute-0 nova_compute[243452]: 2026-02-28 10:55:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:00 compute-0 nova_compute[243452]: 2026-02-28 10:55:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:00 compute-0 ceph-mon[76304]: pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:55:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb 28 10:55:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb 28 10:55:01 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb 28 10:55:02 compute-0 nova_compute[243452]: 2026-02-28 10:55:02.313 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:02 compute-0 ceph-mon[76304]: pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 341 B/s wr, 4 op/s
Feb 28 10:55:02 compute-0 ceph-mon[76304]: osdmap e297: 3 total, 3 up, 3 in
Feb 28 10:55:02 compute-0 nova_compute[243452]: 2026-02-28 10:55:02.934 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.4 MiB/s wr, 10 op/s
Feb 28 10:55:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb 28 10:55:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb 28 10:55:03 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb 28 10:55:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:04 compute-0 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:04 compute-0 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:55:04 compute-0 nova_compute[243452]: 2026-02-28 10:55:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:55:04 compute-0 nova_compute[243452]: 2026-02-28 10:55:04.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:55:04 compute-0 ceph-mon[76304]: pgmap v2734: 305 pgs: 305 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.4 MiB/s wr, 10 op/s
Feb 28 10:55:04 compute-0 ceph-mon[76304]: osdmap e298: 3 total, 3 up, 3 in
Feb 28 10:55:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.343 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:55:06 compute-0 ceph-mon[76304]: pgmap v2736: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 10:55:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:55:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825881743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:55:06 compute-0 nova_compute[243452]: 2026-02-28 10:55:06.970 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.137 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.139 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.987370564602315GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.139 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.140 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.214 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.269 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.270 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.290 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.309 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.338 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:55:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 10:55:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1825881743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:55:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:55:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244069651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.936 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.951 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.959 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.977 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.979 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:55:07 compute-0 nova_compute[243452]: 2026-02-28 10:55:07.980 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:55:08 compute-0 ceph-mon[76304]: pgmap v2737: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 28 10:55:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3244069651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:55:09 compute-0 podman[387785]: 2026-02-28 10:55:09.189277208 +0000 UTC m=+0.053326669 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 10:55:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:09 compute-0 podman[387784]: 2026-02-28 10:55:09.322830191 +0000 UTC m=+0.187645205 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:55:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Feb 28 10:55:10 compute-0 ceph-mon[76304]: pgmap v2738: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Feb 28 10:55:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Feb 28 10:55:12 compute-0 ceph-mon[76304]: pgmap v2739: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Feb 28 10:55:12 compute-0 nova_compute[243452]: 2026-02-28 10:55:12.939 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Feb 28 10:55:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:14 compute-0 ceph-mon[76304]: pgmap v2740: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Feb 28 10:55:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Feb 28 10:55:16 compute-0 ceph-mon[76304]: pgmap v2741: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Feb 28 10:55:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 3 op/s
Feb 28 10:55:17 compute-0 nova_compute[243452]: 2026-02-28 10:55:17.940 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:18 compute-0 ceph-mon[76304]: pgmap v2742: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 3 op/s
Feb 28 10:55:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 20 op/s
Feb 28 10:55:20 compute-0 ceph-mon[76304]: pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 20 op/s
Feb 28 10:55:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb 28 10:55:22 compute-0 ceph-mon[76304]: pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Feb 28 10:55:22 compute-0 nova_compute[243452]: 2026-02-28 10:55:22.942 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Feb 28 10:55:24 compute-0 sudo[387826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:55:24 compute-0 sudo[387826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:24 compute-0 sudo[387826]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:24 compute-0 sudo[387851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:55:24 compute-0 sudo[387851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:24 compute-0 sudo[387851]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:55:24 compute-0 sudo[387907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:55:24 compute-0 sudo[387907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:24 compute-0 sudo[387907]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:24 compute-0 sudo[387932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:55:24 compute-0 sudo[387932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:24 compute-0 ceph-mon[76304]: pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:55:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.093252525 +0000 UTC m=+0.058487247 container create c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:55:25 compute-0 systemd[1]: Started libpod-conmon-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope.
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.068138474 +0000 UTC m=+0.033373276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.201808899 +0000 UTC m=+0.167043701 container init c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.210448064 +0000 UTC m=+0.175682826 container start c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.215197969 +0000 UTC m=+0.180432801 container attach c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 10:55:25 compute-0 friendly_hellman[387984]: 167 167
Feb 28 10:55:25 compute-0 systemd[1]: libpod-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope: Deactivated successfully.
Feb 28 10:55:25 compute-0 conmon[387984]: conmon c9ae557c8427eb434a95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope/container/memory.events
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.220596101 +0000 UTC m=+0.185830863 container died c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bbf8f160e9b71d381363a797f37c702638ab133a0e27cff813a112423d7c1ce-merged.mount: Deactivated successfully.
Feb 28 10:55:25 compute-0 podman[387968]: 2026-02-28 10:55:25.266886082 +0000 UTC m=+0.232120814 container remove c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 10:55:25 compute-0 systemd[1]: libpod-conmon-c9ae557c8427eb434a95c2e226acc08500b2e388fabd8191cd405d7a3c16b18a.scope: Deactivated successfully.
Feb 28 10:55:25 compute-0 podman[388009]: 2026-02-28 10:55:25.451915883 +0000 UTC m=+0.062209023 container create 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:55:25 compute-0 systemd[1]: Started libpod-conmon-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope.
Feb 28 10:55:25 compute-0 podman[388009]: 2026-02-28 10:55:25.425897786 +0000 UTC m=+0.036190946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:25 compute-0 podman[388009]: 2026-02-28 10:55:25.539872944 +0000 UTC m=+0.150166084 container init 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:55:25 compute-0 podman[388009]: 2026-02-28 10:55:25.548115717 +0000 UTC m=+0.158408847 container start 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:55:25 compute-0 podman[388009]: 2026-02-28 10:55:25.551617936 +0000 UTC m=+0.161911066 container attach 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:55:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 10:55:26 compute-0 keen_khorana[388026]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:55:26 compute-0 keen_khorana[388026]: --> All data devices are unavailable
Feb 28 10:55:26 compute-0 systemd[1]: libpod-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope: Deactivated successfully.
Feb 28 10:55:26 compute-0 podman[388009]: 2026-02-28 10:55:26.069197325 +0000 UTC m=+0.679490535 container died 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5b5ef49cdbd652595e758c7f20631d53492a1629daeadeb1d2c67406dc50234-merged.mount: Deactivated successfully.
Feb 28 10:55:26 compute-0 podman[388009]: 2026-02-28 10:55:26.118653155 +0000 UTC m=+0.728946305 container remove 22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:55:26 compute-0 systemd[1]: libpod-conmon-22ae4f966bb46c6c6d8a108db243d24fbd8495ec0edf07cfbc4a80e4f0165463.scope: Deactivated successfully.
Feb 28 10:55:26 compute-0 sudo[387932]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:26 compute-0 sudo[388058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:55:26 compute-0 sudo[388058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:26 compute-0 sudo[388058]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:26 compute-0 sudo[388083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:55:26 compute-0 sudo[388083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.655402187 +0000 UTC m=+0.042704311 container create 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:55:26 compute-0 systemd[1]: Started libpod-conmon-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope.
Feb 28 10:55:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.6311507 +0000 UTC m=+0.018452844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.733122018 +0000 UTC m=+0.120424222 container init 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.741484365 +0000 UTC m=+0.128786489 container start 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.745317833 +0000 UTC m=+0.132620007 container attach 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 10:55:26 compute-0 distracted_buck[388136]: 167 167
Feb 28 10:55:26 compute-0 systemd[1]: libpod-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope: Deactivated successfully.
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.747974179 +0000 UTC m=+0.135276353 container died 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee98b954bc9473f5054931cdffad1f550672fc10910c3fef57181b024a1d4848-merged.mount: Deactivated successfully.
Feb 28 10:55:26 compute-0 podman[388120]: 2026-02-28 10:55:26.785131281 +0000 UTC m=+0.172433435 container remove 54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:55:26 compute-0 systemd[1]: libpod-conmon-54fb5fb85e3b24305d38ab318f938b4b12c9dc252c2f17f6db4d667afd31ea97.scope: Deactivated successfully.
Feb 28 10:55:26 compute-0 ceph-mon[76304]: pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 28 10:55:26 compute-0 podman[388159]: 2026-02-28 10:55:26.981198214 +0000 UTC m=+0.063921872 container create 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:55:27 compute-0 systemd[1]: Started libpod-conmon-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope.
Feb 28 10:55:27 compute-0 podman[388159]: 2026-02-28 10:55:26.955260419 +0000 UTC m=+0.037984137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:27 compute-0 podman[388159]: 2026-02-28 10:55:27.093771172 +0000 UTC m=+0.176494850 container init 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 10:55:27 compute-0 podman[388159]: 2026-02-28 10:55:27.104954319 +0000 UTC m=+0.187677957 container start 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 10:55:27 compute-0 podman[388159]: 2026-02-28 10:55:27.108496949 +0000 UTC m=+0.191220627 container attach 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]: {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     "0": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "devices": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "/dev/loop3"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             ],
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_name": "ceph_lv0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_size": "21470642176",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "name": "ceph_lv0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "tags": {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.crush_device_class": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.encrypted": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_id": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.vdo": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.with_tpm": "0"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             },
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "vg_name": "ceph_vg0"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         }
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     ],
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     "1": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "devices": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "/dev/loop4"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             ],
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_name": "ceph_lv1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_size": "21470642176",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "name": "ceph_lv1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "tags": {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.crush_device_class": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.encrypted": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_id": "1",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.vdo": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.with_tpm": "0"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             },
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "vg_name": "ceph_vg1"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         }
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     ],
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     "2": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "devices": [
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "/dev/loop5"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             ],
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_name": "ceph_lv2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_size": "21470642176",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "name": "ceph_lv2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "tags": {
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.cluster_name": "ceph",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.crush_device_class": "",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.encrypted": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.objectstore": "bluestore",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osd_id": "2",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.vdo": "0",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:                 "ceph.with_tpm": "0"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             },
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "type": "block",
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:             "vg_name": "ceph_vg2"
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:         }
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]:     ]
Feb 28 10:55:27 compute-0 gallant_keldysh[388175]: }
Feb 28 10:55:27 compute-0 systemd[1]: libpod-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope: Deactivated successfully.
Feb 28 10:55:27 compute-0 podman[388184]: 2026-02-28 10:55:27.449692192 +0000 UTC m=+0.041630700 container died 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 10:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac2cc34a7bf9f9212d40928f6ddd249c185858b54ea11c32f2b0950e890a8ed-merged.mount: Deactivated successfully.
Feb 28 10:55:27 compute-0 podman[388184]: 2026-02-28 10:55:27.489098518 +0000 UTC m=+0.081036936 container remove 52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 10:55:27 compute-0 systemd[1]: libpod-conmon-52049f278d9d2d4f94e1695a1dc26471478f884bf1d77244a64edad5ad1d274c.scope: Deactivated successfully.
Feb 28 10:55:27 compute-0 sudo[388083]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 10:55:27 compute-0 sudo[388200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:55:27 compute-0 sudo[388200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:27 compute-0 sudo[388200]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:27 compute-0 sudo[388225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:55:27 compute-0 sudo[388225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:27 compute-0 nova_compute[243452]: 2026-02-28 10:55:27.944 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.015207818 +0000 UTC m=+0.042262838 container create bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:55:28 compute-0 systemd[1]: Started libpod-conmon-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope.
Feb 28 10:55:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:27.992837785 +0000 UTC m=+0.019892825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.10072621 +0000 UTC m=+0.127781290 container init bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.111704401 +0000 UTC m=+0.138759421 container start bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.11554976 +0000 UTC m=+0.142604870 container attach bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:55:28 compute-0 vibrant_shaw[388277]: 167 167
Feb 28 10:55:28 compute-0 systemd[1]: libpod-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope: Deactivated successfully.
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.116840136 +0000 UTC m=+0.143895186 container died bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 10:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e0820f8f1f9f0cf9ed159786b8c6661599f39480004de9097b9a3348a1700aa-merged.mount: Deactivated successfully.
Feb 28 10:55:28 compute-0 podman[388261]: 2026-02-28 10:55:28.156327715 +0000 UTC m=+0.183382765 container remove bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:55:28 compute-0 systemd[1]: libpod-conmon-bd17eaab93037d76df2cf712e696d1497e9d58419c204083090929810acf2c74.scope: Deactivated successfully.
Feb 28 10:55:28 compute-0 podman[388301]: 2026-02-28 10:55:28.315955016 +0000 UTC m=+0.053775784 container create 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:55:28 compute-0 systemd[1]: Started libpod-conmon-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope.
Feb 28 10:55:28 compute-0 podman[388301]: 2026-02-28 10:55:28.293021166 +0000 UTC m=+0.030842004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:55:28 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:55:28 compute-0 podman[388301]: 2026-02-28 10:55:28.426827686 +0000 UTC m=+0.164648464 container init 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 10:55:28 compute-0 podman[388301]: 2026-02-28 10:55:28.433836794 +0000 UTC m=+0.171657542 container start 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 10:55:28 compute-0 podman[388301]: 2026-02-28 10:55:28.438156596 +0000 UTC m=+0.175977424 container attach 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:55:28 compute-0 ceph-mon[76304]: pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 10:55:29 compute-0 lvm[388395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:55:29 compute-0 lvm[388395]: VG ceph_vg0 finished
Feb 28 10:55:29 compute-0 lvm[388398]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:55:29 compute-0 lvm[388398]: VG ceph_vg1 finished
Feb 28 10:55:29 compute-0 lvm[388400]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:55:29 compute-0 lvm[388400]: VG ceph_vg2 finished
Feb 28 10:55:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:29 compute-0 nifty_keller[388319]: {}
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:55:29
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'vms']
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:55:29 compute-0 systemd[1]: libpod-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Deactivated successfully.
Feb 28 10:55:29 compute-0 podman[388301]: 2026-02-28 10:55:29.250131072 +0000 UTC m=+0.987951830 container died 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:55:29 compute-0 systemd[1]: libpod-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Consumed 1.183s CPU time.
Feb 28 10:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9adca9e92ae04ee11fdf96e24ff461278e2053f12f5fd991bd55c562314a6625-merged.mount: Deactivated successfully.
Feb 28 10:55:29 compute-0 podman[388301]: 2026-02-28 10:55:29.295227519 +0000 UTC m=+1.033048277 container remove 3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:55:29 compute-0 systemd[1]: libpod-conmon-3d429c9f6699bf7bf13a597ca1cbec34e7c161af53672b05b2464db46db06d92.scope: Deactivated successfully.
Feb 28 10:55:29 compute-0 sudo[388225]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:55:29 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:55:29 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:29 compute-0 sudo[388417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:55:29 compute-0 sudo[388417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:55:29 compute-0 sudo[388417]: pam_unix(sudo:session): session closed for user root
Feb 28 10:55:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 10:55:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:29.708 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:55:29 compute-0 nova_compute[243452]: 2026-02-28 10:55:29.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:29 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:29.710 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:55:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:55:30 compute-0 ceph-mon[76304]: pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:55:30 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:30.712 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:55:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:55:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Feb 28 10:55:32 compute-0 ceph-mon[76304]: pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Feb 28 10:55:32 compute-0 nova_compute[243452]: 2026-02-28 10:55:32.946 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 10:55:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:34 compute-0 ceph-mon[76304]: pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 10:55:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Feb 28 10:55:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb 28 10:55:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb 28 10:55:36 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb 28 10:55:36 compute-0 ceph-mon[76304]: pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Feb 28 10:55:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:37 compute-0 ceph-mon[76304]: osdmap e299: 3 total, 3 up, 3 in
Feb 28 10:55:37 compute-0 nova_compute[243452]: 2026-02-28 10:55:37.949 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:38 compute-0 ceph-mon[76304]: pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 97 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 920 B/s wr, 20 op/s
Feb 28 10:55:40 compute-0 podman[388444]: 2026-02-28 10:55:40.130193364 +0000 UTC m=+0.063172170 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 10:55:40 compute-0 podman[388443]: 2026-02-28 10:55:40.175751414 +0000 UTC m=+0.108730370 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 10:55:40 compute-0 ceph-mon[76304]: pgmap v2754: 305 pgs: 305 active+clean; 97 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 920 B/s wr, 20 op/s
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5174751830704939e-05 of space, bias 1.0, pg target 0.0045524255492114815 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0015822959425952849 of space, bias 1.0, pg target 0.47468878277858545 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.25983564234373e-07 of space, bias 4.0, pg target 0.0008711802770812475 quantized to 16 (current 16)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:55:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:42 compute-0 ceph-mon[76304]: pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:42 compute-0 nova_compute[243452]: 2026-02-28 10:55:42.951 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Feb 28 10:55:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Feb 28 10:55:44 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Feb 28 10:55:44 compute-0 ceph-mon[76304]: pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:44 compute-0 ceph-mon[76304]: osdmap e300: 3 total, 3 up, 3 in
Feb 28 10:55:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 28 10:55:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:55:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:55:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:55:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:55:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:55:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3913974501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:55:46 compute-0 ceph-mon[76304]: pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Feb 28 10:55:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.953 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.981 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:47 compute-0 nova_compute[243452]: 2026-02-28 10:55:47.982 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:55:48 compute-0 ceph-mon[76304]: pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 28 10:55:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Feb 28 10:55:50 compute-0 ceph-mon[76304]: pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Feb 28 10:55:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:52 compute-0 ceph-mon[76304]: pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:52 compute-0 nova_compute[243452]: 2026-02-28 10:55:52.983 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:55:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:54 compute-0 ceph-mon[76304]: pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:55 compute-0 nova_compute[243452]: 2026-02-28 10:55:55.982 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:56 compute-0 ceph-mon[76304]: pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:57 compute-0 sshd-session[388488]: Received disconnect from 103.217.144.161 port 59646:11: Bye Bye [preauth]
Feb 28 10:55:57 compute-0 sshd-session[388488]: Disconnected from authenticating user root 103.217.144.161 port 59646 [preauth]
Feb 28 10:55:57 compute-0 nova_compute[243452]: 2026-02-28 10:55:57.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:57 compute-0 nova_compute[243452]: 2026-02-28 10:55:57.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:55:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.898 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:55:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.899 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:55:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:55:57.900 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:55:57 compute-0 nova_compute[243452]: 2026-02-28 10:55:57.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:55:58 compute-0 ceph-mon[76304]: pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:55:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:55:59 compute-0 nova_compute[243452]: 2026-02-28 10:55:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:59 compute-0 nova_compute[243452]: 2026-02-28 10:55:59.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:55:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:00 compute-0 nova_compute[243452]: 2026-02-28 10:56:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:00 compute-0 ceph-mon[76304]: pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:02 compute-0 nova_compute[243452]: 2026-02-28 10:56:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:02 compute-0 nova_compute[243452]: 2026-02-28 10:56:02.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:02 compute-0 ceph-mon[76304]: pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:02 compute-0 nova_compute[243452]: 2026-02-28 10:56:02.988 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:04 compute-0 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:04 compute-0 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:56:04 compute-0 nova_compute[243452]: 2026-02-28 10:56:04.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:56:04 compute-0 nova_compute[243452]: 2026-02-28 10:56:04.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:56:04 compute-0 ceph-mon[76304]: pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:06 compute-0 ceph-mon[76304]: pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.990 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:56:07 compute-0 nova_compute[243452]: 2026-02-28 10:56:07.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.351 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:56:08 compute-0 ceph-mon[76304]: pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:56:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842658606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:56:08 compute-0 nova_compute[243452]: 2026-02-28 10:56:08.890 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.066 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.067 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.068 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.142 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.163 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:56:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:56:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347580519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.736 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.742 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.781 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.783 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:56:09 compute-0 nova_compute[243452]: 2026-02-28 10:56:09.783 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:56:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3842658606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:56:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/347580519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:56:10 compute-0 ceph-mon[76304]: pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:11 compute-0 podman[388535]: 2026-02-28 10:56:11.117907993 +0000 UTC m=+0.050145282 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0)
Feb 28 10:56:11 compute-0 podman[388534]: 2026-02-28 10:56:11.179792035 +0000 UTC m=+0.108759231 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:56:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:12 compute-0 nova_compute[243452]: 2026-02-28 10:56:12.779 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:12 compute-0 ceph-mon[76304]: pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:12 compute-0 nova_compute[243452]: 2026-02-28 10:56:12.992 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:14 compute-0 ceph-mon[76304]: pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:16 compute-0 ceph-mon[76304]: pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:17 compute-0 nova_compute[243452]: 2026-02-28 10:56:17.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:17 compute-0 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:17 compute-0 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:56:17 compute-0 nova_compute[243452]: 2026-02-28 10:56:17.997 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:56:17 compute-0 nova_compute[243452]: 2026-02-28 10:56:17.998 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:56:18 compute-0 nova_compute[243452]: 2026-02-28 10:56:18.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:18 compute-0 ceph-mon[76304]: pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.930132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179930197, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2076, "num_deletes": 253, "total_data_size": 3452590, "memory_usage": 3505664, "flush_reason": "Manual Compaction"}
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179947788, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3394975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56680, "largest_seqno": 58755, "table_properties": {"data_size": 3385415, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19046, "raw_average_key_size": 20, "raw_value_size": 3366463, "raw_average_value_size": 3577, "num_data_blocks": 270, "num_entries": 941, "num_filter_entries": 941, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772275959, "oldest_key_time": 1772275959, "file_creation_time": 1772276179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 17713 microseconds, and 9640 cpu microseconds.
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.947852) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3394975 bytes OK
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.947879) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950547) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950561) EVENT_LOG_v1 {"time_micros": 1772276179950556, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.950584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3443925, prev total WAL file size 3443925, number of live WAL files 2.
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.951274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3315KB)], [134(9594KB)]
Feb 28 10:56:19 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276179951333, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13219260, "oldest_snapshot_seqno": -1}
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7948 keys, 11476341 bytes, temperature: kUnknown
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180010856, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11476341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11422758, "index_size": 32606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19909, "raw_key_size": 206913, "raw_average_key_size": 26, "raw_value_size": 11280609, "raw_average_value_size": 1419, "num_data_blocks": 1275, "num_entries": 7948, "num_filter_entries": 7948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.011143) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11476341 bytes
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.012659) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.9 rd, 192.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.4 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8470, records dropped: 522 output_compression: NoCompression
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.012675) EVENT_LOG_v1 {"time_micros": 1772276180012667, "job": 82, "event": "compaction_finished", "compaction_time_micros": 59578, "compaction_time_cpu_micros": 36964, "output_level": 6, "num_output_files": 1, "total_output_size": 11476341, "num_input_records": 8470, "num_output_records": 7948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180013141, "job": 82, "event": "table_file_deletion", "file_number": 136}
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276180014331, "job": 82, "event": "table_file_deletion", "file_number": 134}
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:19.951183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:56:20.014468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:56:20 compute-0 ceph-mon[76304]: pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:22 compute-0 ceph-mon[76304]: pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:23 compute-0 nova_compute[243452]: 2026-02-28 10:56:22.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:24 compute-0 ceph-mon[76304]: pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:26 compute-0 ceph-mon[76304]: pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:28 compute-0 nova_compute[243452]: 2026-02-28 10:56:28.001 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:28 compute-0 ceph-mon[76304]: pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:56:29
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'volumes', 'images', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:56:29 compute-0 sudo[388578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:56:29 compute-0 sudo[388578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:29 compute-0 sudo[388578]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:29 compute-0 sudo[388603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:56:29 compute-0 sudo[388603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:29 compute-0 sudo[388603]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:56:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:56:30 compute-0 sudo[388659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:56:30 compute-0 sudo[388659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:30 compute-0 sudo[388659]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:30 compute-0 sudo[388684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:56:30 compute-0 sudo[388684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.564981249 +0000 UTC m=+0.054570776 container create 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:56:30 compute-0 systemd[1]: Started libpod-conmon-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope.
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.54487869 +0000 UTC m=+0.034468227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.663546841 +0000 UTC m=+0.153136408 container init 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.673508823 +0000 UTC m=+0.163098370 container start 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.6780031 +0000 UTC m=+0.167592657 container attach 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:56:30 compute-0 youthful_turing[388738]: 167 167
Feb 28 10:56:30 compute-0 systemd[1]: libpod-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope: Deactivated successfully.
Feb 28 10:56:30 compute-0 conmon[388738]: conmon 4a1553391339b31f1d16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope/container/memory.events
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.681142999 +0000 UTC m=+0.170732546 container died 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 10:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a446289184b907c44eeadd2549a8672c7318bd3d86cd11bfea418c60b0859ecd-merged.mount: Deactivated successfully.
Feb 28 10:56:30 compute-0 podman[388721]: 2026-02-28 10:56:30.737004041 +0000 UTC m=+0.226593568 container remove 4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_turing, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:56:30 compute-0 systemd[1]: libpod-conmon-4a1553391339b31f1d1663962a546ce8e3a2a406e4d9353a8247d51b01d6eb29.scope: Deactivated successfully.
Feb 28 10:56:30 compute-0 podman[388762]: 2026-02-28 10:56:30.942352957 +0000 UTC m=+0.063571901 container create 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:56:30 compute-0 systemd[1]: Started libpod-conmon-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope.
Feb 28 10:56:30 compute-0 ceph-mon[76304]: pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:56:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:30.922047432 +0000 UTC m=+0.043266286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:31.064040123 +0000 UTC m=+0.185258947 container init 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:31.072296127 +0000 UTC m=+0.193514971 container start 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:31.0769923 +0000 UTC m=+0.198211144 container attach 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 10:56:31 compute-0 cool_satoshi[388780]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:56:31 compute-0 cool_satoshi[388780]: --> All data devices are unavailable
Feb 28 10:56:31 compute-0 systemd[1]: libpod-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope: Deactivated successfully.
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:31.55929611 +0000 UTC m=+0.680514974 container died 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d9fbb4726de73256e4568e92d568b46cde8fc2a4cbbe7650a447e4dd442cf3b-merged.mount: Deactivated successfully.
Feb 28 10:56:31 compute-0 podman[388762]: 2026-02-28 10:56:31.614430251 +0000 UTC m=+0.735649055 container remove 904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:56:31 compute-0 systemd[1]: libpod-conmon-904508c7553bf9bfa8a9d4bc994b78e5452e77f58fea2516d0b176336429880c.scope: Deactivated successfully.
Feb 28 10:56:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:31 compute-0 sudo[388684]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:31 compute-0 sudo[388812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:56:31 compute-0 sudo[388812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:31 compute-0 sudo[388812]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:31 compute-0 sudo[388837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:56:31 compute-0 sudo[388837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.170378226 +0000 UTC m=+0.062662226 container create a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:56:32 compute-0 systemd[1]: Started libpod-conmon-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope.
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.140587452 +0000 UTC m=+0.032871262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.256552937 +0000 UTC m=+0.148836707 container init a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.264885553 +0000 UTC m=+0.157169293 container start a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.268434283 +0000 UTC m=+0.160718023 container attach a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:56:32 compute-0 fervent_saha[388892]: 167 167
Feb 28 10:56:32 compute-0 systemd[1]: libpod-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope: Deactivated successfully.
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.271820289 +0000 UTC m=+0.164104029 container died a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-899ecc0b39653fd37c7955daf1c7def7cd475c2873ae07d1f3769e01be6434c5-merged.mount: Deactivated successfully.
Feb 28 10:56:32 compute-0 podman[388876]: 2026-02-28 10:56:32.317330858 +0000 UTC m=+0.209614608 container remove a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:56:32 compute-0 systemd[1]: libpod-conmon-a7e0831f5ece1df6073302b6da54601ae1ccb7cf3cda41c3cc5b115958d7f0f4.scope: Deactivated successfully.
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.502408039 +0000 UTC m=+0.060578626 container create 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 10:56:32 compute-0 systemd[1]: Started libpod-conmon-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope.
Feb 28 10:56:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.482384212 +0000 UTC m=+0.040554839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.587998704 +0000 UTC m=+0.146169281 container init 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.595115975 +0000 UTC m=+0.153286552 container start 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.600025964 +0000 UTC m=+0.158196541 container attach 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:56:32 compute-0 pedantic_gates[388934]: {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     "0": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "devices": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "/dev/loop3"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             ],
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_name": "ceph_lv0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_size": "21470642176",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "name": "ceph_lv0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "tags": {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_name": "ceph",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.crush_device_class": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.encrypted": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.objectstore": "bluestore",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_id": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.vdo": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.with_tpm": "0"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             },
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "vg_name": "ceph_vg0"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         }
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     ],
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     "1": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "devices": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "/dev/loop4"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             ],
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_name": "ceph_lv1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_size": "21470642176",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "name": "ceph_lv1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "tags": {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_name": "ceph",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.crush_device_class": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.encrypted": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.objectstore": "bluestore",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_id": "1",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.vdo": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.with_tpm": "0"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             },
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "vg_name": "ceph_vg1"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         }
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     ],
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     "2": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "devices": [
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "/dev/loop5"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             ],
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_name": "ceph_lv2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_size": "21470642176",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "name": "ceph_lv2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "tags": {
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.cluster_name": "ceph",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.crush_device_class": "",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.encrypted": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.objectstore": "bluestore",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osd_id": "2",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.vdo": "0",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:                 "ceph.with_tpm": "0"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             },
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "type": "block",
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:             "vg_name": "ceph_vg2"
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:         }
Feb 28 10:56:32 compute-0 pedantic_gates[388934]:     ]
Feb 28 10:56:32 compute-0 pedantic_gates[388934]: }
Feb 28 10:56:32 compute-0 systemd[1]: libpod-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope: Deactivated successfully.
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.866528942 +0000 UTC m=+0.424699519 container died 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-663b5c87bf46ffd986115370bd5f359a8d1078a9bfd33555b5de22eee60683a7-merged.mount: Deactivated successfully.
Feb 28 10:56:32 compute-0 podman[388916]: 2026-02-28 10:56:32.914951822 +0000 UTC m=+0.473122399 container remove 7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 10:56:32 compute-0 systemd[1]: libpod-conmon-7c6d92c986bdbf39a7571103f4c9e4e42654c71884d758bfa434e13392bfcc44.scope: Deactivated successfully.
Feb 28 10:56:32 compute-0 sudo[388837]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:32 compute-0 ceph-mon[76304]: pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:33 compute-0 nova_compute[243452]: 2026-02-28 10:56:33.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:33 compute-0 sudo[388958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:56:33 compute-0 sudo[388958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:33 compute-0 sudo[388958]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:33 compute-0 sudo[388983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:56:33 compute-0 sudo[388983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.342319756 +0000 UTC m=+0.054611288 container create 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:56:33 compute-0 systemd[1]: Started libpod-conmon-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope.
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.317156133 +0000 UTC m=+0.029447715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.434870317 +0000 UTC m=+0.147161849 container init 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.443949024 +0000 UTC m=+0.156240556 container start 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.449439599 +0000 UTC m=+0.161731181 container attach 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:56:33 compute-0 nice_faraday[389036]: 167 167
Feb 28 10:56:33 compute-0 systemd[1]: libpod-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope: Deactivated successfully.
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.450781127 +0000 UTC m=+0.163072659 container died 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 28 10:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b464e3b8cfa4b93c3a384f0617ab2ade8818fcc54a140571a88ef4e22f59063b-merged.mount: Deactivated successfully.
Feb 28 10:56:33 compute-0 podman[389020]: 2026-02-28 10:56:33.49888543 +0000 UTC m=+0.211176932 container remove 84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 10:56:33 compute-0 systemd[1]: libpod-conmon-84c70abce96afa73e4d8becd850699fae9a61068c2c91a1d4f2654ffa1f33f7d.scope: Deactivated successfully.
Feb 28 10:56:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:33 compute-0 podman[389060]: 2026-02-28 10:56:33.65637725 +0000 UTC m=+0.047419934 container create 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 10:56:33 compute-0 systemd[1]: Started libpod-conmon-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope.
Feb 28 10:56:33 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:56:33 compute-0 podman[389060]: 2026-02-28 10:56:33.639130312 +0000 UTC m=+0.030173016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:56:33 compute-0 podman[389060]: 2026-02-28 10:56:33.751163425 +0000 UTC m=+0.142206129 container init 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 10:56:33 compute-0 podman[389060]: 2026-02-28 10:56:33.765453819 +0000 UTC m=+0.156496513 container start 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 10:56:33 compute-0 podman[389060]: 2026-02-28 10:56:33.772733205 +0000 UTC m=+0.163775879 container attach 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 10:56:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:34 compute-0 lvm[389153]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:56:34 compute-0 lvm[389153]: VG ceph_vg0 finished
Feb 28 10:56:34 compute-0 lvm[389156]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:56:34 compute-0 lvm[389156]: VG ceph_vg1 finished
Feb 28 10:56:34 compute-0 lvm[389158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:56:34 compute-0 lvm[389158]: VG ceph_vg2 finished
Feb 28 10:56:34 compute-0 relaxed_einstein[389076]: {}
Feb 28 10:56:34 compute-0 systemd[1]: libpod-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Deactivated successfully.
Feb 28 10:56:34 compute-0 systemd[1]: libpod-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Consumed 1.190s CPU time.
Feb 28 10:56:34 compute-0 podman[389060]: 2026-02-28 10:56:34.61892015 +0000 UTC m=+1.009962834 container died 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:56:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ab0bfefe5aaf7cad55558f057f375cab0e2389c36916024e75f17cb28cb1cc-merged.mount: Deactivated successfully.
Feb 28 10:56:34 compute-0 podman[389060]: 2026-02-28 10:56:34.672333933 +0000 UTC m=+1.063376657 container remove 9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 10:56:34 compute-0 systemd[1]: libpod-conmon-9c072894ff4d1dff078eefa5ed0db8401fd6969b26a04440151c2febc72c0229.scope: Deactivated successfully.
Feb 28 10:56:34 compute-0 sudo[388983]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:56:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:56:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:34 compute-0 sudo[389175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:56:34 compute-0 sudo[389175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:56:34 compute-0 sudo[389175]: pam_unix(sudo:session): session closed for user root
Feb 28 10:56:35 compute-0 ceph-mon[76304]: pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:56:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:37 compute-0 ceph-mon[76304]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:38 compute-0 nova_compute[243452]: 2026-02-28 10:56:38.004 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:39 compute-0 ceph-mon[76304]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:41 compute-0 ceph-mon[76304]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:56:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:42 compute-0 podman[389201]: 2026-02-28 10:56:42.14590092 +0000 UTC m=+0.074214163 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:56:42 compute-0 podman[389200]: 2026-02-28 10:56:42.18051049 +0000 UTC m=+0.113273009 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 28 10:56:43 compute-0 nova_compute[243452]: 2026-02-28 10:56:43.008 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:56:43 compute-0 ceph-mon[76304]: pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:45 compute-0 ceph-mon[76304]: pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:56:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:56:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:56:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:56:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:56:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1117919498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:56:47 compute-0 ceph-mon[76304]: pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:48 compute-0 nova_compute[243452]: 2026-02-28 10:56:48.012 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:49 compute-0 ceph-mon[76304]: pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:51 compute-0 ceph-mon[76304]: pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:52 compute-0 ceph-mon[76304]: pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:53 compute-0 nova_compute[243452]: 2026-02-28 10:56:53.013 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:54 compute-0 ceph-mon[76304]: pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:56 compute-0 nova_compute[243452]: 2026-02-28 10:56:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:56 compute-0 ceph-mon[76304]: pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.900 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:56:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:56:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:56:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:56:58 compute-0 nova_compute[243452]: 2026-02-28 10:56:58.015 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:56:58 compute-0 nova_compute[243452]: 2026-02-28 10:56:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:58 compute-0 nova_compute[243452]: 2026-02-28 10:56:58.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:56:58 compute-0 ceph-mon[76304]: pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:56:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:56:59 compute-0 nova_compute[243452]: 2026-02-28 10:56:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:59 compute-0 nova_compute[243452]: 2026-02-28 10:56:59.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:56:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:00 compute-0 nova_compute[243452]: 2026-02-28 10:57:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:00 compute-0 ceph-mon[76304]: pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:02 compute-0 nova_compute[243452]: 2026-02-28 10:57:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:02 compute-0 ceph-mon[76304]: pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:03 compute-0 nova_compute[243452]: 2026-02-28 10:57:03.018 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:04 compute-0 nova_compute[243452]: 2026-02-28 10:57:04.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:04 compute-0 ceph-mon[76304]: pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:05 compute-0 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:05 compute-0 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:57:05 compute-0 nova_compute[243452]: 2026-02-28 10:57:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:57:05 compute-0 nova_compute[243452]: 2026-02-28 10:57:05.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:57:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:06 compute-0 ceph-mon[76304]: pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.020 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.350 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.351 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.352 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.352 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:57:08 compute-0 ceph-mon[76304]: pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:57:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969889939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:57:08 compute-0 nova_compute[243452]: 2026-02-28 10:57:08.928 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.134 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.136 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.136 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.137 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.219 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.221 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:57:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.248 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:57:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/969889939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:57:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:57:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209798470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.817 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.825 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.844 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.845 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:57:09 compute-0 nova_compute[243452]: 2026-02-28 10:57:09.845 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:57:10 compute-0 ceph-mon[76304]: pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3209798470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:57:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:12 compute-0 ceph-mon[76304]: pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.025 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.026 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:13 compute-0 podman[389290]: 2026-02-28 10:57:13.150054323 +0000 UTC m=+0.076498477 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:57:13 compute-0 podman[389289]: 2026-02-28 10:57:13.170664637 +0000 UTC m=+0.103078630 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:57:13 compute-0 nova_compute[243452]: 2026-02-28 10:57:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:14 compute-0 ceph-mon[76304]: pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:16 compute-0 ceph-mon[76304]: pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:18 compute-0 nova_compute[243452]: 2026-02-28 10:57:18.027 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:18 compute-0 ceph-mon[76304]: pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:20 compute-0 ceph-mon[76304]: pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:22 compute-0 ceph-mon[76304]: pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:23 compute-0 nova_compute[243452]: 2026-02-28 10:57:23.029 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.231404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244231469, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 250, "total_data_size": 975700, "memory_usage": 988336, "flush_reason": "Manual Compaction"}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244237922, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 620764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58756, "largest_seqno": 59505, "table_properties": {"data_size": 617488, "index_size": 1119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8667, "raw_average_key_size": 20, "raw_value_size": 610599, "raw_average_value_size": 1453, "num_data_blocks": 50, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276180, "oldest_key_time": 1772276180, "file_creation_time": 1772276244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 6582 microseconds, and 3414 cpu microseconds.
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.237987) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 620764 bytes OK
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.238013) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239503) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239524) EVENT_LOG_v1 {"time_micros": 1772276244239517, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239552) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 971880, prev total WAL file size 971880, number of live WAL files 2.
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.240038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323532' seq:72057594037927935, type:22 .. '6D6772737461740032353033' seq:0, type:0; will stop at (end)
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(606KB)], [137(10MB)]
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244240119, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12097105, "oldest_snapshot_seqno": -1}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7882 keys, 9093214 bytes, temperature: kUnknown
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244295266, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9093214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9044108, "index_size": 28321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205707, "raw_average_key_size": 26, "raw_value_size": 8906972, "raw_average_value_size": 1130, "num_data_blocks": 1098, "num_entries": 7882, "num_filter_entries": 7882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.295531) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9093214 bytes
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.296888) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.0 rd, 164.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(34.1) write-amplify(14.6) OK, records in: 8368, records dropped: 486 output_compression: NoCompression
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.296914) EVENT_LOG_v1 {"time_micros": 1772276244296901, "job": 84, "event": "compaction_finished", "compaction_time_micros": 55227, "compaction_time_cpu_micros": 33162, "output_level": 6, "num_output_files": 1, "total_output_size": 9093214, "num_input_records": 8368, "num_output_records": 7882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244297178, "job": 84, "event": "table_file_deletion", "file_number": 139}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276244298630, "job": 84, "event": "table_file_deletion", "file_number": 137}
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.239961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:57:24.298675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:57:24 compute-0 ceph-mon[76304]: pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:26 compute-0 ceph-mon[76304]: pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:28 compute-0 nova_compute[243452]: 2026-02-28 10:57:28.031 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:28 compute-0 ceph-mon[76304]: pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:57:29
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', '.mgr', 'images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:57:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:57:30 compute-0 sshd-session[389335]: Accepted publickey for zuul from 192.168.122.30 port 38510 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 10:57:30 compute-0 systemd[1]: Starting dnf makecache...
Feb 28 10:57:30 compute-0 systemd-logind[815]: New session 51 of user zuul.
Feb 28 10:57:30 compute-0 systemd[1]: Started Session 51 of User zuul.
Feb 28 10:57:30 compute-0 sshd-session[389335]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 10:57:30 compute-0 ceph-mon[76304]: pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:57:30 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:57:31 compute-0 dnf[389337]: Metadata cache refreshed recently.
Feb 28 10:57:31 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 28 10:57:31 compute-0 systemd[1]: Finished dnf makecache.
Feb 28 10:57:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:32 compute-0 ceph-mon[76304]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.034 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.036 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.062 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.063 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:57:33 compute-0 nova_compute[243452]: 2026-02-28 10:57:33.444 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:33.443 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:57:33 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:33.444 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:57:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:34 compute-0 sshd-session[389386]: Invalid user sol from 45.148.10.240 port 56808
Feb 28 10:57:34 compute-0 ceph-mon[76304]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:34 compute-0 sudo[389435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:57:34 compute-0 sudo[389435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:34 compute-0 sudo[389435]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:34 compute-0 sshd-session[389386]: Connection closed by invalid user sol 45.148.10.240 port 56808 [preauth]
Feb 28 10:57:34 compute-0 sudo[389505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:57:34 compute-0 sudo[389505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:35 compute-0 sudo[389505]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:57:35 compute-0 sudo[389653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:57:35 compute-0 sudo[389653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:35 compute-0 sudo[389653]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:35 compute-0 sudo[389678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:57:35 compute-0 sudo[389678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:57:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.074602823 +0000 UTC m=+0.083777124 container create 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:57:36 compute-0 systemd[1]: Started libpod-conmon-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope.
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.043570854 +0000 UTC m=+0.052745205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.173521754 +0000 UTC m=+0.182696065 container init 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.183455165 +0000 UTC m=+0.192629476 container start 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.187563172 +0000 UTC m=+0.196737473 container attach 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 10:57:36 compute-0 confident_wilbur[389730]: 167 167
Feb 28 10:57:36 compute-0 systemd[1]: libpod-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope: Deactivated successfully.
Feb 28 10:57:36 compute-0 conmon[389730]: conmon 2a3198a839c22a6a31b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope/container/memory.events
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.19278464 +0000 UTC m=+0.201958941 container died 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 10:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-69c8cd28fa32f3d27fc75b45c98577cec41f31d80d8ff511e3d26d91cfde3190-merged.mount: Deactivated successfully.
Feb 28 10:57:36 compute-0 podman[389714]: 2026-02-28 10:57:36.244102863 +0000 UTC m=+0.253277174 container remove 2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:57:36 compute-0 systemd[1]: libpod-conmon-2a3198a839c22a6a31b2e1be9ea6892da9e58e457b627981d55bc3b3cb44666b.scope: Deactivated successfully.
Feb 28 10:57:36 compute-0 nova_compute[243452]: 2026-02-28 10:57:36.421 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:36 compute-0 nova_compute[243452]: 2026-02-28 10:57:36.423 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 10:57:36 compute-0 podman[389754]: 2026-02-28 10:57:36.436945504 +0000 UTC m=+0.054246917 container create 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:57:36 compute-0 systemd[1]: Started libpod-conmon-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope.
Feb 28 10:57:36 compute-0 podman[389754]: 2026-02-28 10:57:36.415732904 +0000 UTC m=+0.033034397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:36 compute-0 podman[389754]: 2026-02-28 10:57:36.536734731 +0000 UTC m=+0.154036164 container init 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 10:57:36 compute-0 podman[389754]: 2026-02-28 10:57:36.546993871 +0000 UTC m=+0.164295304 container start 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:57:36 compute-0 podman[389754]: 2026-02-28 10:57:36.551087397 +0000 UTC m=+0.168388830 container attach 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:57:36 compute-0 nova_compute[243452]: 2026-02-28 10:57:36.555 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 10:57:36 compute-0 ceph-mon[76304]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:37 compute-0 nifty_gagarin[389771]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:57:37 compute-0 nifty_gagarin[389771]: --> All data devices are unavailable
Feb 28 10:57:37 compute-0 systemd[1]: libpod-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope: Deactivated successfully.
Feb 28 10:57:37 compute-0 podman[389754]: 2026-02-28 10:57:37.101361382 +0000 UTC m=+0.718662825 container died 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 28 10:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fef7177af9a303eed8b3c6a373368300ad33f3eb3b99d4467da47f216ebea6ec-merged.mount: Deactivated successfully.
Feb 28 10:57:37 compute-0 podman[389754]: 2026-02-28 10:57:37.146329995 +0000 UTC m=+0.763631398 container remove 611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:57:37 compute-0 systemd[1]: libpod-conmon-611751ef0f1d82209745363c2a73ce0c251bb2aaf49087b829cdd6206dc9852e.scope: Deactivated successfully.
Feb 28 10:57:37 compute-0 sudo[389678]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:37 compute-0 sudo[389803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:57:37 compute-0 sudo[389803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:37 compute-0 sudo[389803]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:37 compute-0 sudo[389828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:57:37 compute-0 sudo[389828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:37 compute-0 sshd-session[389339]: Connection closed by 192.168.122.30 port 38510
Feb 28 10:57:37 compute-0 sshd-session[389335]: pam_unix(sshd:session): session closed for user zuul
Feb 28 10:57:37 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Feb 28 10:57:37 compute-0 systemd-logind[815]: Session 51 logged out. Waiting for processes to exit.
Feb 28 10:57:37 compute-0 systemd-logind[815]: Removed session 51.
Feb 28 10:57:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.675364987 +0000 UTC m=+0.056126431 container create b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 10:57:37 compute-0 systemd[1]: Started libpod-conmon-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope.
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.649280628 +0000 UTC m=+0.030042122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.783843939 +0000 UTC m=+0.164605393 container init b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.792847964 +0000 UTC m=+0.173609408 container start b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.796831267 +0000 UTC m=+0.177592721 container attach b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:57:37 compute-0 sleepy_nash[389903]: 167 167
Feb 28 10:57:37 compute-0 systemd[1]: libpod-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope: Deactivated successfully.
Feb 28 10:57:37 compute-0 conmon[389903]: conmon b835a65abf4a923e35e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope/container/memory.events
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.800773209 +0000 UTC m=+0.181534643 container died b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-dafef8885ab8f7e94fa74354201abb948bf3f00c32cae2c7372f7e528a4502de-merged.mount: Deactivated successfully.
Feb 28 10:57:37 compute-0 podman[389887]: 2026-02-28 10:57:37.849337794 +0000 UTC m=+0.230099208 container remove b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:57:37 compute-0 systemd[1]: libpod-conmon-b835a65abf4a923e35e98075958adaeb1a9464488d667cce4749d3dfe88df6e0.scope: Deactivated successfully.
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.039682115 +0000 UTC m=+0.055984757 container create d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 10:57:38 compute-0 nova_compute[243452]: 2026-02-28 10:57:38.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:38 compute-0 systemd[1]: Started libpod-conmon-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope.
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.013452802 +0000 UTC m=+0.029755534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:38 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.13485982 +0000 UTC m=+0.151162502 container init d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.146419417 +0000 UTC m=+0.162722089 container start d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.151232234 +0000 UTC m=+0.167534876 container attach d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]: {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     "0": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "devices": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "/dev/loop3"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             ],
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_name": "ceph_lv0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_size": "21470642176",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "name": "ceph_lv0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "tags": {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_name": "ceph",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.crush_device_class": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.encrypted": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.objectstore": "bluestore",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_id": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.vdo": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.with_tpm": "0"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             },
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "vg_name": "ceph_vg0"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         }
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     ],
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     "1": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "devices": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "/dev/loop4"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             ],
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_name": "ceph_lv1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_size": "21470642176",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "name": "ceph_lv1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "tags": {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_name": "ceph",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.crush_device_class": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.encrypted": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.objectstore": "bluestore",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_id": "1",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.vdo": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.with_tpm": "0"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             },
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "vg_name": "ceph_vg1"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         }
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     ],
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     "2": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "devices": [
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "/dev/loop5"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             ],
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_name": "ceph_lv2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_size": "21470642176",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "name": "ceph_lv2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "tags": {
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.cluster_name": "ceph",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.crush_device_class": "",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.encrypted": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.objectstore": "bluestore",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osd_id": "2",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.vdo": "0",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:                 "ceph.with_tpm": "0"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             },
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "type": "block",
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:             "vg_name": "ceph_vg2"
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:         }
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]:     ]
Feb 28 10:57:38 compute-0 compassionate_taussig[389946]: }
Feb 28 10:57:38 compute-0 systemd[1]: libpod-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope: Deactivated successfully.
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.526870502 +0000 UTC m=+0.543173184 container died d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2fad089bb5d311706c99995c5c7cf8ef4ade3c7170b7ca92d85e0a2b50667c-merged.mount: Deactivated successfully.
Feb 28 10:57:38 compute-0 podman[389929]: 2026-02-28 10:57:38.594017774 +0000 UTC m=+0.610320436 container remove d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:57:38 compute-0 systemd[1]: libpod-conmon-d3313c826a4d51f006aa21e6e4bfcc50a54a604405bce44bd59ad4727c633e8d.scope: Deactivated successfully.
Feb 28 10:57:38 compute-0 sudo[389828]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:38 compute-0 sudo[389969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:57:38 compute-0 sudo[389969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:38 compute-0 sudo[389969]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:38 compute-0 sudo[389994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:57:38 compute-0 sudo[389994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:38 compute-0 ceph-mon[76304]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.067213735 +0000 UTC m=+0.053779085 container create 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:57:39 compute-0 systemd[1]: Started libpod-conmon-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope.
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.041835027 +0000 UTC m=+0.028400416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.154416505 +0000 UTC m=+0.140981884 container init 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.161496566 +0000 UTC m=+0.148061905 container start 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 10:57:39 compute-0 relaxed_bouman[390048]: 167 167
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.165265402 +0000 UTC m=+0.151830801 container attach 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:57:39 compute-0 systemd[1]: libpod-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope: Deactivated successfully.
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.16660013 +0000 UTC m=+0.153165469 container died 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeb1cb3e75daac96fc9658a98a45def6c202cbc89700f4d024e019053e299215-merged.mount: Deactivated successfully.
Feb 28 10:57:39 compute-0 podman[390031]: 2026-02-28 10:57:39.208211789 +0000 UTC m=+0.194777138 container remove 1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:57:39 compute-0 systemd[1]: libpod-conmon-1e52d6961924406917e3d49ad417c4c74fcd04e59cb69813790476b031d94e39.scope: Deactivated successfully.
Feb 28 10:57:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:39 compute-0 podman[390073]: 2026-02-28 10:57:39.393421844 +0000 UTC m=+0.059922488 container create 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:57:39 compute-0 systemd[1]: Started libpod-conmon-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope.
Feb 28 10:57:39 compute-0 podman[390073]: 2026-02-28 10:57:39.368878169 +0000 UTC m=+0.035378873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:57:39 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:57:39 compute-0 podman[390073]: 2026-02-28 10:57:39.508476632 +0000 UTC m=+0.174977327 container init 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:57:39 compute-0 podman[390073]: 2026-02-28 10:57:39.518756224 +0000 UTC m=+0.185256868 container start 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:57:39 compute-0 podman[390073]: 2026-02-28 10:57:39.522184451 +0000 UTC m=+0.188685095 container attach 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 10:57:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:40 compute-0 lvm[390166]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:57:40 compute-0 lvm[390166]: VG ceph_vg0 finished
Feb 28 10:57:40 compute-0 lvm[390169]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:57:40 compute-0 lvm[390169]: VG ceph_vg1 finished
Feb 28 10:57:40 compute-0 lvm[390171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:57:40 compute-0 lvm[390171]: VG ceph_vg2 finished
Feb 28 10:57:40 compute-0 boring_cohen[390090]: {}
Feb 28 10:57:40 compute-0 systemd[1]: libpod-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Deactivated successfully.
Feb 28 10:57:40 compute-0 systemd[1]: libpod-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Consumed 1.279s CPU time.
Feb 28 10:57:40 compute-0 podman[390073]: 2026-02-28 10:57:40.385624374 +0000 UTC m=+1.052124988 container died 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d4e30dfe2e02ae17c9d10760f98b414e0d22657854643ad46e2e24f401e6f16-merged.mount: Deactivated successfully.
Feb 28 10:57:40 compute-0 podman[390073]: 2026-02-28 10:57:40.436846705 +0000 UTC m=+1.103347359 container remove 88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 10:57:40 compute-0 systemd[1]: libpod-conmon-88704ba9b2dd684dcf165e3e48c0afef384ecbae680340605dfb76cc8159af32.scope: Deactivated successfully.
Feb 28 10:57:40 compute-0 sudo[389994]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:57:40 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:40 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:57:40 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:40 compute-0 sudo[390187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:57:40 compute-0 sudo[390187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:57:40 compute-0 sudo[390187]: pam_unix(sudo:session): session closed for user root
Feb 28 10:57:40 compute-0 ceph-mon[76304]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:57:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:42 compute-0 ceph-mon[76304]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:43 compute-0 nova_compute[243452]: 2026-02-28 10:57:43.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:43 compute-0 nova_compute[243452]: 2026-02-28 10:57:43.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:43 compute-0 nova_compute[243452]: 2026-02-28 10:57:43.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 10:57:43 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:43.446 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:57:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:44 compute-0 podman[390213]: 2026-02-28 10:57:44.122829234 +0000 UTC m=+0.049718349 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 10:57:44 compute-0 podman[390212]: 2026-02-28 10:57:44.167933412 +0000 UTC m=+0.092669136 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:57:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:44 compute-0 ceph-mon[76304]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:57:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:57:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:57:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:57:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:57:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1548389974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:57:46 compute-0 ceph-mon[76304]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:48 compute-0 nova_compute[243452]: 2026-02-28 10:57:48.069 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:48 compute-0 ceph-mon[76304]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:51 compute-0 ceph-mon[76304]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:52 compute-0 ceph-mon[76304]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:53 compute-0 nova_compute[243452]: 2026-02-28 10:57:53.071 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:54 compute-0 ceph-mon[76304]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:56 compute-0 nova_compute[243452]: 2026-02-28 10:57:56.333 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:56 compute-0 ceph-mon[76304]: pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.901 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:57:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.902 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:57:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:57:57.902 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:57:58 compute-0 nova_compute[243452]: 2026-02-28 10:57:58.075 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:57:58 compute-0 ceph-mon[76304]: pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:57:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:57:59 compute-0 nova_compute[243452]: 2026-02-28 10:57:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:57:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:00 compute-0 nova_compute[243452]: 2026-02-28 10:58:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:00 compute-0 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:00 compute-0 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:00 compute-0 nova_compute[243452]: 2026-02-28 10:58:00.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:00 compute-0 ceph-mon[76304]: pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:02 compute-0 nova_compute[243452]: 2026-02-28 10:58:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:02 compute-0 ceph-mon[76304]: pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:03 compute-0 nova_compute[243452]: 2026-02-28 10:58:03.076 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:04 compute-0 ceph-mon[76304]: pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:06 compute-0 nova_compute[243452]: 2026-02-28 10:58:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:06 compute-0 ceph-mon[76304]: pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:07 compute-0 nova_compute[243452]: 2026-02-28 10:58:07.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:07 compute-0 nova_compute[243452]: 2026-02-28 10:58:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:58:07 compute-0 nova_compute[243452]: 2026-02-28 10:58:07.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:58:07 compute-0 nova_compute[243452]: 2026-02-28 10:58:07.345 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:58:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:08 compute-0 nova_compute[243452]: 2026-02-28 10:58:08.078 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:08 compute-0 ceph-mon[76304]: pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.347 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.348 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:58:10 compute-0 ceph-mon[76304]: pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:58:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125471960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:58:10 compute-0 nova_compute[243452]: 2026-02-28 10:58:10.917 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.091 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.092 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.093 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.093 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.160 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.161 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.185 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:58:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:58:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596854906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:58:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2125471960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.805 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:58:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1596854906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.814 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.926 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.931 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:58:11 compute-0 nova_compute[243452]: 2026-02-28 10:58:11.931 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:58:12 compute-0 ceph-mon[76304]: pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:13 compute-0 nova_compute[243452]: 2026-02-28 10:58:13.080 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:58:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:14 compute-0 ceph-mon[76304]: pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:14 compute-0 nova_compute[243452]: 2026-02-28 10:58:14.927 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:15 compute-0 podman[390303]: 2026-02-28 10:58:15.148983811 +0000 UTC m=+0.074938683 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 28 10:58:15 compute-0 podman[390302]: 2026-02-28 10:58:15.187714898 +0000 UTC m=+0.115136512 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 28 10:58:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:16 compute-0 ceph-mon[76304]: pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:18 compute-0 nova_compute[243452]: 2026-02-28 10:58:18.083 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:18 compute-0 ceph-mon[76304]: pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:20 compute-0 ceph-mon[76304]: pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:22 compute-0 ceph-mon[76304]: pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.086 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.087 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.088 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:58:23 compute-0 nova_compute[243452]: 2026-02-28 10:58:23.090 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:24 compute-0 ceph-mon[76304]: pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:26 compute-0 ceph-mon[76304]: pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:28 compute-0 nova_compute[243452]: 2026-02-28 10:58:28.089 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:28 compute-0 ceph-mon[76304]: pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:58:29
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms']
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:58:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:58:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:58:30 compute-0 ceph-mon[76304]: pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:58:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:32 compute-0 ceph-mon[76304]: pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:33 compute-0 nova_compute[243452]: 2026-02-28 10:58:33.091 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:34 compute-0 sshd-session[390348]: Accepted publickey for zuul from 192.168.122.30 port 35286 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 10:58:34 compute-0 systemd-logind[815]: New session 52 of user zuul.
Feb 28 10:58:34 compute-0 systemd[1]: Started Session 52 of User zuul.
Feb 28 10:58:34 compute-0 sshd-session[390348]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 10:58:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:34 compute-0 sudo[390421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Feb 28 10:58:34 compute-0 sudo[390421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:34 compute-0 sudo[390421]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:34 compute-0 sudo[390446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Feb 28 10:58:34 compute-0 sudo[390446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:34 compute-0 sudo[390446]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:35 compute-0 ceph-mon[76304]: pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:35 compute-0 sudo[390471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Feb 28 10:58:35 compute-0 sudo[390471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:35 compute-0 sudo[390471]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:35 compute-0 sudo[390496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service edpm_ovn_agent.service
Feb 28 10:58:35 compute-0 sudo[390496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:35 compute-0 sudo[390496]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:35.578 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 10:58:35 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:35.578 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 10:58:35 compute-0 nova_compute[243452]: 2026-02-28 10:58:35.579 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:37 compute-0 ceph-mon[76304]: pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:38 compute-0 nova_compute[243452]: 2026-02-28 10:58:38.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:38 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:38.580 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 10:58:39 compute-0 ceph-mon[76304]: pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:39 compute-0 sshd-session[390521]: Accepted publickey for zuul from 192.168.122.30 port 35294 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 10:58:39 compute-0 systemd-logind[815]: New session 53 of user zuul.
Feb 28 10:58:39 compute-0 systemd[1]: Started Session 53 of User zuul.
Feb 28 10:58:39 compute-0 sshd-session[390521]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 10:58:40 compute-0 sudo[390594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Feb 28 10:58:40 compute-0 sudo[390594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390594]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Feb 28 10:58:40 compute-0 sudo[390620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 groupadd[390622]: group added to /etc/group: name=podman, GID=42479
Feb 28 10:58:40 compute-0 groupadd[390622]: group added to /etc/gshadow: name=podman
Feb 28 10:58:40 compute-0 groupadd[390622]: new group: name=podman, GID=42479
Feb 28 10:58:40 compute-0 sudo[390620]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Feb 28 10:58:40 compute-0 sudo[390628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 usermod[390630]: add 'zuul' to group 'podman'
Feb 28 10:58:40 compute-0 usermod[390630]: add 'zuul' to shadow group 'podman'
Feb 28 10:58:40 compute-0 sudo[390628]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Feb 28 10:58:40 compute-0 sudo[390637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390637]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Feb 28 10:58:40 compute-0 sudo[390640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390640]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Feb 28 10:58:40 compute-0 sudo[390643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390643]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Feb 28 10:58:40 compute-0 sudo[390646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390646]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Feb 28 10:58:40 compute-0 sudo[390649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 sudo[390652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:58:40 compute-0 sudo[390652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:40 compute-0 sudo[390652]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390649]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:40 compute-0 sudo[390680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Feb 28 10:58:40 compute-0 sudo[390680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:40 compute-0 systemd[1]: Reloading.
Feb 28 10:58:40 compute-0 systemd-sysv-generator[390728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 10:58:40 compute-0 systemd-rc-local-generator[390723]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 10:58:41 compute-0 sudo[390680]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 ceph-mon[76304]: pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:41 compute-0 sudo[390677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:58:41 compute-0 sudo[390677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:41 compute-0 sudo[390745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Feb 28 10:58:41 compute-0 sudo[390745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390745]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Feb 28 10:58:41 compute-0 sudo[390749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 systemd[1]: Reloading.
Feb 28 10:58:41 compute-0 systemd-rc-local-generator[390789]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 28 10:58:41 compute-0 systemd-sysv-generator[390792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 28 10:58:41 compute-0 sudo[390677]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:58:41 compute-0 systemd[1]: Starting Podman API Socket...
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:58:41 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:58:41 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:58:41 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 28 10:58:41 compute-0 sudo[390749]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Feb 28 10:58:41 compute-0 sudo[390829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390829]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Feb 28 10:58:41 compute-0 sudo[390849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:58:41 compute-0 sudo[390825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:41 compute-0 sudo[390849]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390825]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Feb 28 10:58:41 compute-0 sudo[390856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390856]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:58:41 compute-0 sudo[390867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Feb 28 10:58:41 compute-0 sudo[390867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390867]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:58:41 compute-0 sudo[390857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:58:41 compute-0 sudo[390885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Feb 28 10:58:41 compute-0 sudo[390857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:41 compute-0 sudo[390885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390885]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Feb 28 10:58:41 compute-0 dbus-broker-launch[804]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Feb 28 10:58:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:41 compute-0 sudo[390890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Feb 28 10:58:41 compute-0 systemd[1]: Closed Podman API Socket.
Feb 28 10:58:41 compute-0 systemd[1]: Stopping Podman API Socket...
Feb 28 10:58:41 compute-0 systemd[1]: Starting Podman API Socket...
Feb 28 10:58:41 compute-0 systemd[1]: Listening on Podman API Socket.
Feb 28 10:58:41 compute-0 sudo[390890]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sudo[390597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Feb 28 10:58:41 compute-0 sudo[390597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:58:41 compute-0 sudo[390597]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:41 compute-0 sshd-session[390896]: Accepted publickey for zuul from 192.168.122.30 port 35300 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 10:58:41 compute-0 systemd-logind[815]: New session 54 of user zuul.
Feb 28 10:58:41 compute-0 systemd[1]: Started Session 54 of User zuul.
Feb 28 10:58:41 compute-0 sshd-session[390896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 10:58:41 compute-0 systemd[1]: Starting Podman API Service...
Feb 28 10:58:41 compute-0 systemd[1]: Started Podman API Service.
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Setting parallel job count to 25"
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Using sqlite as database backend"
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 28 10:58:42 compute-0 podman[390909]: time="2026-02-28T10:58:42Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 28 10:58:42 compute-0 podman[390909]: @ - - [28/Feb/2026:10:58:42 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:58:42 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.051800621 +0000 UTC m=+0.056795119 container create b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 10:58:42 compute-0 systemd[1]: Started libpod-conmon-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope.
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.025595179 +0000 UTC m=+0.030589667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.143861279 +0000 UTC m=+0.148855837 container init b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.155810957 +0000 UTC m=+0.160805465 container start b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.162871087 +0000 UTC m=+0.167865595 container attach b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:58:42 compute-0 naughty_carson[390940]: 167 167
Feb 28 10:58:42 compute-0 systemd[1]: libpod-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope: Deactivated successfully.
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.165330297 +0000 UTC m=+0.170324805 container died b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 10:58:42 compute-0 podman[390909]: @ - - [28/Feb/2026:10:58:42 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24282 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 28 10:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4caa9937293a3253e26272d648dbdf90a724686170dd8928852a42fe590f0b1c-merged.mount: Deactivated successfully.
Feb 28 10:58:42 compute-0 podman[390912]: 2026-02-28 10:58:42.217128214 +0000 UTC m=+0.222122692 container remove b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_carson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:58:42 compute-0 systemd[1]: libpod-conmon-b7d63a8909b34be933776d997b43bf64b6d9a1e85bd5dde603a359f143366980.scope: Deactivated successfully.
Feb 28 10:58:42 compute-0 podman[390966]: 2026-02-28 10:58:42.376666242 +0000 UTC m=+0.039743436 container create d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 10:58:42 compute-0 systemd[1]: Started libpod-conmon-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope.
Feb 28 10:58:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:42 compute-0 podman[390966]: 2026-02-28 10:58:42.359902478 +0000 UTC m=+0.022979452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:42 compute-0 podman[390966]: 2026-02-28 10:58:42.469058389 +0000 UTC m=+0.132135343 container init d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 10:58:42 compute-0 podman[390966]: 2026-02-28 10:58:42.474975247 +0000 UTC m=+0.138052211 container start d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 10:58:42 compute-0 podman[390966]: 2026-02-28 10:58:42.480538224 +0000 UTC m=+0.143615218 container attach d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:58:42 compute-0 romantic_swirles[390982]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:58:42 compute-0 romantic_swirles[390982]: --> All data devices are unavailable
Feb 28 10:58:43 compute-0 systemd[1]: libpod-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope: Deactivated successfully.
Feb 28 10:58:43 compute-0 podman[390966]: 2026-02-28 10:58:43.006759727 +0000 UTC m=+0.669836701 container died d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 10:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfd256e4864db7db58fcf3915c2a2fdee4067cd58f43ea8c5bc610effb56190b-merged.mount: Deactivated successfully.
Feb 28 10:58:43 compute-0 ceph-mon[76304]: pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:43 compute-0 podman[390966]: 2026-02-28 10:58:43.05697253 +0000 UTC m=+0.720049484 container remove d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_swirles, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:58:43 compute-0 systemd[1]: libpod-conmon-d32248f8145f5dfcf8e0e46e42b76e39e9ab93862e206da4689313b3973e9df6.scope: Deactivated successfully.
Feb 28 10:58:43 compute-0 nova_compute[243452]: 2026-02-28 10:58:43.094 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:43 compute-0 sudo[390857]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:43 compute-0 sudo[391015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:58:43 compute-0 sudo[391015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:43 compute-0 sudo[391015]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:43 compute-0 sudo[391040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:58:43 compute-0 sudo[391040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.545161536 +0000 UTC m=+0.065315431 container create 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:58:43 compute-0 systemd[1]: Started libpod-conmon-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope.
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.517284156 +0000 UTC m=+0.037438031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.640981979 +0000 UTC m=+0.161135814 container init 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.648604115 +0000 UTC m=+0.168757890 container start 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.654227744 +0000 UTC m=+0.174381529 container attach 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 10:58:43 compute-0 upbeat_ptolemy[391095]: 167 167
Feb 28 10:58:43 compute-0 systemd[1]: libpod-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope: Deactivated successfully.
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.656397316 +0000 UTC m=+0.176551171 container died 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 10:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1133b5c94222cbc1d2bdfca04342688d3bb4511e96f128079d53b4bb2493356e-merged.mount: Deactivated successfully.
Feb 28 10:58:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:43 compute-0 podman[391078]: 2026-02-28 10:58:43.709295474 +0000 UTC m=+0.229449249 container remove 616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:58:43 compute-0 systemd[1]: libpod-conmon-616444b6ec11636512053644c091f10dbf5bf005c8bb7e5599066d19f742edf8.scope: Deactivated successfully.
Feb 28 10:58:43 compute-0 podman[391119]: 2026-02-28 10:58:43.883690513 +0000 UTC m=+0.060199126 container create 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 10:58:43 compute-0 systemd[1]: Started libpod-conmon-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope.
Feb 28 10:58:43 compute-0 podman[391119]: 2026-02-28 10:58:43.848166217 +0000 UTC m=+0.024674850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:43 compute-0 podman[391119]: 2026-02-28 10:58:43.975236236 +0000 UTC m=+0.151744899 container init 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 10:58:43 compute-0 podman[391119]: 2026-02-28 10:58:43.982738528 +0000 UTC m=+0.159247151 container start 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 10:58:43 compute-0 podman[391119]: 2026-02-28 10:58:43.986015021 +0000 UTC m=+0.162523744 container attach 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]: {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     "0": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "devices": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "/dev/loop3"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             ],
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_name": "ceph_lv0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_size": "21470642176",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "name": "ceph_lv0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "tags": {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_name": "ceph",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.crush_device_class": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.encrypted": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.objectstore": "bluestore",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_id": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.vdo": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.with_tpm": "0"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             },
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "vg_name": "ceph_vg0"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         }
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     ],
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     "1": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "devices": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "/dev/loop4"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             ],
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_name": "ceph_lv1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_size": "21470642176",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "name": "ceph_lv1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "tags": {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_name": "ceph",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.crush_device_class": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.encrypted": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.objectstore": "bluestore",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_id": "1",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.vdo": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.with_tpm": "0"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             },
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "vg_name": "ceph_vg1"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         }
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     ],
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     "2": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "devices": [
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "/dev/loop5"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             ],
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_name": "ceph_lv2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_size": "21470642176",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "name": "ceph_lv2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "tags": {
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.cluster_name": "ceph",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.crush_device_class": "",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.encrypted": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.objectstore": "bluestore",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osd_id": "2",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.vdo": "0",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:                 "ceph.with_tpm": "0"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             },
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "type": "block",
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:             "vg_name": "ceph_vg2"
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:         }
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]:     ]
Feb 28 10:58:44 compute-0 cranky_ritchie[391136]: }
Feb 28 10:58:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:44 compute-0 systemd[1]: libpod-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope: Deactivated successfully.
Feb 28 10:58:44 compute-0 podman[391119]: 2026-02-28 10:58:44.287187701 +0000 UTC m=+0.463696354 container died 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 10:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20618e06ad47eb3889a8741093bd038864c0bf5d12be955bd8a84f9500746b6-merged.mount: Deactivated successfully.
Feb 28 10:58:44 compute-0 podman[391119]: 2026-02-28 10:58:44.336423675 +0000 UTC m=+0.512932298 container remove 0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:58:44 compute-0 systemd[1]: libpod-conmon-0367680e6a4f03747c0f821a8b2e6b368d5e02c69a715879c52100724fdae38c.scope: Deactivated successfully.
Feb 28 10:58:44 compute-0 sudo[391040]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:44 compute-0 sudo[391156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:58:44 compute-0 sudo[391156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:44 compute-0 sudo[391156]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:44 compute-0 sudo[391181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:58:44 compute-0 sudo[391181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.845011979 +0000 UTC m=+0.057588982 container create 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:58:44 compute-0 systemd[1]: Started libpod-conmon-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope.
Feb 28 10:58:44 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.820162095 +0000 UTC m=+0.032739118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.929823051 +0000 UTC m=+0.142400074 container init 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.938460025 +0000 UTC m=+0.151036998 container start 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.942463119 +0000 UTC m=+0.155040182 container attach 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 10:58:44 compute-0 recursing_turing[391236]: 167 167
Feb 28 10:58:44 compute-0 systemd[1]: libpod-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope: Deactivated successfully.
Feb 28 10:58:44 compute-0 podman[391220]: 2026-02-28 10:58:44.945941667 +0000 UTC m=+0.158518680 container died 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3cb1cedb78473ad7fa3d4e61ec7f5c58062821ac22c084b59a6d1e5f7092070-merged.mount: Deactivated successfully.
Feb 28 10:58:45 compute-0 podman[391220]: 2026-02-28 10:58:45.012021739 +0000 UTC m=+0.224598722 container remove 626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:58:45 compute-0 systemd[1]: libpod-conmon-626cd385fa9718cdccd35d13cec7c6e548c7db84a7b9bd9994b70a63993e7288.scope: Deactivated successfully.
Feb 28 10:58:45 compute-0 ceph-mon[76304]: pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:45 compute-0 podman[391260]: 2026-02-28 10:58:45.189218687 +0000 UTC m=+0.063996343 container create 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:58:45 compute-0 systemd[1]: Started libpod-conmon-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope.
Feb 28 10:58:45 compute-0 podman[391260]: 2026-02-28 10:58:45.163012365 +0000 UTC m=+0.037790061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:58:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:58:45 compute-0 podman[391260]: 2026-02-28 10:58:45.305262224 +0000 UTC m=+0.180039940 container init 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 10:58:45 compute-0 podman[391260]: 2026-02-28 10:58:45.312699324 +0000 UTC m=+0.187476950 container start 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:58:45 compute-0 podman[391260]: 2026-02-28 10:58:45.316562563 +0000 UTC m=+0.191340189 container attach 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:58:45 compute-0 podman[391278]: 2026-02-28 10:58:45.344464753 +0000 UTC m=+0.104660504 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 28 10:58:45 compute-0 podman[391275]: 2026-02-28 10:58:45.357723888 +0000 UTC m=+0.120715448 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 10:58:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:58:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:58:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:58:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:58:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:45 compute-0 lvm[391401]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:58:45 compute-0 lvm[391402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:58:45 compute-0 lvm[391402]: VG ceph_vg0 finished
Feb 28 10:58:45 compute-0 lvm[391401]: VG ceph_vg1 finished
Feb 28 10:58:45 compute-0 lvm[391404]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:58:45 compute-0 lvm[391404]: VG ceph_vg2 finished
Feb 28 10:58:46 compute-0 stoic_galileo[391279]: {}
Feb 28 10:58:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:58:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2687783955' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:58:46 compute-0 systemd[1]: libpod-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Deactivated successfully.
Feb 28 10:58:46 compute-0 systemd[1]: libpod-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Consumed 1.245s CPU time.
Feb 28 10:58:46 compute-0 podman[391260]: 2026-02-28 10:58:46.102725828 +0000 UTC m=+0.977503524 container died 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 28 10:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9b07d728b32a63e5979639cfe14b1ec06d3afd02d2511c79048148a81b443eb-merged.mount: Deactivated successfully.
Feb 28 10:58:46 compute-0 podman[391260]: 2026-02-28 10:58:46.158065835 +0000 UTC m=+1.032843461 container remove 61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:58:46 compute-0 systemd[1]: libpod-conmon-61ad8e9913dc5c382e55ed5d77559dd7b6e74deb09ad2651ac89be8e382d6943.scope: Deactivated successfully.
Feb 28 10:58:46 compute-0 sudo[391181]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:58:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:58:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:46 compute-0 sudo[391419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:58:46 compute-0 sudo[391419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:58:46 compute-0 sudo[391419]: pam_unix(sudo:session): session closed for user root
Feb 28 10:58:47 compute-0 ceph-mon[76304]: pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:47 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:58:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:48 compute-0 nova_compute[243452]: 2026-02-28 10:58:48.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:48 compute-0 nova_compute[243452]: 2026-02-28 10:58:48.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:49 compute-0 ceph-mon[76304]: pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:51 compute-0 ceph-mon[76304]: pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:53 compute-0 ceph-mon[76304]: pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:53 compute-0 nova_compute[243452]: 2026-02-28 10:58:53.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:53 compute-0 nova_compute[243452]: 2026-02-28 10:58:53.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:55 compute-0 ceph-mon[76304]: pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:57 compute-0 ceph-mon[76304]: pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:57 compute-0 podman[390909]: time="2026-02-28T10:58:57Z" level=info msg="Received shutdown.Stop(), terminating!" PID=390909
Feb 28 10:58:57 compute-0 systemd[1]: podman.service: Deactivated successfully.
Feb 28 10:58:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.903 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:58:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:58:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:58:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:58:58 compute-0 nova_compute[243452]: 2026-02-28 10:58:58.100 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:58 compute-0 nova_compute[243452]: 2026-02-28 10:58:58.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:58:58 compute-0 nova_compute[243452]: 2026-02-28 10:58:58.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:59 compute-0 ceph-mon[76304]: pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:58:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:58:59 compute-0 nova_compute[243452]: 2026-02-28 10:58:59.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:58:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:00 compute-0 nova_compute[243452]: 2026-02-28 10:59:00.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:00 compute-0 nova_compute[243452]: 2026-02-28 10:59:00.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:01 compute-0 ceph-mon[76304]: pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:01 compute-0 nova_compute[243452]: 2026-02-28 10:59:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:02 compute-0 nova_compute[243452]: 2026-02-28 10:59:02.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:02 compute-0 nova_compute[243452]: 2026-02-28 10:59:02.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:03 compute-0 nova_compute[243452]: 2026-02-28 10:59:03.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:03 compute-0 ceph-mon[76304]: pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:04 compute-0 sudo[391445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Feb 28 10:59:04 compute-0 sudo[391445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:59:04 compute-0 sudo[391445]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:04 compute-0 sudo[391470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Feb 28 10:59:04 compute-0 sudo[391470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 10:59:04 compute-0 sudo[391470]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:04 compute-0 sshd-session[390351]: Connection closed by 192.168.122.30 port 35286
Feb 28 10:59:04 compute-0 sshd-session[390348]: pam_unix(sshd:session): session closed for user zuul
Feb 28 10:59:04 compute-0 sshd-session[390524]: Connection closed by 192.168.122.30 port 35294
Feb 28 10:59:04 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Feb 28 10:59:04 compute-0 systemd-logind[815]: Session 52 logged out. Waiting for processes to exit.
Feb 28 10:59:04 compute-0 systemd-logind[815]: Removed session 52.
Feb 28 10:59:04 compute-0 sshd-session[390521]: pam_unix(sshd:session): session closed for user zuul
Feb 28 10:59:04 compute-0 systemd-logind[815]: Session 53 logged out. Waiting for processes to exit.
Feb 28 10:59:04 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Feb 28 10:59:04 compute-0 systemd[1]: session-53.scope: Consumed 1.171s CPU time.
Feb 28 10:59:04 compute-0 systemd-logind[815]: Removed session 53.
Feb 28 10:59:05 compute-0 ceph-mon[76304]: pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:05 compute-0 sshd-session[390906]: Connection closed by 192.168.122.30 port 35300
Feb 28 10:59:05 compute-0 sshd-session[390896]: pam_unix(sshd:session): session closed for user zuul
Feb 28 10:59:05 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Feb 28 10:59:05 compute-0 systemd-logind[815]: Session 54 logged out. Waiting for processes to exit.
Feb 28 10:59:05 compute-0 systemd-logind[815]: Removed session 54.
Feb 28 10:59:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:06 compute-0 ceph-mon[76304]: pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:07 compute-0 nova_compute[243452]: 2026-02-28 10:59:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:07 compute-0 nova_compute[243452]: 2026-02-28 10:59:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 10:59:07 compute-0 nova_compute[243452]: 2026-02-28 10:59:07.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 10:59:07 compute-0 nova_compute[243452]: 2026-02-28 10:59:07.333 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 10:59:07 compute-0 nova_compute[243452]: 2026-02-28 10:59:07.334 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.107 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:08 compute-0 nova_compute[243452]: 2026-02-28 10:59:08.109 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:08 compute-0 ceph-mon[76304]: pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:10 compute-0 ceph-mon[76304]: pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:59:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:59:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307564811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:59:11 compute-0 nova_compute[243452]: 2026-02-28 10:59:11.979 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.179 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.181 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3613MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.181 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.182 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.252 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.335 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 10:59:12 compute-0 ceph-mon[76304]: pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/307564811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:59:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 10:59:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433169569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.944 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.952 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.968 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.970 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 10:59:12 compute-0 nova_compute[243452]: 2026-02-28 10:59:12.971 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:59:13 compute-0 nova_compute[243452]: 2026-02-28 10:59:13.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2433169569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 10:59:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:14 compute-0 ceph-mon[76304]: pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:16 compute-0 podman[391541]: 2026-02-28 10:59:16.15941913 +0000 UTC m=+0.083454224 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:59:16 compute-0 podman[391540]: 2026-02-28 10:59:16.185971642 +0000 UTC m=+0.112124506 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 28 10:59:16 compute-0 ceph-mon[76304]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:18 compute-0 nova_compute[243452]: 2026-02-28 10:59:18.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:18 compute-0 ceph-mon[76304]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:20 compute-0 ceph-mon[76304]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:22 compute-0 ceph-mon[76304]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:23 compute-0 nova_compute[243452]: 2026-02-28 10:59:23.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:59:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:24 compute-0 ceph-mon[76304]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:26 compute-0 ceph-mon[76304]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.115 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.116 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:28 compute-0 nova_compute[243452]: 2026-02-28 10:59:28.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:28 compute-0 ceph-mon[76304]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_10:59:29
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 10:59:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 10:59:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 10:59:30 compute-0 ceph-mon[76304]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 10:59:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:32 compute-0 ceph-mon[76304]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:33 compute-0 nova_compute[243452]: 2026-02-28 10:59:33.118 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:34 compute-0 ceph-mon[76304]: pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:36 compute-0 ceph-mon[76304]: pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.120 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 10:59:38 compute-0 nova_compute[243452]: 2026-02-28 10:59:38.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:38 compute-0 ceph-mon[76304]: pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:40 compute-0 ceph-mon[76304]: pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 10:59:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:42 compute-0 ceph-mon[76304]: pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:43 compute-0 nova_compute[243452]: 2026-02-28 10:59:43.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:44 compute-0 ceph-mon[76304]: pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 10:59:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:59:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 10:59:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:59:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 10:59:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1555191970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 10:59:46 compute-0 sudo[391586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:59:46 compute-0 sudo[391586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:46 compute-0 sudo[391586]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:46 compute-0 sudo[391625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 10:59:46 compute-0 sudo[391625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:46 compute-0 podman[391610]: 2026-02-28 10:59:46.536006378 +0000 UTC m=+0.112583020 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 28 10:59:46 compute-0 podman[391611]: 2026-02-28 10:59:46.558089823 +0000 UTC m=+0.125569517 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 10:59:46 compute-0 sudo[391625]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:46 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:59:46 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 10:59:46 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:59:47 compute-0 sudo[391711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:59:47 compute-0 sudo[391711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:47 compute-0 sudo[391711]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:47 compute-0 sudo[391736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 10:59:47 compute-0 sudo[391736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.423168373 +0000 UTC m=+0.059598228 container create 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:59:47 compute-0 systemd[1]: Started libpod-conmon-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope.
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.391682252 +0000 UTC m=+0.028112197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.51484079 +0000 UTC m=+0.151270675 container init 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.525198763 +0000 UTC m=+0.161628638 container start 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.529746532 +0000 UTC m=+0.166176387 container attach 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:47 compute-0 gallant_golick[391791]: 167 167
Feb 28 10:59:47 compute-0 systemd[1]: libpod-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope: Deactivated successfully.
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.534437815 +0000 UTC m=+0.170867700 container died 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 28 10:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e88419c30e4885c72014ed72a1b5a1825ac20977dbe19abf0f174eda0bf5289-merged.mount: Deactivated successfully.
Feb 28 10:59:47 compute-0 podman[391775]: 2026-02-28 10:59:47.582173957 +0000 UTC m=+0.218603812 container remove 898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_golick, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 28 10:59:47 compute-0 systemd[1]: libpod-conmon-898612e26b0c71ffd58a886ca928778ba52db78a090365cf43972da646ce6792.scope: Deactivated successfully.
Feb 28 10:59:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:47 compute-0 podman[391816]: 2026-02-28 10:59:47.752648345 +0000 UTC m=+0.052944151 container create 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 28 10:59:47 compute-0 systemd[1]: Started libpod-conmon-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope.
Feb 28 10:59:47 compute-0 sshd-session[391584]: Received disconnect from 103.217.144.161 port 40576:11: Bye Bye [preauth]
Feb 28 10:59:47 compute-0 sshd-session[391584]: Disconnected from authenticating user root 103.217.144.161 port 40576 [preauth]
Feb 28 10:59:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:47 compute-0 podman[391816]: 2026-02-28 10:59:47.734225443 +0000 UTC m=+0.034521249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:47 compute-0 podman[391816]: 2026-02-28 10:59:47.843208929 +0000 UTC m=+0.143504705 container init 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 10:59:47 compute-0 podman[391816]: 2026-02-28 10:59:47.85630738 +0000 UTC m=+0.156603156 container start 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:59:47 compute-0 podman[391816]: 2026-02-28 10:59:47.860450698 +0000 UTC m=+0.160746484 container attach 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 28 10:59:48 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 10:59:48 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 10:59:48 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 10:59:48 compute-0 nova_compute[243452]: 2026-02-28 10:59:48.124 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 10:59:48 compute-0 wizardly_cartwright[391832]: --> passed data devices: 0 physical, 3 LVM
Feb 28 10:59:48 compute-0 wizardly_cartwright[391832]: --> All data devices are unavailable
Feb 28 10:59:48 compute-0 systemd[1]: libpod-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope: Deactivated successfully.
Feb 28 10:59:48 compute-0 podman[391816]: 2026-02-28 10:59:48.304640608 +0000 UTC m=+0.604936384 container died 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 10:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c0c918f52ba79612e80db17ab73efc6547227a5fce511129675c247927ca1a-merged.mount: Deactivated successfully.
Feb 28 10:59:48 compute-0 podman[391816]: 2026-02-28 10:59:48.355801717 +0000 UTC m=+0.656097523 container remove 9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:48 compute-0 systemd[1]: libpod-conmon-9966669b23032f51b5547975d42f9330625316dd45e3a93da145523fa7b1c74c.scope: Deactivated successfully.
Feb 28 10:59:48 compute-0 sudo[391736]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:48 compute-0 sudo[391864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:59:48 compute-0 sudo[391864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:48 compute-0 sudo[391864]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:48 compute-0 sudo[391889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 10:59:48 compute-0 sudo[391889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.862239839 +0000 UTC m=+0.058396054 container create 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:48 compute-0 systemd[1]: Started libpod-conmon-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope.
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.838398844 +0000 UTC m=+0.034555089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.953377541 +0000 UTC m=+0.149533826 container init 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.963786555 +0000 UTC m=+0.159942780 container start 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.968513919 +0000 UTC m=+0.164670164 container attach 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 10:59:48 compute-0 elastic_galileo[391941]: 167 167
Feb 28 10:59:48 compute-0 systemd[1]: libpod-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope: Deactivated successfully.
Feb 28 10:59:48 compute-0 podman[391924]: 2026-02-28 10:59:48.970830645 +0000 UTC m=+0.166986880 container died 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 10:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-794d47e1a03ebafd025c1416b0757cbf23889687d4d03fad64fc31a2e35ae9c5-merged.mount: Deactivated successfully.
Feb 28 10:59:49 compute-0 ceph-mon[76304]: pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:49 compute-0 podman[391924]: 2026-02-28 10:59:49.011184598 +0000 UTC m=+0.207340803 container remove 91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 10:59:49 compute-0 systemd[1]: libpod-conmon-91c0d6452df1605fb893122dc6a8383d492c3db2206c541a285ff1529e384849.scope: Deactivated successfully.
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.152734327 +0000 UTC m=+0.044874462 container create 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 10:59:49 compute-0 systemd[1]: Started libpod-conmon-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope.
Feb 28 10:59:49 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.134639884 +0000 UTC m=+0.026780069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.258541903 +0000 UTC m=+0.150682078 container init 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:59:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.264254215 +0000 UTC m=+0.156394350 container start 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.268815634 +0000 UTC m=+0.160955809 container attach 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]: {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     "0": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "devices": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "/dev/loop3"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             ],
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_name": "ceph_lv0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_size": "21470642176",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "name": "ceph_lv0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "tags": {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_name": "ceph",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.crush_device_class": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.encrypted": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.objectstore": "bluestore",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_id": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.vdo": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.with_tpm": "0"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             },
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "vg_name": "ceph_vg0"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         }
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     ],
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     "1": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "devices": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "/dev/loop4"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             ],
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_name": "ceph_lv1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_size": "21470642176",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "name": "ceph_lv1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "tags": {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_name": "ceph",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.crush_device_class": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.encrypted": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.objectstore": "bluestore",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_id": "1",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.vdo": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.with_tpm": "0"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             },
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "vg_name": "ceph_vg1"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         }
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     ],
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     "2": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "devices": [
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "/dev/loop5"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             ],
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_name": "ceph_lv2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_size": "21470642176",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "name": "ceph_lv2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "tags": {
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.cluster_name": "ceph",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.crush_device_class": "",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.encrypted": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.objectstore": "bluestore",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osd_id": "2",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.vdo": "0",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:                 "ceph.with_tpm": "0"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             },
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "type": "block",
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:             "vg_name": "ceph_vg2"
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:         }
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]:     ]
Feb 28 10:59:49 compute-0 wizardly_cartwright[391981]: }
Feb 28 10:59:49 compute-0 systemd[1]: libpod-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope: Deactivated successfully.
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.510012325 +0000 UTC m=+0.402152460 container died 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 10:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd941200daa04e5e7108670d47dfd35cddb9c03cd269829841162a6668f72753-merged.mount: Deactivated successfully.
Feb 28 10:59:49 compute-0 podman[391965]: 2026-02-28 10:59:49.555672548 +0000 UTC m=+0.447812693 container remove 4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 28 10:59:49 compute-0 systemd[1]: libpod-conmon-4680edc2c65954a3bd36e894fbabcb42db9c4e21a39b81294e317b36c5abccc9.scope: Deactivated successfully.
Feb 28 10:59:49 compute-0 sudo[391889]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:49 compute-0 sudo[392002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 10:59:49 compute-0 sudo[392002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:49 compute-0 sudo[392002]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:49 compute-0 sudo[392027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 10:59:49 compute-0 sudo[392027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.010592021 +0000 UTC m=+0.048240597 container create 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 10:59:50 compute-0 systemd[1]: Started libpod-conmon-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope.
Feb 28 10:59:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.08928026 +0000 UTC m=+0.126928856 container init 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:49.995134123 +0000 UTC m=+0.032782719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.098357877 +0000 UTC m=+0.136006493 container start 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.102255437 +0000 UTC m=+0.139904023 container attach 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:59:50 compute-0 tender_swanson[392082]: 167 167
Feb 28 10:59:50 compute-0 systemd[1]: libpod-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope: Deactivated successfully.
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.104911872 +0000 UTC m=+0.142560438 container died 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 10:59:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-992f9b8af7cece38faf48b989e4f0632cd26dfa2b2ff01646f19d9fc2886de2a-merged.mount: Deactivated successfully.
Feb 28 10:59:50 compute-0 podman[392065]: 2026-02-28 10:59:50.142458686 +0000 UTC m=+0.180107302 container remove 52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:50 compute-0 systemd[1]: libpod-conmon-52fb93de215c93ee6826206adfef015a7e8342ca510dc425d3361ce618bae5c8.scope: Deactivated successfully.
Feb 28 10:59:50 compute-0 podman[392107]: 2026-02-28 10:59:50.324844411 +0000 UTC m=+0.049944255 container create 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 10:59:50 compute-0 systemd[1]: Started libpod-conmon-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope.
Feb 28 10:59:50 compute-0 podman[392107]: 2026-02-28 10:59:50.300666996 +0000 UTC m=+0.025766730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 10:59:50 compute-0 systemd[1]: Started libcrun container.
Feb 28 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 10:59:50 compute-0 podman[392107]: 2026-02-28 10:59:50.426396207 +0000 UTC m=+0.151496011 container init 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 10:59:50 compute-0 podman[392107]: 2026-02-28 10:59:50.436496233 +0000 UTC m=+0.161595947 container start 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 10:59:50 compute-0 podman[392107]: 2026-02-28 10:59:50.440493026 +0000 UTC m=+0.165592750 container attach 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 10:59:51 compute-0 ceph-mon[76304]: pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:51 compute-0 lvm[392203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 10:59:51 compute-0 lvm[392202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 10:59:51 compute-0 lvm[392203]: VG ceph_vg1 finished
Feb 28 10:59:51 compute-0 lvm[392202]: VG ceph_vg0 finished
Feb 28 10:59:51 compute-0 lvm[392205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 10:59:51 compute-0 lvm[392205]: VG ceph_vg2 finished
Feb 28 10:59:51 compute-0 brave_shannon[392123]: {}
Feb 28 10:59:51 compute-0 systemd[1]: libpod-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Deactivated successfully.
Feb 28 10:59:51 compute-0 systemd[1]: libpod-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Consumed 1.318s CPU time.
Feb 28 10:59:51 compute-0 podman[392208]: 2026-02-28 10:59:51.690613621 +0000 UTC m=+0.027669714 container died 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 10:59:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-25cc8c49b3f117b504f60a3fe15ec8f72809b416267e7e8c46860d8e8fd6c354-merged.mount: Deactivated successfully.
Feb 28 10:59:51 compute-0 podman[392208]: 2026-02-28 10:59:51.73152613 +0000 UTC m=+0.068582243 container remove 32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 10:59:51 compute-0 systemd[1]: libpod-conmon-32e478710c4c27427bb789f9f330d2614cdf1923adca5254892e57468ac4bf73.scope: Deactivated successfully.
Feb 28 10:59:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:51 compute-0 sudo[392027]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 10:59:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 10:59:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:51 compute-0 sudo[392223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 10:59:51 compute-0 sudo[392223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 10:59:51 compute-0 sudo[392223]: pam_unix(sudo:session): session closed for user root
Feb 28 10:59:52 compute-0 ceph-mon[76304]: pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:52 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 10:59:53 compute-0 nova_compute[243452]: 2026-02-28 10:59:53.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:54 compute-0 ceph-mon[76304]: pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:56 compute-0 ceph-mon[76304]: pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.904 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 10:59:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 10:59:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 10:59:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 10:59:58 compute-0 nova_compute[243452]: 2026-02-28 10:59:58.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 10:59:58 compute-0 ceph-mon[76304]: pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 10:59:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.269834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399269926, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1494, "num_deletes": 257, "total_data_size": 2358689, "memory_usage": 2403312, "flush_reason": "Manual Compaction"}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399283770, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2312737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59506, "largest_seqno": 60999, "table_properties": {"data_size": 2305819, "index_size": 3988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14228, "raw_average_key_size": 19, "raw_value_size": 2291924, "raw_average_value_size": 3152, "num_data_blocks": 179, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276244, "oldest_key_time": 1772276244, "file_creation_time": 1772276399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 14003 microseconds, and 6892 cpu microseconds.
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.283841) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2312737 bytes OK
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.283870) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285417) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285448) EVENT_LOG_v1 {"time_micros": 1772276399285438, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.285484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2352126, prev total WAL file size 2352126, number of live WAL files 2.
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.286594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353036' seq:72057594037927935, type:22 .. '6C6F676D0032373539' seq:0, type:0; will stop at (end)
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2258KB)], [140(8880KB)]
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399286673, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11405951, "oldest_snapshot_seqno": -1}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8083 keys, 11286381 bytes, temperature: kUnknown
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399334162, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 11286381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11233211, "index_size": 31873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210757, "raw_average_key_size": 26, "raw_value_size": 11089883, "raw_average_value_size": 1372, "num_data_blocks": 1249, "num_entries": 8083, "num_filter_entries": 8083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.334404) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11286381 bytes
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.335800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.9 rd, 237.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 8.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.8) write-amplify(4.9) OK, records in: 8609, records dropped: 526 output_compression: NoCompression
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.335825) EVENT_LOG_v1 {"time_micros": 1772276399335813, "job": 86, "event": "compaction_finished", "compaction_time_micros": 47554, "compaction_time_cpu_micros": 29512, "output_level": 6, "num_output_files": 1, "total_output_size": 11286381, "num_input_records": 8609, "num_output_records": 8083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399336229, "job": 86, "event": "table_file_deletion", "file_number": 142}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276399337177, "job": 86, "event": "table_file_deletion", "file_number": 140}
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.286509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-10:59:59.337223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 10:59:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:00 compute-0 ceph-mon[76304]: pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:01 compute-0 nova_compute[243452]: 2026-02-28 11:00:01.973 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:01 compute-0 nova_compute[243452]: 2026-02-28 11:00:01.974 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:01 compute-0 nova_compute[243452]: 2026-02-28 11:00:01.975 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:01 compute-0 nova_compute[243452]: 2026-02-28 11:00:01.975 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:01 compute-0 nova_compute[243452]: 2026-02-28 11:00:01.976 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:00:02 compute-0 nova_compute[243452]: 2026-02-28 11:00:02.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:02 compute-0 ceph-mon[76304]: pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.134 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:00:03 compute-0 nova_compute[243452]: 2026-02-28 11:00:03.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:04 compute-0 nova_compute[243452]: 2026-02-28 11:00:04.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:04 compute-0 ceph-mon[76304]: pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.830558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405830680, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 251, "total_data_size": 124248, "memory_usage": 131272, "flush_reason": "Manual Compaction"}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405834424, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 123467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61000, "largest_seqno": 61304, "table_properties": {"data_size": 121471, "index_size": 223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5063, "raw_average_key_size": 18, "raw_value_size": 117584, "raw_average_value_size": 426, "num_data_blocks": 10, "num_entries": 276, "num_filter_entries": 276, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276400, "oldest_key_time": 1772276400, "file_creation_time": 1772276405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3972 microseconds, and 1504 cpu microseconds.
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.834534) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 123467 bytes OK
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.834594) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836047) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836124) EVENT_LOG_v1 {"time_micros": 1772276405836115, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836150) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 122054, prev total WAL file size 122054, number of live WAL files 2.
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(120KB)], [143(10MB)]
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405836950, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11409848, "oldest_snapshot_seqno": -1}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7850 keys, 9629039 bytes, temperature: kUnknown
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405882030, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9629039, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9578964, "index_size": 29354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206601, "raw_average_key_size": 26, "raw_value_size": 9441208, "raw_average_value_size": 1202, "num_data_blocks": 1134, "num_entries": 7850, "num_filter_entries": 7850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.882519) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9629039 bytes
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.883883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.8 rd, 212.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(170.4) write-amplify(78.0) OK, records in: 8359, records dropped: 509 output_compression: NoCompression
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.883918) EVENT_LOG_v1 {"time_micros": 1772276405883902, "job": 88, "event": "compaction_finished", "compaction_time_micros": 45312, "compaction_time_cpu_micros": 23971, "output_level": 6, "num_output_files": 1, "total_output_size": 9629039, "num_input_records": 8359, "num_output_records": 7850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405884313, "job": 88, "event": "table_file_deletion", "file_number": 145}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276405886868, "job": 88, "event": "table_file_deletion", "file_number": 143}
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.836795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:05 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:00:05.887113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:00:06 compute-0 ceph-mon[76304]: pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.136 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:00:08 compute-0 nova_compute[243452]: 2026-02-28 11:00:08.397 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:00:08 compute-0 ceph-mon[76304]: pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:09 compute-0 nova_compute[243452]: 2026-02-28 11:00:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:10 compute-0 ceph-mon[76304]: pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.376 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.377 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:00:12 compute-0 ceph-mon[76304]: pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:00:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738323311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:00:12 compute-0 nova_compute[243452]: 2026-02-28 11:00:12.956 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.110 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.112 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.112 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.113 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.191 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.191 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.305 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.414 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.415 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.430 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.452 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.470 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:00:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1738323311' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:00:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:00:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438395517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:00:13 compute-0 nova_compute[243452]: 2026-02-28 11:00:13.994 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:00:14 compute-0 nova_compute[243452]: 2026-02-28 11:00:14.000 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:00:14 compute-0 nova_compute[243452]: 2026-02-28 11:00:14.019 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:00:14 compute-0 nova_compute[243452]: 2026-02-28 11:00:14.021 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:00:14 compute-0 nova_compute[243452]: 2026-02-28 11:00:14.021 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:00:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:14 compute-0 ceph-mon[76304]: pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2438395517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:00:14 compute-0 sshd-session[392292]: Invalid user sol from 45.148.10.240 port 48158
Feb 28 11:00:15 compute-0 nova_compute[243452]: 2026-02-28 11:00:15.018 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:00:15 compute-0 sshd-session[392292]: Connection closed by invalid user sol 45.148.10.240 port 48158 [preauth]
Feb 28 11:00:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:16 compute-0 ceph-mon[76304]: pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:17 compute-0 podman[392295]: 2026-02-28 11:00:17.143004108 +0000 UTC m=+0.071448125 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 28 11:00:17 compute-0 podman[392294]: 2026-02-28 11:00:17.180390507 +0000 UTC m=+0.108881315 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 11:00:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:18 compute-0 nova_compute[243452]: 2026-02-28 11:00:18.140 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:18 compute-0 ceph-mon[76304]: pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:20 compute-0 ceph-mon[76304]: pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:22 compute-0 ceph-mon[76304]: pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:23 compute-0 nova_compute[243452]: 2026-02-28 11:00:23.143 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:24 compute-0 ceph-mon[76304]: pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:26 compute-0 ceph-mon[76304]: pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:28 compute-0 nova_compute[243452]: 2026-02-28 11:00:28.144 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:28 compute-0 sshd-session[392334]: Unable to negotiate with 176.29.199.252 port 56798: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Feb 28 11:00:28 compute-0 ceph-mon[76304]: pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:00:29
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr']
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:00:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:00:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:00:30 compute-0 ceph-mon[76304]: pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:00:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:32 compute-0 ceph-mon[76304]: pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:33 compute-0 nova_compute[243452]: 2026-02-28 11:00:33.146 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:00:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:34 compute-0 ceph-mon[76304]: pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:36 compute-0 ceph-mon[76304]: pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:38 compute-0 nova_compute[243452]: 2026-02-28 11:00:38.148 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:00:38 compute-0 ceph-mon[76304]: pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:40 compute-0 ceph-mon[76304]: pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:00:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:43 compute-0 ceph-mon[76304]: pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:43 compute-0 nova_compute[243452]: 2026-02-28 11:00:43.151 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:00:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:45 compute-0 ceph-mon[76304]: pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:00:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:00:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:00:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:00:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:00:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/342907728' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:00:47 compute-0 ceph-mon[76304]: pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:48 compute-0 podman[392337]: 2026-02-28 11:00:48.149705952 +0000 UTC m=+0.078913175 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 28 11:00:48 compute-0 nova_compute[243452]: 2026-02-28 11:00:48.153 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:00:48 compute-0 podman[392336]: 2026-02-28 11:00:48.17291685 +0000 UTC m=+0.105490669 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:00:49 compute-0 ceph-mon[76304]: pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:51 compute-0 ceph-mon[76304]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:51 compute-0 sudo[392382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:00:51 compute-0 sudo[392382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:51 compute-0 sudo[392382]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:52 compute-0 sudo[392407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 11:00:52 compute-0 sudo[392407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:52 compute-0 sudo[392407]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:00:52 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:00:52 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:52 compute-0 sudo[392452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:00:52 compute-0 sudo[392452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:52 compute-0 sudo[392452]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:52 compute-0 sudo[392477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:00:52 compute-0 sudo[392477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:53 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:53 compute-0 nova_compute[243452]: 2026-02-28 11:00:53.155 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:53 compute-0 sudo[392477]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:00:53 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:00:53 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:00:53 compute-0 sudo[392533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:00:53 compute-0 sudo[392533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:53 compute-0 sudo[392533]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:53 compute-0 sudo[392558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:00:53 compute-0 sudo[392558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.77831925 +0000 UTC m=+0.076724664 container create 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:00:53 compute-0 systemd[1]: Started libpod-conmon-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope.
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.749160014 +0000 UTC m=+0.047565478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.889332084 +0000 UTC m=+0.187737538 container init 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.899930904 +0000 UTC m=+0.198336288 container start 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 11:00:53 compute-0 sweet_engelbart[392613]: 167 167
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.907709245 +0000 UTC m=+0.206114649 container attach 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 11:00:53 compute-0 systemd[1]: libpod-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope: Deactivated successfully.
Feb 28 11:00:53 compute-0 conmon[392613]: conmon 7572a451158c8f119c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope/container/memory.events
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.909428913 +0000 UTC m=+0.207834337 container died 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b18a34aeb32f61ff8f9e7a5b3eda6e2a6942b1736fd96627e6c0f20e10722a-merged.mount: Deactivated successfully.
Feb 28 11:00:53 compute-0 podman[392597]: 2026-02-28 11:00:53.988122462 +0000 UTC m=+0.286527856 container remove 7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 11:00:53 compute-0 systemd[1]: libpod-conmon-7572a451158c8f119c52a2d4f3a8d75be7f38cc7615c253646dfa7d5ca0b2592.scope: Deactivated successfully.
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:00:54 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.183681949 +0000 UTC m=+0.065013781 container create bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:00:54 compute-0 systemd[1]: Started libpod-conmon-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope.
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.155964595 +0000 UTC m=+0.037296507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.298269405 +0000 UTC m=+0.179601257 container init bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.307167317 +0000 UTC m=+0.188499139 container start bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.318293792 +0000 UTC m=+0.199625744 container attach bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 11:00:54 compute-0 nostalgic_beaver[392656]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:00:54 compute-0 nostalgic_beaver[392656]: --> All data devices are unavailable
Feb 28 11:00:54 compute-0 systemd[1]: libpod-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope: Deactivated successfully.
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.851340438 +0000 UTC m=+0.732672300 container died bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d870459e03de1f0c5a831dcaf70fcf1116201c17d41ee48d7271534bbf930ce-merged.mount: Deactivated successfully.
Feb 28 11:00:54 compute-0 podman[392639]: 2026-02-28 11:00:54.904138234 +0000 UTC m=+0.785470066 container remove bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:00:54 compute-0 systemd[1]: libpod-conmon-bf6597281926bb86e906dcf35a910ef441ce4d7bf666b8a19b553bc0723bf424.scope: Deactivated successfully.
Feb 28 11:00:54 compute-0 sudo[392558]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:55 compute-0 sudo[392690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:00:55 compute-0 sudo[392690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:55 compute-0 sudo[392690]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:55 compute-0 ceph-mon[76304]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:55 compute-0 sudo[392715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:00:55 compute-0 sudo[392715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.418866821 +0000 UTC m=+0.049060940 container create 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:00:55 compute-0 systemd[1]: Started libpod-conmon-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope.
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.393375789 +0000 UTC m=+0.023569928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.526560281 +0000 UTC m=+0.156754400 container init 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.534776184 +0000 UTC m=+0.164970263 container start 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.538695815 +0000 UTC m=+0.168889924 container attach 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:00:55 compute-0 great_ardinghelli[392768]: 167 167
Feb 28 11:00:55 compute-0 systemd[1]: libpod-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope: Deactivated successfully.
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.542308407 +0000 UTC m=+0.172502516 container died 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c92563f4244079ff65a75fe760de7c0d0a8884ccba9416d66bfc3e3b32299277-merged.mount: Deactivated successfully.
Feb 28 11:00:55 compute-0 podman[392752]: 2026-02-28 11:00:55.588826825 +0000 UTC m=+0.219020904 container remove 6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_ardinghelli, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 11:00:55 compute-0 systemd[1]: libpod-conmon-6bf045516c7f4829ea2faa4d68d7295943124087451e434d2e0fdec1b3f9e1a7.scope: Deactivated successfully.
Feb 28 11:00:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:55 compute-0 podman[392792]: 2026-02-28 11:00:55.798201724 +0000 UTC m=+0.064594070 container create e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:00:55 compute-0 systemd[1]: Started libpod-conmon-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope.
Feb 28 11:00:55 compute-0 podman[392792]: 2026-02-28 11:00:55.769981005 +0000 UTC m=+0.036373411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:55 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:55 compute-0 podman[392792]: 2026-02-28 11:00:55.918622755 +0000 UTC m=+0.185015171 container init e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:00:55 compute-0 podman[392792]: 2026-02-28 11:00:55.929354489 +0000 UTC m=+0.195746805 container start e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 11:00:55 compute-0 podman[392792]: 2026-02-28 11:00:55.933874607 +0000 UTC m=+0.200267003 container attach e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:00:56 compute-0 nice_germain[392808]: {
Feb 28 11:00:56 compute-0 nice_germain[392808]:     "0": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:         {
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "devices": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "/dev/loop3"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             ],
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_name": "ceph_lv0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_size": "21470642176",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "name": "ceph_lv0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "tags": {
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_name": "ceph",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.crush_device_class": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.encrypted": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.objectstore": "bluestore",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_id": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.vdo": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.with_tpm": "0"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             },
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "vg_name": "ceph_vg0"
Feb 28 11:00:56 compute-0 nice_germain[392808]:         }
Feb 28 11:00:56 compute-0 nice_germain[392808]:     ],
Feb 28 11:00:56 compute-0 nice_germain[392808]:     "1": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:         {
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "devices": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "/dev/loop4"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             ],
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_name": "ceph_lv1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_size": "21470642176",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "name": "ceph_lv1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "tags": {
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_name": "ceph",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.crush_device_class": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.encrypted": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.objectstore": "bluestore",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_id": "1",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.vdo": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.with_tpm": "0"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             },
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "vg_name": "ceph_vg1"
Feb 28 11:00:56 compute-0 nice_germain[392808]:         }
Feb 28 11:00:56 compute-0 nice_germain[392808]:     ],
Feb 28 11:00:56 compute-0 nice_germain[392808]:     "2": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:         {
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "devices": [
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "/dev/loop5"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             ],
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_name": "ceph_lv2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_size": "21470642176",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "name": "ceph_lv2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "tags": {
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.cluster_name": "ceph",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.crush_device_class": "",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.encrypted": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.objectstore": "bluestore",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osd_id": "2",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.vdo": "0",
Feb 28 11:00:56 compute-0 nice_germain[392808]:                 "ceph.with_tpm": "0"
Feb 28 11:00:56 compute-0 nice_germain[392808]:             },
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "type": "block",
Feb 28 11:00:56 compute-0 nice_germain[392808]:             "vg_name": "ceph_vg2"
Feb 28 11:00:56 compute-0 nice_germain[392808]:         }
Feb 28 11:00:56 compute-0 nice_germain[392808]:     ]
Feb 28 11:00:56 compute-0 nice_germain[392808]: }
Feb 28 11:00:56 compute-0 systemd[1]: libpod-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope: Deactivated successfully.
Feb 28 11:00:56 compute-0 podman[392792]: 2026-02-28 11:00:56.280606487 +0000 UTC m=+0.546998833 container died e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-947cb833f02a97e2d43bb94cdd4a4bafb7d7f6533e8a7214296be78713e11403-merged.mount: Deactivated successfully.
Feb 28 11:00:56 compute-0 podman[392792]: 2026-02-28 11:00:56.337299432 +0000 UTC m=+0.603691748 container remove e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:00:56 compute-0 systemd[1]: libpod-conmon-e0f7ffb494442f40293b723643a4fb1f92329c5f13541fc5a37a66c2022528c1.scope: Deactivated successfully.
Feb 28 11:00:56 compute-0 sudo[392715]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:56 compute-0 sudo[392829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:00:56 compute-0 sudo[392829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:56 compute-0 sudo[392829]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:56 compute-0 sudo[392854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:00:56 compute-0 sudo[392854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.880137286 +0000 UTC m=+0.046439796 container create 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:00:56 compute-0 systemd[1]: Started libpod-conmon-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope.
Feb 28 11:00:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.858310878 +0000 UTC m=+0.024613398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.963668562 +0000 UTC m=+0.129971112 container init 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.971518454 +0000 UTC m=+0.137820954 container start 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.975983471 +0000 UTC m=+0.142285951 container attach 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:00:56 compute-0 angry_beaver[392909]: 167 167
Feb 28 11:00:56 compute-0 systemd[1]: libpod-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope: Deactivated successfully.
Feb 28 11:00:56 compute-0 podman[392893]: 2026-02-28 11:00:56.978216624 +0000 UTC m=+0.144519124 container died 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 11:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba069e5449ed57e138341724b1091698b7a657ed86b963fa4728b56b8807112b-merged.mount: Deactivated successfully.
Feb 28 11:00:57 compute-0 podman[392893]: 2026-02-28 11:00:57.026407549 +0000 UTC m=+0.192710059 container remove 908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:00:57 compute-0 systemd[1]: libpod-conmon-908eb93a173d59a8ab9f5c355be186b1f4d694137e3b9a483652de33dbb61e45.scope: Deactivated successfully.
Feb 28 11:00:57 compute-0 ceph-mon[76304]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:57 compute-0 podman[392933]: 2026-02-28 11:00:57.243437715 +0000 UTC m=+0.067956395 container create 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 11:00:57 compute-0 systemd[1]: Started libpod-conmon-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope.
Feb 28 11:00:57 compute-0 podman[392933]: 2026-02-28 11:00:57.214467765 +0000 UTC m=+0.038986455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:00:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:00:57 compute-0 podman[392933]: 2026-02-28 11:00:57.342003557 +0000 UTC m=+0.166522267 container init 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 11:00:57 compute-0 podman[392933]: 2026-02-28 11:00:57.348722207 +0000 UTC m=+0.173240897 container start 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:00:57 compute-0 podman[392933]: 2026-02-28 11:00:57.352155064 +0000 UTC m=+0.176673764 container attach 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:00:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.905 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:00:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:00:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:00:58 compute-0 lvm[393028]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:00:58 compute-0 lvm[393028]: VG ceph_vg0 finished
Feb 28 11:00:58 compute-0 lvm[393029]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:00:58 compute-0 lvm[393029]: VG ceph_vg1 finished
Feb 28 11:00:58 compute-0 lvm[393031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:00:58 compute-0 lvm[393031]: VG ceph_vg2 finished
Feb 28 11:00:58 compute-0 tender_shannon[392950]: {}
Feb 28 11:00:58 compute-0 nova_compute[243452]: 2026-02-28 11:00:58.158 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:00:58 compute-0 systemd[1]: libpod-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Deactivated successfully.
Feb 28 11:00:58 compute-0 systemd[1]: libpod-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Consumed 1.251s CPU time.
Feb 28 11:00:58 compute-0 podman[392933]: 2026-02-28 11:00:58.198548284 +0000 UTC m=+1.023066964 container died 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:00:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6ef6a511ae2b457052021bd9115abf69f874db0379c6bb8afaf43f1d2fdb2c6-merged.mount: Deactivated successfully.
Feb 28 11:00:58 compute-0 podman[392933]: 2026-02-28 11:00:58.256179126 +0000 UTC m=+1.080697816 container remove 4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 11:00:58 compute-0 systemd[1]: libpod-conmon-4303422d9489f6a098bfe80c37f9158c30ddd1e3b5edc7cb591d1f9bfa2077e6.scope: Deactivated successfully.
Feb 28 11:00:58 compute-0 sudo[392854]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:00:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:00:58 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:58 compute-0 sudo[393049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:00:58 compute-0 sudo[393049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:00:58 compute-0 sudo[393049]: pam_unix(sudo:session): session closed for user root
Feb 28 11:00:59 compute-0 ceph-mon[76304]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:00:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:00:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:00:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:00 compute-0 nova_compute[243452]: 2026-02-28 11:01:00.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:01 compute-0 ceph-mon[76304]: pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:01 compute-0 nova_compute[243452]: 2026-02-28 11:01:01.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:01 compute-0 nova_compute[243452]: 2026-02-28 11:01:01.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:01 compute-0 CROND[393075]: (root) CMD (run-parts /etc/cron.hourly)
Feb 28 11:01:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:01 compute-0 run-parts[393078]: (/etc/cron.hourly) starting 0anacron
Feb 28 11:01:01 compute-0 run-parts[393084]: (/etc/cron.hourly) finished 0anacron
Feb 28 11:01:01 compute-0 CROND[393074]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 28 11:01:03 compute-0 ceph-mon[76304]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:03 compute-0 nova_compute[243452]: 2026-02-28 11:01:03.160 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:03 compute-0 nova_compute[243452]: 2026-02-28 11:01:03.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:03 compute-0 nova_compute[243452]: 2026-02-28 11:01:03.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:01:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:04 compute-0 nova_compute[243452]: 2026-02-28 11:01:04.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:04 compute-0 nova_compute[243452]: 2026-02-28 11:01:04.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:05 compute-0 ceph-mon[76304]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:07 compute-0 ceph-mon[76304]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:08 compute-0 nova_compute[243452]: 2026-02-28 11:01:08.163 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:09 compute-0 ceph-mon[76304]: pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:10 compute-0 nova_compute[243452]: 2026-02-28 11:01:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:10 compute-0 nova_compute[243452]: 2026-02-28 11:01:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:01:10 compute-0 nova_compute[243452]: 2026-02-28 11:01:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:01:10 compute-0 nova_compute[243452]: 2026-02-28 11:01:10.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:01:10 compute-0 nova_compute[243452]: 2026-02-28 11:01:10.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:11 compute-0 ceph-mon[76304]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.352 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.354 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:01:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:01:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209747722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:01:12 compute-0 nova_compute[243452]: 2026-02-28 11:01:12.929 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.123 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.987361152656376GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.124 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:01:13 compute-0 ceph-mon[76304]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1209747722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.165 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.186 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.187 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.208 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:01:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:01:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127938045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.740 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.745 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.764 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.766 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:01:13 compute-0 nova_compute[243452]: 2026-02-28 11:01:13.766 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:01:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1127938045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:01:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:15 compute-0 ceph-mon[76304]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:17 compute-0 ceph-mon[76304]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:18 compute-0 nova_compute[243452]: 2026-02-28 11:01:18.166 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:19 compute-0 podman[393130]: 2026-02-28 11:01:19.145934236 +0000 UTC m=+0.076265291 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 11:01:19 compute-0 podman[393129]: 2026-02-28 11:01:19.181279257 +0000 UTC m=+0.111662694 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 28 11:01:19 compute-0 ceph-mon[76304]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:21 compute-0 ceph-mon[76304]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:23 compute-0 nova_compute[243452]: 2026-02-28 11:01:23.169 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:23 compute-0 ceph-mon[76304]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:25 compute-0 ceph-mon[76304]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:27 compute-0 ceph-mon[76304]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:28 compute-0 nova_compute[243452]: 2026-02-28 11:01:28.170 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:01:29
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:01:29 compute-0 ceph-mon[76304]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:30 compute-0 ceph-mon[76304]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:01:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:01:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:32 compute-0 ceph-mon[76304]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:33 compute-0 nova_compute[243452]: 2026-02-28 11:01:33.172 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:34 compute-0 ceph-mon[76304]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:36 compute-0 ceph-mon[76304]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:38 compute-0 nova_compute[243452]: 2026-02-28 11:01:38.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:38 compute-0 ceph-mon[76304]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:40 compute-0 ceph-mon[76304]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.533795142069749e-05 of space, bias 1.0, pg target 0.004601385426209247 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006706137252582069 of space, bias 1.0, pg target 0.20118411757746207 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.261232899107364e-07 of space, bias 4.0, pg target 0.0008713479478928837 quantized to 16 (current 16)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:01:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Feb 28 11:01:42 compute-0 ceph-mon[76304]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 28 11:01:42 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Feb 28 11:01:42 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Feb 28 11:01:43 compute-0 nova_compute[243452]: 2026-02-28 11:01:43.176 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 33 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 818 B/s wr, 5 op/s
Feb 28 11:01:43 compute-0 ceph-mon[76304]: osdmap e301: 3 total, 3 up, 3 in
Feb 28 11:01:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:44 compute-0 ceph-mon[76304]: pgmap v2938: 305 pgs: 305 active+clean; 33 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 818 B/s wr, 5 op/s
Feb 28 11:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:01:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:01:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:01:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Feb 28 11:01:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Feb 28 11:01:45 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Feb 28 11:01:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:01:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/863113695' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:01:46 compute-0 ceph-mon[76304]: pgmap v2939: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:01:46 compute-0 ceph-mon[76304]: osdmap e302: 3 total, 3 up, 3 in
Feb 28 11:01:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 11:01:48 compute-0 nova_compute[243452]: 2026-02-28 11:01:48.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:48 compute-0 ceph-mon[76304]: pgmap v2941: 305 pgs: 305 active+clean; 21 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 11:01:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Feb 28 11:01:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Feb 28 11:01:49 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Feb 28 11:01:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 4.9 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.1 KiB/s wr, 71 op/s
Feb 28 11:01:50 compute-0 podman[393176]: 2026-02-28 11:01:50.150025595 +0000 UTC m=+0.080034208 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 11:01:50 compute-0 podman[393175]: 2026-02-28 11:01:50.188933017 +0000 UTC m=+0.124313322 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:01:50 compute-0 ceph-mon[76304]: osdmap e303: 3 total, 3 up, 3 in
Feb 28 11:01:50 compute-0 ceph-mon[76304]: pgmap v2943: 305 pgs: 305 active+clean; 4.9 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.1 KiB/s wr, 71 op/s
Feb 28 11:01:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Feb 28 11:01:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Feb 28 11:01:51 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Feb 28 11:01:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.0 KiB/s wr, 44 op/s
Feb 28 11:01:52 compute-0 ceph-mon[76304]: osdmap e304: 3 total, 3 up, 3 in
Feb 28 11:01:52 compute-0 ceph-mon[76304]: pgmap v2945: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.0 KiB/s wr, 44 op/s
Feb 28 11:01:53 compute-0 nova_compute[243452]: 2026-02-28 11:01:53.180 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 11:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Feb 28 11:01:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Feb 28 11:01:54 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Feb 28 11:01:54 compute-0 ceph-mon[76304]: pgmap v2946: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 33 op/s
Feb 28 11:01:54 compute-0 ceph-mon[76304]: osdmap e305: 3 total, 3 up, 3 in
Feb 28 11:01:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 MiB/s wr, 54 op/s
Feb 28 11:01:56 compute-0 ceph-mon[76304]: pgmap v2948: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 MiB/s wr, 54 op/s
Feb 28 11:01:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Feb 28 11:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.906 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:01:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:01:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.183 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:01:58 compute-0 nova_compute[243452]: 2026-02-28 11:01:58.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:01:58 compute-0 sudo[393217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:01:58 compute-0 sudo[393217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:01:58 compute-0 sudo[393217]: pam_unix(sudo:session): session closed for user root
Feb 28 11:01:58 compute-0 sudo[393242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:01:58 compute-0 sudo[393242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:01:58 compute-0 ceph-mon[76304]: pgmap v2949: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Feb 28 11:01:59 compute-0 sudo[393242]: pam_unix(sudo:session): session closed for user root
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:01:59 compute-0 sudo[393298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:01:59 compute-0 sudo[393298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:01:59 compute-0 sudo[393298]: pam_unix(sudo:session): session closed for user root
Feb 28 11:01:59 compute-0 sudo[393323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:01:59 compute-0 sudo[393323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.517220111 +0000 UTC m=+0.051742556 container create a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 11:01:59 compute-0 systemd[1]: Started libpod-conmon-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope.
Feb 28 11:01:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.490535326 +0000 UTC m=+0.025057791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.589114128 +0000 UTC m=+0.123636603 container init a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.596773725 +0000 UTC m=+0.131296170 container start a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.600468649 +0000 UTC m=+0.134991114 container attach a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 11:01:59 compute-0 quizzical_nobel[393377]: 167 167
Feb 28 11:01:59 compute-0 systemd[1]: libpod-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope: Deactivated successfully.
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.6050836 +0000 UTC m=+0.139606055 container died a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-526eece268e7da4fc04b89c242173b1c1cc0b507fc75d940df09901959e77105-merged.mount: Deactivated successfully.
Feb 28 11:01:59 compute-0 podman[393360]: 2026-02-28 11:01:59.646292637 +0000 UTC m=+0.180815082 container remove a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:01:59 compute-0 systemd[1]: libpod-conmon-a66eddac9745a193e000f608d3182a799a565dcbf07ed3d6795029de3b37e8fc.scope: Deactivated successfully.
Feb 28 11:01:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 2.4 MiB/s wr, 12 op/s
Feb 28 11:01:59 compute-0 podman[393402]: 2026-02-28 11:01:59.774487188 +0000 UTC m=+0.030621079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:01:59 compute-0 podman[393402]: 2026-02-28 11:01:59.905592441 +0000 UTC m=+0.161726272 container create 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:01:59 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:01:59 compute-0 systemd[1]: Started libpod-conmon-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope.
Feb 28 11:01:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:00 compute-0 podman[393402]: 2026-02-28 11:02:00.004908163 +0000 UTC m=+0.261042044 container init 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:02:00 compute-0 podman[393402]: 2026-02-28 11:02:00.010523092 +0000 UTC m=+0.266656923 container start 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:02:00 compute-0 podman[393402]: 2026-02-28 11:02:00.014197046 +0000 UTC m=+0.270330967 container attach 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:00 compute-0 distracted_feynman[393418]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:02:00 compute-0 distracted_feynman[393418]: --> All data devices are unavailable
Feb 28 11:02:00 compute-0 systemd[1]: libpod-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope: Deactivated successfully.
Feb 28 11:02:00 compute-0 podman[393402]: 2026-02-28 11:02:00.491135334 +0000 UTC m=+0.747269205 container died 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 28 11:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6d22eb08c90f60da37e13995b14506e215fa3775574a4fbb09eb791c716d78a-merged.mount: Deactivated successfully.
Feb 28 11:02:00 compute-0 podman[393402]: 2026-02-28 11:02:00.538996639 +0000 UTC m=+0.795130510 container remove 8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 11:02:00 compute-0 systemd[1]: libpod-conmon-8e76992c67b2c0d23e878b09fba18417dcd95292d535d0d08368f6ebcdc53992.scope: Deactivated successfully.
Feb 28 11:02:00 compute-0 sudo[393323]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:00 compute-0 sudo[393451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:02:00 compute-0 sudo[393451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:02:00 compute-0 sudo[393451]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:00 compute-0 sudo[393476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:02:00 compute-0 sudo[393476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:02:00 compute-0 podman[393513]: 2026-02-28 11:02:00.975916733 +0000 UTC m=+0.046481877 container create 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 11:02:01 compute-0 systemd[1]: Started libpod-conmon-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope.
Feb 28 11:02:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:00.953527069 +0000 UTC m=+0.024092243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:01.050319761 +0000 UTC m=+0.120884925 container init 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:01.056120225 +0000 UTC m=+0.126685369 container start 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:02:01 compute-0 elastic_dewdney[393530]: 167 167
Feb 28 11:02:01 compute-0 systemd[1]: libpod-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope: Deactivated successfully.
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:01.059409388 +0000 UTC m=+0.129974552 container attach 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:01.05983765 +0000 UTC m=+0.130402794 container died 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-38cd46eceb73a02471b5dcc4c5dfca3d51546c801ba3f6ccbc13ba9e62fad3f5-merged.mount: Deactivated successfully.
Feb 28 11:02:01 compute-0 podman[393513]: 2026-02-28 11:02:01.090246321 +0000 UTC m=+0.160811465 container remove 05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 11:02:01 compute-0 systemd[1]: libpod-conmon-05c0f1b5a7b9ab7a92afa97ceb1bf6c715a2cbe279ea973023f6dbebd890ea59.scope: Deactivated successfully.
Feb 28 11:02:01 compute-0 ceph-mon[76304]: pgmap v2950: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 2.4 MiB/s wr, 12 op/s
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.195958255 +0000 UTC m=+0.033361946 container create ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 28 11:02:01 compute-0 systemd[1]: Started libpod-conmon-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope.
Feb 28 11:02:01 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.181413823 +0000 UTC m=+0.018817534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.280497109 +0000 UTC m=+0.117900840 container init ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.286485639 +0000 UTC m=+0.123889340 container start ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.292481029 +0000 UTC m=+0.129884750 container attach ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]: {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     "0": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "devices": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "/dev/loop3"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             ],
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_name": "ceph_lv0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_size": "21470642176",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "name": "ceph_lv0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "tags": {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_name": "ceph",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.crush_device_class": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.encrypted": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.objectstore": "bluestore",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_id": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.vdo": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.with_tpm": "0"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             },
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "vg_name": "ceph_vg0"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         }
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     ],
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     "1": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "devices": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "/dev/loop4"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             ],
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_name": "ceph_lv1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_size": "21470642176",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "name": "ceph_lv1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "tags": {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_name": "ceph",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.crush_device_class": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.encrypted": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.objectstore": "bluestore",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_id": "1",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.vdo": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.with_tpm": "0"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             },
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "vg_name": "ceph_vg1"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         }
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     ],
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     "2": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "devices": [
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "/dev/loop5"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             ],
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_name": "ceph_lv2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_size": "21470642176",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "name": "ceph_lv2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "tags": {
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.cluster_name": "ceph",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.crush_device_class": "",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.encrypted": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.objectstore": "bluestore",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osd_id": "2",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.vdo": "0",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:                 "ceph.with_tpm": "0"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             },
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "type": "block",
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:             "vg_name": "ceph_vg2"
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:         }
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]:     ]
Feb 28 11:02:01 compute-0 dreamy_bhaskara[393571]: }
Feb 28 11:02:01 compute-0 systemd[1]: libpod-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope: Deactivated successfully.
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.582906234 +0000 UTC m=+0.420309935 container died ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c40e8a10a868fc21e8ca683a451250ac66432c7535197be6f6e20c22955e7457-merged.mount: Deactivated successfully.
Feb 28 11:02:01 compute-0 podman[393554]: 2026-02-28 11:02:01.627671952 +0000 UTC m=+0.465075683 container remove ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:02:01 compute-0 systemd[1]: libpod-conmon-ada6257be9315917ecb5e7eb33981e153936e362b7f2b5dd3f32da780d4803a6.scope: Deactivated successfully.
Feb 28 11:02:01 compute-0 sudo[393476]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:01 compute-0 sudo[393592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:02:01 compute-0 sudo[393592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:02:01 compute-0 sudo[393592]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:01 compute-0 sudo[393617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:02:01 compute-0 sudo[393617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:02:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.058769681 +0000 UTC m=+0.042328830 container create 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:02:02 compute-0 systemd[1]: Started libpod-conmon-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope.
Feb 28 11:02:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.123898846 +0000 UTC m=+0.107458055 container init 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.130034779 +0000 UTC m=+0.113593948 container start 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:02:02 compute-0 bold_noyce[393670]: 167 167
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.134687051 +0000 UTC m=+0.118246220 container attach 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:02:02 compute-0 systemd[1]: libpod-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope: Deactivated successfully.
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.135421982 +0000 UTC m=+0.118981151 container died 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.041298016 +0000 UTC m=+0.024857165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2a53729a962318a4293b3d3086fbd873d94c40d56e3fbae8aed017e97196d7-merged.mount: Deactivated successfully.
Feb 28 11:02:02 compute-0 podman[393654]: 2026-02-28 11:02:02.184621234 +0000 UTC m=+0.168180403 container remove 38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noyce, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:02:02 compute-0 systemd[1]: libpod-conmon-38ad56b42213c86e85b201ec07c508250baed09fc03ef4666753d810da53484b.scope: Deactivated successfully.
Feb 28 11:02:02 compute-0 podman[393694]: 2026-02-28 11:02:02.362566974 +0000 UTC m=+0.054018911 container create f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 11:02:02 compute-0 systemd[1]: Started libpod-conmon-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope.
Feb 28 11:02:02 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:02 compute-0 podman[393694]: 2026-02-28 11:02:02.341617631 +0000 UTC m=+0.033069618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:02:02 compute-0 podman[393694]: 2026-02-28 11:02:02.470895362 +0000 UTC m=+0.162347299 container init f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 11:02:02 compute-0 podman[393694]: 2026-02-28 11:02:02.481197874 +0000 UTC m=+0.172649801 container start f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:02:02 compute-0 podman[393694]: 2026-02-28 11:02:02.485621329 +0000 UTC m=+0.177073266 container attach f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:02:02 compute-0 nova_compute[243452]: 2026-02-28 11:02:02.767 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:02 compute-0 nova_compute[243452]: 2026-02-28 11:02:02.768 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:03 compute-0 lvm[393790]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:02:03 compute-0 lvm[393787]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:02:03 compute-0 lvm[393790]: VG ceph_vg2 finished
Feb 28 11:02:03 compute-0 lvm[393787]: VG ceph_vg0 finished
Feb 28 11:02:03 compute-0 ceph-mon[76304]: pgmap v2951: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 11:02:03 compute-0 lvm[393791]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:02:03 compute-0 lvm[393791]: VG ceph_vg1 finished
Feb 28 11:02:03 compute-0 nova_compute[243452]: 2026-02-28 11:02:03.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:03 compute-0 epic_grothendieck[393710]: {}
Feb 28 11:02:03 compute-0 systemd[1]: libpod-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Deactivated successfully.
Feb 28 11:02:03 compute-0 systemd[1]: libpod-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Consumed 1.108s CPU time.
Feb 28 11:02:03 compute-0 podman[393694]: 2026-02-28 11:02:03.28257996 +0000 UTC m=+0.974031927 container died f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 11:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dede4d42dcf575d9b9a06c4271fea3eed29656a94fb204ef8535405dad0e7c4-merged.mount: Deactivated successfully.
Feb 28 11:02:03 compute-0 nova_compute[243452]: 2026-02-28 11:02:03.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:03 compute-0 podman[393694]: 2026-02-28 11:02:03.334602903 +0000 UTC m=+1.026054870 container remove f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 28 11:02:03 compute-0 systemd[1]: libpod-conmon-f9aa2258c517a10dc2614c1507f4d0060c0ebb313091229f562d9cb600547037.scope: Deactivated successfully.
Feb 28 11:02:03 compute-0 sudo[393617]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:02:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:02:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:02:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:02:03 compute-0 sudo[393808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:02:03 compute-0 sudo[393808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:02:03 compute-0 sudo[393808]: pam_unix(sudo:session): session closed for user root
Feb 28 11:02:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 11:02:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:04 compute-0 nova_compute[243452]: 2026-02-28 11:02:04.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:02:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:02:04 compute-0 ceph-mon[76304]: pgmap v2952: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Feb 28 11:02:05 compute-0 nova_compute[243452]: 2026-02-28 11:02:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:05 compute-0 nova_compute[243452]: 2026-02-28 11:02:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:05 compute-0 nova_compute[243452]: 2026-02-28 11:02:05.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:02:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.8 MiB/s wr, 8 op/s
Feb 28 11:02:06 compute-0 ceph-mon[76304]: pgmap v2953: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.8 MiB/s wr, 8 op/s
Feb 28 11:02:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.187 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.189 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:02:08 compute-0 nova_compute[243452]: 2026-02-28 11:02:08.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:08 compute-0 ceph-mon[76304]: pgmap v2954: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:10 compute-0 nova_compute[243452]: 2026-02-28 11:02:10.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:10 compute-0 nova_compute[243452]: 2026-02-28 11:02:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:02:10 compute-0 nova_compute[243452]: 2026-02-28 11:02:10.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:02:10 compute-0 nova_compute[243452]: 2026-02-28 11:02:10.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:02:10 compute-0 ceph-mon[76304]: pgmap v2955: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:12 compute-0 nova_compute[243452]: 2026-02-28 11:02:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:12 compute-0 ceph-mon[76304]: pgmap v2956: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:13 compute-0 nova_compute[243452]: 2026-02-28 11:02:13.191 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:13 compute-0 nova_compute[243452]: 2026-02-28 11:02:13.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.342 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.343 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.344 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:02:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:02:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583085115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:02:14 compute-0 nova_compute[243452]: 2026-02-28 11:02:14.875 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:02:14 compute-0 ceph-mon[76304]: pgmap v2957: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/583085115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.034 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3599MB free_disk=59.987355089746416GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.036 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.112 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.113 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.135 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:02:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:02:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777926260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.630 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.637 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.654 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.656 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:02:15 compute-0 nova_compute[243452]: 2026-02-28 11:02:15.656 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:02:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/777926260' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:02:16 compute-0 ceph-mon[76304]: pgmap v2958: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:18 compute-0 nova_compute[243452]: 2026-02-28 11:02:18.192 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:18 compute-0 ceph-mon[76304]: pgmap v2959: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:20 compute-0 ceph-mon[76304]: pgmap v2960: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:21 compute-0 podman[393878]: 2026-02-28 11:02:21.127971747 +0000 UTC m=+0.057546851 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 11:02:21 compute-0 podman[393877]: 2026-02-28 11:02:21.186946967 +0000 UTC m=+0.116664035 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:02:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:22 compute-0 ceph-mon[76304]: pgmap v2961: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:23 compute-0 nova_compute[243452]: 2026-02-28 11:02:23.194 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:23 compute-0 nova_compute[243452]: 2026-02-28 11:02:23.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:24 compute-0 ceph-mon[76304]: pgmap v2962: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:26 compute-0 ceph-mon[76304]: pgmap v2963: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:28 compute-0 nova_compute[243452]: 2026-02-28 11:02:28.196 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:28 compute-0 ceph-mon[76304]: pgmap v2964: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:02:29
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms']
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:02:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:02:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:02:30 compute-0 ceph-mon[76304]: pgmap v2965: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:02:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:32 compute-0 ceph-mon[76304]: pgmap v2966: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:33 compute-0 nova_compute[243452]: 2026-02-28 11:02:33.199 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:34 compute-0 ceph-mon[76304]: pgmap v2967: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:35 compute-0 nova_compute[243452]: 2026-02-28 11:02:35.282 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:37 compute-0 ceph-mon[76304]: pgmap v2968: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:37 compute-0 nova_compute[243452]: 2026-02-28 11:02:37.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:37 compute-0 nova_compute[243452]: 2026-02-28 11:02:37.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 11:02:37 compute-0 nova_compute[243452]: 2026-02-28 11:02:37.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 11:02:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:38 compute-0 nova_compute[243452]: 2026-02-28 11:02:38.201 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:39 compute-0 ceph-mon[76304]: pgmap v2969: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:41 compute-0 ceph-mon[76304]: pgmap v2970: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5439019659933745e-05 of space, bias 1.0, pg target 0.004631705897980123 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033777955687355776 of space, bias 1.0, pg target 0.10133386706206733 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:02:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Feb 28 11:02:43 compute-0 ceph-mon[76304]: pgmap v2971: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:43 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Feb 28 11:02:43 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Feb 28 11:02:43 compute-0 nova_compute[243452]: 2026-02-28 11:02:43.203 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Feb 28 11:02:44 compute-0 ceph-mon[76304]: osdmap e306: 3 total, 3 up, 3 in
Feb 28 11:02:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:44 compute-0 nova_compute[243452]: 2026-02-28 11:02:44.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:02:44 compute-0 nova_compute[243452]: 2026-02-28 11:02:44.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 11:02:45 compute-0 ceph-mon[76304]: pgmap v2973: 305 pgs: 305 active+clean; 21 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 307 B/s wr, 1 op/s
Feb 28 11:02:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:02:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:02:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:02:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:02:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:02:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:02:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3227995318' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:02:47 compute-0 ceph-mon[76304]: pgmap v2974: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:02:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:02:48 compute-0 nova_compute[243452]: 2026-02-28 11:02:48.204 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:49 compute-0 ceph-mon[76304]: pgmap v2975: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 28 11:02:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Feb 28 11:02:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Feb 28 11:02:49 compute-0 ceph-mon[76304]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Feb 28 11:02:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 11:02:50 compute-0 ceph-mon[76304]: osdmap e307: 3 total, 3 up, 3 in
Feb 28 11:02:50 compute-0 ceph-mon[76304]: pgmap v2977: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 28 11:02:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Feb 28 11:02:52 compute-0 podman[393920]: 2026-02-28 11:02:52.114782815 +0000 UTC m=+0.052210360 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 11:02:52 compute-0 podman[393919]: 2026-02-28 11:02:52.134965297 +0000 UTC m=+0.074122401 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 11:02:52 compute-0 ceph-mon[76304]: pgmap v2978: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Feb 28 11:02:53 compute-0 nova_compute[243452]: 2026-02-28 11:02:53.206 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:02:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Feb 28 11:02:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:54 compute-0 ceph-mon[76304]: pgmap v2979: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Feb 28 11:02:55 compute-0 sshd-session[393961]: Invalid user sol from 45.148.10.240 port 49682
Feb 28 11:02:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:55 compute-0 sshd-session[393961]: Connection closed by invalid user sol 45.148.10.240 port 49682 [preauth]
Feb 28 11:02:56 compute-0 ceph-mon[76304]: pgmap v2980: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.907 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:02:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:02:57.908 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:02:58 compute-0 nova_compute[243452]: 2026-02-28 11:02:58.209 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:02:58 compute-0 ceph-mon[76304]: pgmap v2981: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:02:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:02:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:00 compute-0 ceph-mon[76304]: pgmap v2982: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:02 compute-0 ceph-mon[76304]: pgmap v2983: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:03 compute-0 nova_compute[243452]: 2026-02-28 11:03:03.210 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:03 compute-0 nova_compute[243452]: 2026-02-28 11:03:03.329 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:03 compute-0 nova_compute[243452]: 2026-02-28 11:03:03.330 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:03 compute-0 sudo[393963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:03:03 compute-0 sudo[393963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:03 compute-0 sudo[393963]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:03 compute-0 sudo[393988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 11:03:03 compute-0 sudo[393988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:04 compute-0 podman[394057]: 2026-02-28 11:03:04.114114503 +0000 UTC m=+0.086577253 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 11:03:04 compute-0 podman[394057]: 2026-02-28 11:03:04.260354935 +0000 UTC m=+0.232817615 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:03:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:04 compute-0 nova_compute[243452]: 2026-02-28 11:03:04.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:04 compute-0 ceph-mon[76304]: pgmap v2984: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:05 compute-0 sudo[393988]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:05 compute-0 sudo[394243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:03:05 compute-0 sudo[394243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:05 compute-0 sudo[394243]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:05 compute-0 sudo[394268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:03:05 compute-0 sudo[394268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:05 compute-0 sudo[394268]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:03:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:03:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:05 compute-0 sudo[394324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:03:05 compute-0 sudo[394324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:05 compute-0 sudo[394324]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:05 compute-0 sudo[394349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:03:05 compute-0 sudo[394349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:03:06 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.128275196 +0000 UTC m=+0.058711783 container create 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:03:06 compute-0 systemd[1]: Started libpod-conmon-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope.
Feb 28 11:03:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.103565017 +0000 UTC m=+0.034001684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.214880929 +0000 UTC m=+0.145317526 container init 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.223508434 +0000 UTC m=+0.153945011 container start 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.227811945 +0000 UTC m=+0.158248542 container attach 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 11:03:06 compute-0 focused_gould[394402]: 167 167
Feb 28 11:03:06 compute-0 systemd[1]: libpod-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope: Deactivated successfully.
Feb 28 11:03:06 compute-0 conmon[394402]: conmon 186f9f7c02b12b3b05bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope/container/memory.events
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.232873119 +0000 UTC m=+0.163309736 container died 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 11:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99df1c1d5805059cd6e7b8e3c3a6d5e420706557012b7851a34593f43958f30-merged.mount: Deactivated successfully.
Feb 28 11:03:06 compute-0 podman[394386]: 2026-02-28 11:03:06.280894189 +0000 UTC m=+0.211330806 container remove 186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 28 11:03:06 compute-0 systemd[1]: libpod-conmon-186f9f7c02b12b3b05bd2295bbb22adcd6dc30136bd2b8110030ecf59985ff5d.scope: Deactivated successfully.
Feb 28 11:03:06 compute-0 nova_compute[243452]: 2026-02-28 11:03:06.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:06 compute-0 nova_compute[243452]: 2026-02-28 11:03:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:06 compute-0 nova_compute[243452]: 2026-02-28 11:03:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:06 compute-0 nova_compute[243452]: 2026-02-28 11:03:06.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:03:06 compute-0 podman[394427]: 2026-02-28 11:03:06.454082954 +0000 UTC m=+0.069458058 container create 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 11:03:06 compute-0 systemd[1]: Started libpod-conmon-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope.
Feb 28 11:03:06 compute-0 podman[394427]: 2026-02-28 11:03:06.424611379 +0000 UTC m=+0.039986543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:06 compute-0 podman[394427]: 2026-02-28 11:03:06.553684395 +0000 UTC m=+0.169059479 container init 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:03:06 compute-0 podman[394427]: 2026-02-28 11:03:06.569388259 +0000 UTC m=+0.184763333 container start 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 11:03:06 compute-0 podman[394427]: 2026-02-28 11:03:06.573552287 +0000 UTC m=+0.188927361 container attach 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:03:07 compute-0 tender_mccarthy[394443]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:03:07 compute-0 tender_mccarthy[394443]: --> All data devices are unavailable
Feb 28 11:03:07 compute-0 systemd[1]: libpod-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope: Deactivated successfully.
Feb 28 11:03:07 compute-0 podman[394427]: 2026-02-28 11:03:07.060471726 +0000 UTC m=+0.675846840 container died 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 11:03:07 compute-0 ceph-mon[76304]: pgmap v2985: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8149b186426fa227bef20414b5cea89dac5e3e3d2da31ecf10ddf3c79cc41b08-merged.mount: Deactivated successfully.
Feb 28 11:03:07 compute-0 podman[394427]: 2026-02-28 11:03:07.104813012 +0000 UTC m=+0.720188076 container remove 3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:03:07 compute-0 systemd[1]: libpod-conmon-3b41d73c7473c2958ee765c3921de01145d134feb96a0633c8f55ac02bf7ace8.scope: Deactivated successfully.
Feb 28 11:03:07 compute-0 sudo[394349]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:07 compute-0 sudo[394478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:03:07 compute-0 sudo[394478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:07 compute-0 sudo[394478]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:07 compute-0 sudo[394503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:03:07 compute-0 sudo[394503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.558157001 +0000 UTC m=+0.052970031 container create 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 28 11:03:07 compute-0 systemd[1]: Started libpod-conmon-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope.
Feb 28 11:03:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.531687142 +0000 UTC m=+0.026500272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.64036572 +0000 UTC m=+0.135178790 container init 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.646683878 +0000 UTC m=+0.141496908 container start 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.649852138 +0000 UTC m=+0.144665198 container attach 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:03:07 compute-0 beautiful_clarke[394556]: 167 167
Feb 28 11:03:07 compute-0 systemd[1]: libpod-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope: Deactivated successfully.
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.653549613 +0000 UTC m=+0.148362653 container died 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 11:03:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-65996b36587bf5938fc687a45c38e162d4983eb030c0f0e5f99a15bb7544e2f7-merged.mount: Deactivated successfully.
Feb 28 11:03:07 compute-0 podman[394540]: 2026-02-28 11:03:07.690990323 +0000 UTC m=+0.185803373 container remove 55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 11:03:07 compute-0 systemd[1]: libpod-conmon-55ace20f42aaf263532513d2c08efcd7f79a16d3bc3870e5d560cd9b92a96a62.scope: Deactivated successfully.
Feb 28 11:03:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:07 compute-0 podman[394580]: 2026-02-28 11:03:07.864416615 +0000 UTC m=+0.047221118 container create 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:03:07 compute-0 systemd[1]: Started libpod-conmon-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope.
Feb 28 11:03:07 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:07 compute-0 podman[394580]: 2026-02-28 11:03:07.839009485 +0000 UTC m=+0.021814028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:07 compute-0 podman[394580]: 2026-02-28 11:03:07.949659149 +0000 UTC m=+0.132463672 container init 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:03:07 compute-0 podman[394580]: 2026-02-28 11:03:07.960112505 +0000 UTC m=+0.142916998 container start 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 11:03:07 compute-0 podman[394580]: 2026-02-28 11:03:07.963533922 +0000 UTC m=+0.146338415 container attach 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:03:08 compute-0 hardcore_allen[394596]: {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     "0": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "devices": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "/dev/loop3"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             ],
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_name": "ceph_lv0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_size": "21470642176",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "name": "ceph_lv0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "tags": {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_name": "ceph",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.crush_device_class": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.encrypted": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.objectstore": "bluestore",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_id": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.vdo": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.with_tpm": "0"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             },
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "vg_name": "ceph_vg0"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         }
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     ],
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     "1": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "devices": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "/dev/loop4"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             ],
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_name": "ceph_lv1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_size": "21470642176",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "name": "ceph_lv1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "tags": {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_name": "ceph",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.crush_device_class": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.encrypted": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.objectstore": "bluestore",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_id": "1",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.vdo": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.with_tpm": "0"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             },
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "vg_name": "ceph_vg1"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         }
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     ],
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     "2": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "devices": [
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "/dev/loop5"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             ],
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_name": "ceph_lv2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_size": "21470642176",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "name": "ceph_lv2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "tags": {
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.cluster_name": "ceph",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.crush_device_class": "",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.encrypted": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.objectstore": "bluestore",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osd_id": "2",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.vdo": "0",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:                 "ceph.with_tpm": "0"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             },
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "type": "block",
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:             "vg_name": "ceph_vg2"
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:         }
Feb 28 11:03:08 compute-0 hardcore_allen[394596]:     ]
Feb 28 11:03:08 compute-0 hardcore_allen[394596]: }
Feb 28 11:03:08 compute-0 nova_compute[243452]: 2026-02-28 11:03:08.212 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:08 compute-0 systemd[1]: libpod-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope: Deactivated successfully.
Feb 28 11:03:08 compute-0 podman[394580]: 2026-02-28 11:03:08.252696012 +0000 UTC m=+0.435500545 container died 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-340e7adbe16cdb65822cf762a4d20168816e0e84a09af3cb63ae332cdf6c71db-merged.mount: Deactivated successfully.
Feb 28 11:03:08 compute-0 podman[394580]: 2026-02-28 11:03:08.308007338 +0000 UTC m=+0.490811871 container remove 70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 11:03:08 compute-0 systemd[1]: libpod-conmon-70ed2c667b5905eb337e6861d3b9052a88b8a9054799572fc9fa8d940830374f.scope: Deactivated successfully.
Feb 28 11:03:08 compute-0 sudo[394503]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:08 compute-0 sudo[394618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:03:08 compute-0 sudo[394618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:08 compute-0 sudo[394618]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:08 compute-0 sudo[394643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:03:08 compute-0 sudo[394643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.883557668 +0000 UTC m=+0.066962547 container create 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:03:08 compute-0 systemd[1]: Started libpod-conmon-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope.
Feb 28 11:03:08 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.854584218 +0000 UTC m=+0.037989147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.957595225 +0000 UTC m=+0.141000104 container init 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.963027189 +0000 UTC m=+0.146432038 container start 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.965981173 +0000 UTC m=+0.149386032 container attach 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 11:03:08 compute-0 nifty_engelbart[394696]: 167 167
Feb 28 11:03:08 compute-0 systemd[1]: libpod-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope: Deactivated successfully.
Feb 28 11:03:08 compute-0 podman[394680]: 2026-02-28 11:03:08.970425388 +0000 UTC m=+0.153830267 container died 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 11:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-af09bf33e3ca87c5f579dea82fd7dc303e46de2b6b2e79db0b29bcd7fdf7fccd-merged.mount: Deactivated successfully.
Feb 28 11:03:09 compute-0 podman[394680]: 2026-02-28 11:03:09.010946826 +0000 UTC m=+0.194351675 container remove 799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:03:09 compute-0 systemd[1]: libpod-conmon-799c29a39db4c1e2d2a0e7566ebb0be5546b03989932746b0792e982c1e58e23.scope: Deactivated successfully.
Feb 28 11:03:09 compute-0 ceph-mon[76304]: pgmap v2986: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:09 compute-0 podman[394718]: 2026-02-28 11:03:09.161190471 +0000 UTC m=+0.046321183 container create e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 11:03:09 compute-0 systemd[1]: Started libpod-conmon-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope.
Feb 28 11:03:09 compute-0 podman[394718]: 2026-02-28 11:03:09.14244325 +0000 UTC m=+0.027573722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:03:09 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:03:09 compute-0 podman[394718]: 2026-02-28 11:03:09.266120633 +0000 UTC m=+0.151251165 container init e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:03:09 compute-0 podman[394718]: 2026-02-28 11:03:09.273176343 +0000 UTC m=+0.158306795 container start e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:03:09 compute-0 podman[394718]: 2026-02-28 11:03:09.276709383 +0000 UTC m=+0.161839925 container attach e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:03:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:09 compute-0 lvm[394812]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:03:09 compute-0 lvm[394813]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:03:09 compute-0 lvm[394813]: VG ceph_vg0 finished
Feb 28 11:03:09 compute-0 lvm[394812]: VG ceph_vg1 finished
Feb 28 11:03:09 compute-0 lvm[394815]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:03:09 compute-0 lvm[394815]: VG ceph_vg2 finished
Feb 28 11:03:10 compute-0 sharp_morse[394734]: {}
Feb 28 11:03:10 compute-0 systemd[1]: libpod-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Deactivated successfully.
Feb 28 11:03:10 compute-0 systemd[1]: libpod-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Consumed 1.200s CPU time.
Feb 28 11:03:10 compute-0 podman[394718]: 2026-02-28 11:03:10.057265349 +0000 UTC m=+0.942395821 container died e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 11:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9311cb1adc9d776c7d5d5251f580a57ea668c381a1510a693382467e3c63db97-merged.mount: Deactivated successfully.
Feb 28 11:03:10 compute-0 podman[394718]: 2026-02-28 11:03:10.110436385 +0000 UTC m=+0.995566867 container remove e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:03:10 compute-0 systemd[1]: libpod-conmon-e1441b45b0d800290d25bd794f1e5947291dfd2358f9e572a6a7b937af741bd5.scope: Deactivated successfully.
Feb 28 11:03:10 compute-0 sudo[394643]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:03:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:03:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:10 compute-0 sudo[394829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:03:10 compute-0 sudo[394829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:03:10 compute-0 sudo[394829]: pam_unix(sudo:session): session closed for user root
Feb 28 11:03:10 compute-0 nova_compute[243452]: 2026-02-28 11:03:10.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:10 compute-0 nova_compute[243452]: 2026-02-28 11:03:10.320 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:03:10 compute-0 nova_compute[243452]: 2026-02-28 11:03:10.320 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:03:10 compute-0 nova_compute[243452]: 2026-02-28 11:03:10.404 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:03:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:03:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1368 writes, 5913 keys, 1368 commit groups, 1.0 writes per commit group, ingest: 8.93 MB, 0.01 MB/s
                                           Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.1      0.89              0.20        44    0.020       0      0       0.0       0.0
                                             L6      1/0    9.18 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.1    167.5    142.8      2.66              1.06        43    0.062    278K    23K       0.0       0.0
                                            Sum      1/0    9.18 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.1    125.5    128.1      3.55              1.26        87    0.041    278K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.4    183.7    182.9      0.25              0.15         8    0.031     33K   2043       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    167.5    142.8      2.66              1.06        43    0.062    278K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     84.5      0.88              0.20        43    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.073, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.44 GB write, 0.08 MB/s write, 0.43 GB read, 0.08 MB/s read, 3.5 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 49.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000762 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3122,47.78 MB,15.717%) FilterBlock(88,773.92 KB,0.248613%) IndexBlock(88,1.26 MB,0.415114%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 11:03:11 compute-0 ceph-mon[76304]: pgmap v2987: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:03:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:13 compute-0 ceph-mon[76304]: pgmap v2988: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.218 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.259 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:03:13 compute-0 nova_compute[243452]: 2026-02-28 11:03:13.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.340 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.341 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:03:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:03:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445608344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:03:14 compute-0 nova_compute[243452]: 2026-02-28 11:03:14.864 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.054 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.056 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.057 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.057 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.152 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.152 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.170 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:03:15 compute-0 ceph-mon[76304]: pgmap v2989: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3445608344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:03:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:03:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313444922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.754 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.761 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.781 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.784 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:03:15 compute-0 nova_compute[243452]: 2026-02-28 11:03:15.785 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:03:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3313444922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:03:17 compute-0 ceph-mon[76304]: pgmap v2990: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:18 compute-0 nova_compute[243452]: 2026-02-28 11:03:18.260 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:18 compute-0 nova_compute[243452]: 2026-02-28 11:03:18.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:19 compute-0 ceph-mon[76304]: pgmap v2991: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:21 compute-0 ceph-mon[76304]: pgmap v2992: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:23 compute-0 podman[394899]: 2026-02-28 11:03:23.126555186 +0000 UTC m=+0.058600251 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 11:03:23 compute-0 podman[394898]: 2026-02-28 11:03:23.161245028 +0000 UTC m=+0.090971987 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 11:03:23 compute-0 ceph-mon[76304]: pgmap v2993: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:23 compute-0 nova_compute[243452]: 2026-02-28 11:03:23.262 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:25 compute-0 ceph-mon[76304]: pgmap v2994: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:27 compute-0 ceph-mon[76304]: pgmap v2995: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:28 compute-0 nova_compute[243452]: 2026-02-28 11:03:28.263 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:03:29
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta']
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:03:29 compute-0 ceph-mon[76304]: pgmap v2996: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:03:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:03:31 compute-0 ceph-mon[76304]: pgmap v2997: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:33 compute-0 nova_compute[243452]: 2026-02-28 11:03:33.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:33 compute-0 nova_compute[243452]: 2026-02-28 11:03:33.265 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:33 compute-0 ceph-mon[76304]: pgmap v2998: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:35 compute-0 ceph-mon[76304]: pgmap v2999: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:36 compute-0 ceph-mon[76304]: pgmap v3000: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:38 compute-0 nova_compute[243452]: 2026-02-28 11:03:38.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:38 compute-0 ceph-mon[76304]: pgmap v3001: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:40 compute-0 ceph-mon[76304]: pgmap v3002: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:03:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:42 compute-0 ceph-mon[76304]: pgmap v3003: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:43 compute-0 nova_compute[243452]: 2026-02-28 11:03:43.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:44 compute-0 ceph-mon[76304]: pgmap v3004: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.966164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624966221, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2083, "num_deletes": 254, "total_data_size": 3479397, "memory_usage": 3532864, "flush_reason": "Manual Compaction"}
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624983882, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3398420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61305, "largest_seqno": 63387, "table_properties": {"data_size": 3388903, "index_size": 6074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19180, "raw_average_key_size": 20, "raw_value_size": 3369872, "raw_average_value_size": 3562, "num_data_blocks": 269, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276406, "oldest_key_time": 1772276406, "file_creation_time": 1772276624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 17796 microseconds, and 8506 cpu microseconds.
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.983958) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3398420 bytes OK
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.983988) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985550) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985575) EVENT_LOG_v1 {"time_micros": 1772276624985566, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.985610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3470670, prev total WAL file size 3470670, number of live WAL files 2.
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.986467) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3318KB)], [146(9403KB)]
Feb 28 11:03:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276624986527, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13027459, "oldest_snapshot_seqno": -1}
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8274 keys, 11280917 bytes, temperature: kUnknown
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625043303, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11280917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11226377, "index_size": 32731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 216058, "raw_average_key_size": 26, "raw_value_size": 11079469, "raw_average_value_size": 1339, "num_data_blocks": 1273, "num_entries": 8274, "num_filter_entries": 8274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.043717) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11280917 bytes
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.045096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.0 rd, 198.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 8796, records dropped: 522 output_compression: NoCompression
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.045127) EVENT_LOG_v1 {"time_micros": 1772276625045112, "job": 90, "event": "compaction_finished", "compaction_time_micros": 56896, "compaction_time_cpu_micros": 36752, "output_level": 6, "num_output_files": 1, "total_output_size": 11280917, "num_input_records": 8796, "num_output_records": 8274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625045818, "job": 90, "event": "table_file_deletion", "file_number": 148}
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276625047666, "job": 90, "event": "table_file_deletion", "file_number": 146}
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:44.986347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:03:45.047795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:03:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:03:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:03:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:03:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:03:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:03:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3037235996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:03:46 compute-0 ceph-mon[76304]: pgmap v3005: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:48 compute-0 nova_compute[243452]: 2026-02-28 11:03:48.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:03:48 compute-0 sshd-session[394941]: Accepted publickey for zuul from 192.168.122.10 port 51960 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 11:03:48 compute-0 systemd-logind[815]: New session 55 of user zuul.
Feb 28 11:03:48 compute-0 systemd[1]: Started Session 55 of User zuul.
Feb 28 11:03:48 compute-0 sshd-session[394941]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 11:03:48 compute-0 sudo[394945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 28 11:03:48 compute-0 sudo[394945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 11:03:48 compute-0 ceph-mon[76304]: pgmap v3006: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:50 compute-0 ceph-mon[76304]: pgmap v3007: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:51 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22944 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:51 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22946 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:52 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 11:03:52 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807713450' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:03:53 compute-0 ceph-mon[76304]: from='client.22944 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:53 compute-0 ceph-mon[76304]: pgmap v3008: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:53 compute-0 ceph-mon[76304]: from='client.22946 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:53 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1807713450' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:03:53 compute-0 nova_compute[243452]: 2026-02-28 11:03:53.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:53 compute-0 sshd-session[395143]: Received disconnect from 103.139.193.187 port 50732:11: Bye Bye [preauth]
Feb 28 11:03:53 compute-0 sshd-session[395143]: Disconnected from authenticating user root 103.139.193.187 port 50732 [preauth]
Feb 28 11:03:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:54 compute-0 podman[395200]: 2026-02-28 11:03:54.128787624 +0000 UTC m=+0.062468930 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 11:03:54 compute-0 podman[395199]: 2026-02-28 11:03:54.168510319 +0000 UTC m=+0.101333781 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:03:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:55 compute-0 ceph-mon[76304]: pgmap v3009: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:55 compute-0 ovs-vsctl[395273]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 28 11:03:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:56 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 28 11:03:56 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 28 11:03:56 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 28 11:03:57 compute-0 ceph-mon[76304]: pgmap v3010: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:57 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: cache status {prefix=cache status} (starting...)
Feb 28 11:03:57 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: client ls {prefix=client ls} (starting...)
Feb 28 11:03:57 compute-0 lvm[395615]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:03:57 compute-0 lvm[395615]: VG ceph_vg1 finished
Feb 28 11:03:57 compute-0 lvm[395620]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:03:57 compute-0 lvm[395620]: VG ceph_vg0 finished
Feb 28 11:03:57 compute-0 lvm[395631]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:03:57 compute-0 lvm[395631]: VG ceph_vg2 finished
Feb 28 11:03:57 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22950 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:57 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: damage ls {prefix=damage ls} (starting...)
Feb 28 11:03:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:57 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22952 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.909 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.910 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:03:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:03:57.910 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:03:57 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump loads {prefix=dump loads} (starting...)
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 28 11:03:58 compute-0 nova_compute[243452]: 2026-02-28 11:03:58.274 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:03:58 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22954 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 28 11:03:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 28 11:03:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209889848' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 28 11:03:58 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22958 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:58 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:03:58.814+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:03:58 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:03:58 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 28 11:03:58 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:03:58 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009662277' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mon[76304]: from='client.22950 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:59 compute-0 ceph-mon[76304]: pgmap v3011: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:59 compute-0 ceph-mon[76304]: from='client.22952 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:03:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4209889848' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3009662277' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: ops {prefix=ops} (starting...)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:03:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896005864' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1224101874' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: session ls {prefix=session ls} (starting...)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950520075' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:03:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 28 11:03:59 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824246745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 11:03:59 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: status {prefix=status} (starting...)
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.22954 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.22958 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1896005864' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1224101874' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2950520075' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2824246745' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22972 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 11:04:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515845332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:00 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22975 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 11:04:00 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856515785' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mon[76304]: pgmap v3012: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/515845332' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2856515785' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 28 11:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391597969' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 11:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544148651' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 28 11:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680149477' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 11:04:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:01 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 28 11:04:01 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355709642' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.22972 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.22975 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/391597969' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3544148651' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2680149477' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1355709642' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22986 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:02 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 11:04:02 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:04:02.206+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 11:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 11:04:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790314155' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 28 11:04:02 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449009649' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 11:04:02 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:03 compute-0 ceph-mon[76304]: pgmap v3013: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:03 compute-0 ceph-mon[76304]: from='client.22986 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1790314155' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:04:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/449009649' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 11:04:03 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22996 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.277 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557686110e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x55768bece8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688d38400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x557686d01340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576940e1a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 56770560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156c00 session 0x55768cc3ea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672d400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557688a10540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886d5500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688d38400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:49.602036+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576886f88c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576886f8fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:50.602247+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:51.602445+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:52.602604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291678 data_alloc: 218103808 data_used: 20651256
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:53.602763+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:54.602952+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:55.603160+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:56.603363+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:57.603568+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291678 data_alloc: 218103808 data_used: 20651256
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:58.603739+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:59.603875+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:00.604214+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576886f8540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672d400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.131345749s of 18.433906555s, submitted: 75
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:01.604475+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 285573120 unmapped: 56573952 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:02.604652+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 55328768 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3335842 data_alloc: 218103808 data_used: 28060920
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:03.604827+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 55328768 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:04.605093+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:05.605247+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:06.605482+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:07.605644+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3334802 data_alloc: 218103808 data_used: 28060920
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:08.605889+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb919000/0x0/0x4ffc00000, data 0x539fa23/0x5533000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:09.606165+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:10.606407+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 55312384 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:11.606569+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.838995934s of 10.858511925s, submitted: 5
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289505280 unmapped: 52641792 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:12.606731+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eacce000/0x0/0x4ffc00000, data 0x5feaa23/0x617e000, compress 0x0/0x0/0x0, omap 0x4cae6, meta 0xed6351a), peers [0,1] op hist [0,0,0,0,0,0,3])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293117952 unmapped: 49029120 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x557687aa0700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688d38400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x557686197880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576886f8000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3465106 data_alloc: 234881024 data_used: 29232376
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:13.606969+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768cc4b6c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x5576886d4540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:14.607156+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:15.607407+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:16.607545+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x557686d4fdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47890432 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:17.607735+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688d38400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576886e1500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576861be8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768bc58380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47882240 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:18.607890+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467264 data_alloc: 234881024 data_used: 29314296
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea587000/0x0/0x4ffc00000, data 0x6730a85/0x68c5000, compress 0x0/0x0/0x0, omap 0x4ccee, meta 0xed63312), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47874048 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:19.608009+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:20.608166+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:21.608357+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:22.608490+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea562000/0x0/0x4ffc00000, data 0x6754a95/0x68ea000, compress 0x0/0x0/0x0, omap 0x4ccee, meta 0xed63312), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:23.608614+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3503796 data_alloc: 234881024 data_used: 36240632
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 44539904 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:24.608805+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.089795113s of 12.524835587s, submitted: 174
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672d400 session 0x557688656a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886b3500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576940e0c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:25.609011+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:26.609209+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb377000/0x0/0x4ffc00000, data 0x52c2a23/0x5456000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:27.609356+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46530560 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:28.609587+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336798 data_alloc: 218103808 data_used: 27573496
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb377000/0x0/0x4ffc00000, data 0x52c2a23/0x5456000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46522368 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:29.609725+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x557686cadc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768800ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688d38400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688d38400 session 0x5576869f28c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46514176 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:30.609849+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 45801472 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:31.609987+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:32.610154+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:33.610336+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371178 data_alloc: 218103808 data_used: 26739315
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:34.610771+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb46b000/0x0/0x4ffc00000, data 0x584ea00/0x59e1000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:35.610956+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:36.611150+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.071561813s of 12.451289177s, submitted: 145
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:37.611337+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:38.611528+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370754 data_alloc: 218103808 data_used: 26751603
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:39.611722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:40.611913+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:41.612161+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:42.612373+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:43.612648+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370754 data_alloc: 218103808 data_used: 26751603
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:44.612886+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:45.613153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:46.613317+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:47.613489+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb468000/0x0/0x4ffc00000, data 0x5851a00/0x59e4000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:48.613654+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3371778 data_alloc: 218103808 data_used: 26829427
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:49.613891+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321892738s of 13.334897041s, submitted: 2
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:50.614036+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:51.614174+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x5576885bfdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686196c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297263104 unmapped: 44883968 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:52.614331+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768cc4afc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb466000/0x0/0x4ffc00000, data 0x5853a00/0x59e6000, compress 0x0/0x0/0x0, omap 0x4cec2, meta 0xed6313e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:53.614499+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:54.614766+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:55.615021+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:56.615250+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5054 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.330 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:03 compute-0 nova_compute[243452]: 2026-02-28 11:04:03.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 28 11:04:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573064748' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:57.615455+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:58.615587+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 33K writes, 127K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 33K writes, 12K syncs, 2.73 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4166 writes, 15K keys, 4166 commit groups, 1.0 writes per commit group, ingest: 17.10 MB, 0.03 MB/s
                                           Interval WAL: 4166 writes, 1712 syncs, 2.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:59.615810+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:00.615991+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:01.616148+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:02.616328+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:03.616469+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3219503 data_alloc: 218103808 data_used: 18690675
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:04.616648+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:05.616851+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:06.617028+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291151872 unmapped: 50995200 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.095006943s of 16.153604507s, submitted: 36
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886e16c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687f7ec00 session 0x5576886d41c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768d75c540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886b2380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576886b2540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec2de000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:07.617392+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:08.617547+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240437 data_alloc: 218103808 data_used: 18694673
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:09.617775+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:10.617959+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:11.618147+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:12.618281+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:13.618515+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3240437 data_alloc: 218103808 data_used: 18694673
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d008c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:14.618715+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768bb5f400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:15.622189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291160064 unmapped: 50987008 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:16.622361+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290668544 unmapped: 51478528 heap: 342147072 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.785350800s of 10.859200478s, submitted: 12
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:17.622484+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec508000/0x0/0x4ffc00000, data 0x47b398e/0x4944000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [0,0,0,0,1])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861a9500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576940e0e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576861be700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688a10fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576861bee00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:18.622695+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306319 data_alloc: 218103808 data_used: 19779089
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:19.622839+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:20.622998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:21.623126+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:22.623331+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebb5c000/0x0/0x4ffc00000, data 0x515f98e/0x52f0000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d01880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:23.623572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306319 data_alloc: 218103808 data_used: 19779089
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557685cc4c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:24.623809+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576886576c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c21180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:25.623973+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 290979840 unmapped: 54845440 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebb5b000/0x0/0x4ffc00000, data 0x515f99e/0x52f1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:26.624129+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 291192832 unmapped: 54632448 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:27.624270+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:28.624427+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3424449 data_alloc: 234881024 data_used: 29714535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:29.624605+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:30.624907+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:31.625185+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:32.625456+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb333000/0x0/0x4ffc00000, data 0x597199e/0x5b03000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289988608 unmapped: 55836672 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.092023849s of 15.370637894s, submitted: 110
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:33.625686+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3416521 data_alloc: 234881024 data_used: 29714535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:34.626027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:35.626221+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:36.626397+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 289054720 unmapped: 56770560 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb328000/0x0/0x4ffc00000, data 0x599299e/0x5b24000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:37.626590+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293003264 unmapped: 52822016 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:38.626775+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293052416 unmapped: 52772864 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3470731 data_alloc: 234881024 data_used: 30496871
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:39.626907+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293052416 unmapped: 52772864 heap: 345825280 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886e01c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e90800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e90800 session 0x55768bece8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557688722540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886d4540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d090e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x55768d75ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b78800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b78800 session 0x55768d75d340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557685cc5c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:40.627095+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688723a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:41.627273+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:42.627437+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea039000/0x0/0x4ffc00000, data 0x6c80a00/0x6e13000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 293085184 unmapped: 54353920 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.805257797s of 10.072917938s, submitted: 107
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:43.627604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 53264384 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3532258 data_alloc: 234881024 data_used: 30563397
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:44.627834+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d091340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:45.627983+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x557686d4e380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:46.628173+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b3a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 53248000 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:47.628323+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 53215232 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea038000/0x0/0x4ffc00000, data 0x6c80a10/0x6e14000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:48.628483+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3533934 data_alloc: 234881024 data_used: 30563397
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:49.628692+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:50.629008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 53182464 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:51.629431+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 51240960 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:52.629653+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea038000/0x0/0x4ffc00000, data 0x6c80a10/0x6e14000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:53.689638+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3598318 data_alloc: 234881024 data_used: 37816901
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:54.689944+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48160768 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.916085243s of 12.152958870s, submitted: 133
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886f9880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x55768c314540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768800dc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688546800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688546800 session 0x55768bc596c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768bc58380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:55.690098+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:56.690249+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:57.690430+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 47710208 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e996b000/0x0/0x4ffc00000, data 0x734da10/0x74e1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e996b000/0x0/0x4ffc00000, data 0x734da10/0x74e1000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:58.690566+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3649372 data_alloc: 234881024 data_used: 37816901
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:59.690722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:00.690916+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9969000/0x0/0x4ffc00000, data 0x734ea10/0x74e2000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576880f7340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:01.691345+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 47702016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c3800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:02.691485+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305954816 unmapped: 41484288 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:03.691624+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3757208 data_alloc: 234881024 data_used: 41424453
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307699712 unmapped: 39739392 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:04.691852+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 39657472 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8c78000/0x0/0x4ffc00000, data 0x8040a10/0x81d4000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:05.691986+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:06.692199+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:07.692370+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:08.692531+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3760712 data_alloc: 234881024 data_used: 41430085
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307879936 unmapped: 39559168 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.586961746s of 13.872611046s, submitted: 108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:09.692664+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 39542784 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:10.692794+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307896320 unmapped: 39542784 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8c78000/0x0/0x4ffc00000, data 0x8040a10/0x81d4000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:11.692925+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 39510016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:12.693049+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307929088 unmapped: 39510016 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:13.693224+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3759256 data_alloc: 234881024 data_used: 41410117
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307986432 unmapped: 39452672 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:14.693478+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308035584 unmapped: 39403520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:15.693603+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310345728 unmapped: 37093376 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:16.693729+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313622528 unmapped: 33816576 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8016000/0x0/0x4ffc00000, data 0x8ca1a10/0x8e35000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686d01a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557689a01a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:17.693850+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886e16c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309157888 unmapped: 38281216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:18.694009+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3643899 data_alloc: 234881024 data_used: 31891525
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309166080 unmapped: 38273024 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.773560524s of 10.151765823s, submitted: 167
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:19.694146+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96c1000/0x0/0x4ffc00000, data 0x75f999e/0x778b000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:20.694339+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:21.784590+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x55768a539a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x557686c20700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886b2540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:22.784838+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:23.785121+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3570054 data_alloc: 234881024 data_used: 30572515
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea057000/0x0/0x4ffc00000, data 0x6c6399e/0x6df5000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:24.785356+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557687aa1880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c3800 session 0x5576869f28c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309059584 unmapped: 38379520 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576861bec40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:25.785574+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb448000/0x0/0x4ffc00000, data 0x587299e/0x5a04000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:26.785734+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:27.785927+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb448000/0x0/0x4ffc00000, data 0x587299e/0x5a04000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:28.786577+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x557687b436c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576940e08c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3416140 data_alloc: 218103808 data_used: 23893475
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 44425216 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108342171s of 10.153443336s, submitted: 23
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861a8540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:29.786773+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:30.786904+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:31.787148+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:32.787314+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:33.787545+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:34.787823+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:35.787979+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:36.788189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 52109312 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:37.788374+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:38.788568+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:39.788748+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:40.788920+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:41.789119+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295337984 unmapped: 52101120 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:42.789289+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:43.789484+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:44.789727+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:45.789923+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:46.790161+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:47.790342+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:48.790544+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259263 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:49.790767+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 52092928 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:50.790979+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:51.791160+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:52.791364+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 52084736 heap: 347439104 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768bb5f400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.209495544s of 24.226617813s, submitted: 9
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768bb5f400 session 0x5576861be700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:53.791511+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576885be1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576861be8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686c21880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bf6c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316624 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:54.791722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x4f119f0/0x50a3000, compress 0x0/0x0/0x0, omap 0x4d096, meta 0xed62f6a), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:55.791906+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:56.792176+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 55738368 heap: 351117312 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:57.792322+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306511872 unmapped: 53002240 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0800 session 0x5576861be540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x557686c20fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x55768a539880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886b2380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557685cc4c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:58.792498+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388831 data_alloc: 218103808 data_used: 18691539
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:59.792667+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:00.792847+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:01.793008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557688a10380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:02.793206+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5c400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:03.793351+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395359 data_alloc: 218103808 data_used: 19748307
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 64110592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:04.793555+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:05.793702+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:06.793849+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:07.794760+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:08.795978+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x557686d01500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3443615 data_alloc: 218103808 data_used: 27915731
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.720023155s of 15.895346642s, submitted: 50
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:09.796168+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:10.796304+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:11.796510+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb179000/0x0/0x4ffc00000, data 0x5b419f0/0x5cd3000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 297607168 unmapped: 61906944 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:12.796624+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 55607296 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:13.796770+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526807 data_alloc: 234881024 data_used: 39512019
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:14.796973+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305733632 unmapped: 53780480 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:15.797267+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:16.797510+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:17.798013+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:18.798165+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3527703 data_alloc: 234881024 data_used: 39708627
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:19.798540+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:20.798699+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:21.798905+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305741824 unmapped: 53772288 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:22.799152+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eaf37000/0x0/0x4ffc00000, data 0x5d839f0/0x5f15000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.435902596s of 13.532156944s, submitted: 36
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 48521216 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea70c000/0x0/0x4ffc00000, data 0x656e9f0/0x6700000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:23.799341+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3616161 data_alloc: 234881024 data_used: 40302547
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:24.799555+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:25.799716+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:26.800274+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:27.800423+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311033856 unmapped: 48480256 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:28.800557+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619447 data_alloc: 234881024 data_used: 40421331
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:29.800706+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:30.800828+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:31.800982+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:32.801122+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311050240 unmapped: 48463872 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:33.801284+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619447 data_alloc: 234881024 data_used: 40421331
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311083008 unmapped: 48431104 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:34.801553+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311091200 unmapped: 48422912 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:35.801710+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:36.801893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:37.802041+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311099392 unmapped: 48414720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:38.802142+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686c216c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557688723a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768d091340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b0000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b0000 session 0x55768bc596c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.026269913s of 16.309738159s, submitted: 91
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3619042 data_alloc: 234881024 data_used: 40421331
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x5576861a9180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311492608 unmapped: 48021504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x5576886576c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576886e0fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x557688a116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e70400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e70400 session 0x55768d75c8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:39.802290+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02b000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d29e, meta 0xed62d62), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557687b43180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311500800 unmapped: 48013312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x55768d090e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x5576886f9a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886b3a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3edc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:40.802496+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3edc00 session 0x557686197880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557686cada40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557689a01dc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x55768a539c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x55768becfc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310059008 unmapped: 49455104 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:41.802712+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x7501a62/0x7695000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:42.802889+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:43.803055+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3677473 data_alloc: 234881024 data_used: 40421331
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:44.803387+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:45.803520+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576886d41c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310091776 unmapped: 49422336 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:46.803704+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x557687aa01c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e97b7000/0x0/0x4ffc00000, data 0x7501a62/0x7695000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c21dc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 49414144 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x557686196a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:47.804051+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687a6e800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310099968 unmapped: 49414144 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e71400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71400 session 0x557686c20700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:48.804225+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732250 data_alloc: 251658240 data_used: 47692259
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 45441024 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768ae2a400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768ae2a400 session 0x5576886e16c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:49.804369+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4400 session 0x55768becefc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694ec00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.025026321s of 11.173868179s, submitted: 49
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694ec00 session 0x557686c208c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314376192 unmapped: 45137920 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:50.804525+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687b5b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e71400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314384384 unmapped: 45129728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:51.804658+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314384384 unmapped: 45129728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:52.804800+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:53.804940+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3735366 data_alloc: 251658240 data_used: 47722979
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:54.805132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:55.805285+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:56.805501+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:57.805670+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9791000/0x0/0x4ffc00000, data 0x7525a94/0x76bb000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314564608 unmapped: 44949504 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:58.805844+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3769052 data_alloc: 251658240 data_used: 47731171
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 317571072 unmapped: 41943040 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:59.805966+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 317718528 unmapped: 41795584 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8f75000/0x0/0x4ffc00000, data 0x7d3ba94/0x7ed1000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:00.806152+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.946731567s of 10.294985771s, submitted: 54
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:01.806272+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:02.806403+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318177280 unmapped: 41336832 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:03.806580+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3801222 data_alloc: 251658240 data_used: 48196067
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 40837120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:04.806794+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cd8000/0x0/0x4ffc00000, data 0x7fd8a94/0x816e000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [1])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318693376 unmapped: 40820736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:05.806927+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:06.807096+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:07.807294+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:08.807437+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3823520 data_alloc: 251658240 data_used: 48304611
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cbb000/0x0/0x4ffc00000, data 0x7ff3a94/0x8189000, compress 0x0/0x0/0x0, omap 0x4d47e, meta 0xed62b82), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319930368 unmapped: 39583744 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:09.807636+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:10.807815+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:11.807988+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319938560 unmapped: 39575552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:12.808143+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299750328s of 12.471056938s, submitted: 63
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319946752 unmapped: 39567360 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b800 session 0x5576886d4540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687a6e800 session 0x5576886b2c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:13.808273+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576861d1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8cbb000/0x0/0x4ffc00000, data 0x7ff3a94/0x8189000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576861d1800 session 0x55768800d880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3665460 data_alloc: 234881024 data_used: 39023059
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:14.808493+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:15.808837+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9aa6000/0x0/0x4ffc00000, data 0x6f569f0/0x70e8000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320012288 unmapped: 39501824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:16.809007+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687b5b000 session 0x557686cad180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71400 session 0x55768800da40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9aa6000/0x0/0x4ffc00000, data 0x6f569f0/0x70e8000, compress 0x0/0x0/0x0, omap 0x4d4b2, meta 0xed62b4e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576861d1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576861d1800 session 0x5576861a9c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:17.809236+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:18.809427+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3638348 data_alloc: 234881024 data_used: 38877518
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:19.809608+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:20.809758+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 320036864 unmapped: 39477248 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:21.809933+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557688a10fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5c400 session 0x5576869f3180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea02c000/0x0/0x4ffc00000, data 0x6c8e9f0/0x6e20000, compress 0x0/0x0/0x0, omap 0x4d603, meta 0xed629fd), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318980096 unmapped: 40534016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:22.810108+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768c314c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.080725670s of 10.223545074s, submitted: 71
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768cc4afc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686111500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312582144 unmapped: 46931968 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:23.810265+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768cc3f880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:24.810629+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:25.810806+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:26.810979+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:27.811165+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:28.811348+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:29.811507+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:30.811679+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:31.811894+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:32.812127+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:33.812318+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:34.812514+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:35.812689+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:36.812866+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:37.813027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:38.813175+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:39.813337+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:40.813511+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:41.813781+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:42.813974+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:43.848998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:44.849339+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:45.849524+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:46.849771+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0e1000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d519, meta 0xed62ae7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:47.849928+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:48.850142+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301066 data_alloc: 218103808 data_used: 18699598
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 56385536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:49.850291+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686153800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.470396042s of 26.505001068s, submitted: 26
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686153800 session 0x5576886b2380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557687aa01c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576886b2540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x5576869f2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886d5500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:50.850453+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:51.850650+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5d000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5d000 session 0x5576886b2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:52.850812+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557686196fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:53.850995+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337065 data_alloc: 218103808 data_used: 18703596
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:54.851223+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557689a00380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557689a01180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 56033280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:55.851401+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 56606720 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:56.851561+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:57.851704+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:58.851876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360493 data_alloc: 218103808 data_used: 22635756
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:59.852084+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:00.852230+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:01.852448+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56770560 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.855288506s of 11.934335709s, submitted: 26
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bf500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108800 session 0x5576886f9880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886d5c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:02.852659+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec18a000/0x0/0x4ffc00000, data 0x4b3198e/0x4cc2000, compress 0x0/0x0/0x0, omap 0x4d6d2, meta 0xed6292e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:03.852856+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3305995 data_alloc: 218103808 data_used: 18703596
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:04.853084+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:05.853457+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec679000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:06.853746+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:07.854167+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 59310080 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576869f28c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557686bfaa80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557688a116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768694e400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768694e400 session 0x557688a11340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686d4f500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768becea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768d0908c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43000 session 0x55768a539500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:08.854348+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:09.854525+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:10.854694+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:11.854878+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:12.855105+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:13.855310+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 59121664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:14.855544+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:15.855709+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:16.855923+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:17.856176+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe9a000/0x0/0x4ffc00000, data 0x4e2099e/0x4fb2000, compress 0x0/0x0/0x0, omap 0x4d7ca, meta 0xed62836), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:18.856354+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3352975 data_alloc: 218103808 data_used: 18703596
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:19.856498+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.932571411s of 18.043914795s, submitted: 31
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b3180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686cac1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x557686cacc40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686e5a400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686e5a400 session 0x557686197180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:20.856688+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 59113472 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886f81c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:21.856847+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:22.857029+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:23.857140+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768becea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:24.857331+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3432290 data_alloc: 218103808 data_used: 25511199
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:25.857478+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:26.857679+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:27.858422+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb8e6000/0x0/0x4ffc00000, data 0x53d499e/0x5566000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:28.858589+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 59105280 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768a5396c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687b42a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a5388c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:29.858735+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402384 data_alloc: 218103808 data_used: 25511199
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:30.858948+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe76000/0x0/0x4ffc00000, data 0x4e4499e/0x4fd6000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe76000/0x0/0x4ffc00000, data 0x4e4499e/0x4fd6000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:31.859134+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:32.859253+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:33.859390+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 59277312 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.672096252s of 14.787719727s, submitted: 13
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:34.859552+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3407832 data_alloc: 218103808 data_used: 25570591
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ebe4a000/0x0/0x4ffc00000, data 0x4e7099e/0x5002000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xed62750), peers [0,1] op hist [0,0,0,0,0,0,2])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 58515456 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:35.859721+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 58441728 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:36.859843+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:37.859986+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:38.860141+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:39.860378+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3445142 data_alloc: 218103808 data_used: 25646367
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:40.860585+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:41.860726+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:42.860890+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:43.861133+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:44.861354+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3445142 data_alloc: 218103808 data_used: 25646367
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:45.861501+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea87a000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:46.861661+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:47.861883+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 58433536 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768800d880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x557686111500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d00000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.676843643s of 14.048244476s, submitted: 29
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:48.862040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557688a10c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576861d2c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3fdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x557687b43340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768bece540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:49.862207+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3464812 data_alloc: 218103808 data_used: 25646367
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea533000/0x0/0x4ffc00000, data 0x55e799e/0x5779000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:50.862446+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:51.862620+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:52.862848+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:53.863057+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686caddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:54.863305+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3464812 data_alloc: 218103808 data_used: 25646367
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576885bfdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:55.863543+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688657500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886f96c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea533000/0x0/0x4ffc00000, data 0x55e799e/0x5779000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:56.863693+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 58228736 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576886d5dc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686157000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686157000 session 0x55768c314c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886f9500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576886b2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686c208c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:57.863931+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:58.864241+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2d000/0x0/0x4ffc00000, data 0x5deca00/0x5f7f000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:59.864377+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521697 data_alloc: 218103808 data_used: 25646367
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:00.864511+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:01.864762+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2d000/0x0/0x4ffc00000, data 0x5deca00/0x5f7f000, compress 0x0/0x0/0x0, omap 0x4d8b0, meta 0xff02750), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:02.865461+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576861bee00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886f8700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:03.865667+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56688640 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3e700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.300589561s of 15.582851410s, submitted: 46
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x5576880f7340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:04.865889+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3544353 data_alloc: 218103808 data_used: 28884767
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2b000/0x0/0x4ffc00000, data 0x5deca33/0x5f81000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:05.866149+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:06.866366+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:07.866541+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:08.866714+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56680448 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:09.866889+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580761 data_alloc: 234881024 data_used: 31896351
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305373184 unmapped: 54140928 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d2b000/0x0/0x4ffc00000, data 0x5deca33/0x5f81000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:10.867047+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:11.867217+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:12.867499+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:13.867733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:14.867847+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3617591 data_alloc: 234881024 data_used: 31929119
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:15.867976+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305643520 unmapped: 53870592 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.965996742s of 12.156393051s, submitted: 50
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e973d000/0x0/0x4ffc00000, data 0x63daa33/0x656f000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0xff02561), peers [0,1] op hist [0,3,1])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:16.868122+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309141504 unmapped: 50372608 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:17.868316+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:18.868552+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:19.868717+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3663751 data_alloc: 234881024 data_used: 32969503
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:20.868891+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:21.869058+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e7e0b000/0x0/0x4ffc00000, data 0x6b6ba33/0x6d00000, compress 0x0/0x0/0x0, omap 0x4da9f, meta 0x110a2561), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:22.869243+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:23.869431+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309518336 unmapped: 49995776 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x55768bece8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557686cad500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:24.869646+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3591439 data_alloc: 234881024 data_used: 29702431
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768cc4b880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 49971200 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:25.869833+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309542912 unmapped: 49971200 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:26.869998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.119799614s of 10.389805794s, submitted: 107
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576861a8a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309551104 unmapped: 49963008 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3fa40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96d9000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4dc8e, meta 0x110a2372), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:27.870151+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305561600 unmapped: 53952512 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768c315500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2000 session 0x55768a538700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:28.870288+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861a9c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e96d9000/0x0/0x4ffc00000, data 0x529f99e/0x5431000, compress 0x0/0x0/0x0, omap 0x4dc8e, meta 0x110a2372), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:29.870847+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:30.871046+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:31.871230+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:32.871424+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:33.871565+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:34.871837+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:35.872159+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:36.872413+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:37.872693+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:38.872983+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:39.873225+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:40.873417+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:41.873630+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305577984 unmapped: 53936128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:42.873818+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:43.873981+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:44.874153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:45.874334+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:46.874548+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:47.874733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:48.874909+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:49.875139+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3338834 data_alloc: 218103808 data_used: 18699535
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea211000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 305586176 unmapped: 53927936 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768c315880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861be8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576861bf500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686108400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686108400 session 0x55768a538c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:50.875305+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.794982910s of 23.933214188s, submitted: 69
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686c20700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557688a10c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x55768d75c1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1400 session 0x5576886b2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:51.875455+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e4e000/0x0/0x4ffc00000, data 0x4b2ba00/0x4cbe000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:52.875622+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:53.875810+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768800d500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:54.875996+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374899 data_alloc: 218103808 data_used: 18707594
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768800ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53387264 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:55.876243+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e4e000/0x0/0x4ffc00000, data 0x4b2ba00/0x4cbe000, compress 0x0/0x0/0x0, omap 0x4dd62, meta 0x110a229e), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576869f2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686d4fdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:56.876370+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:57.876496+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:58.876722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:59.876948+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381967 data_alloc: 218103808 data_used: 18708122
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306233344 unmapped: 53280768 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:00.877233+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:01.877400+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:02.877588+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:03.877731+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:04.877917+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405647 data_alloc: 218103808 data_used: 22645914
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:05.878145+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9e28000/0x0/0x4ffc00000, data 0x4b4fa33/0x4ce4000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:06.878321+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.273567200s of 16.399780273s, submitted: 44
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557688a11340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x5576885be1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886d5a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557686d01a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:07.878485+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:08.878642+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:09.878799+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411501 data_alloc: 218103808 data_used: 22650010
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:10.878993+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306241536 unmapped: 53272576 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:11.879151+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x4bd2a33/0x4d67000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308609024 unmapped: 50905088 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:12.879278+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9ac8000/0x0/0x4ffc00000, data 0x4eafa33/0x5044000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686cac540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557688a10000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686cad6c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768a538380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557688a11340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:13.879412+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:14.879610+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464223 data_alloc: 218103808 data_used: 23882906
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:15.879819+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308879360 unmapped: 50634752 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557687b436c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:16.880219+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:17.880375+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e365, meta 0x110a1c9b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:18.880531+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:19.880676+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463048 data_alloc: 218103808 data_used: 24411290
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:20.880829+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.976218224s of 14.206614494s, submitted: 74
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557687aa0a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:21.880966+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:22.881113+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:23.881233+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:24.881454+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3471599 data_alloc: 218103808 data_used: 25476250
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:25.881607+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:26.881727+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:27.881891+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:28.882040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e989b000/0x0/0x4ffc00000, data 0x50dba56/0x5271000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308436992 unmapped: 51077120 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:29.882187+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3487557 data_alloc: 218103808 data_used: 25488538
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308576256 unmapped: 50937856 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:30.882329+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308584448 unmapped: 50929664 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:31.882492+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.962013245s of 11.024630547s, submitted: 17
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309280768 unmapped: 50233344 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:32.882622+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311721984 unmapped: 47792128 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:33.882772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311746560 unmapped: 47767552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x5849a56/0x59df000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:34.882986+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526647 data_alloc: 218103808 data_used: 25730202
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311746560 unmapped: 47767552 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:35.883220+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311812096 unmapped: 47702016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:36.883381+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311812096 unmapped: 47702016 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:37.883588+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 47693824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:38.883804+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311820288 unmapped: 47693824 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:39.883977+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x5849a56/0x59df000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3525199 data_alloc: 218103808 data_used: 25730202
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912a000/0x0/0x4ffc00000, data 0x584ca56/0x59e2000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:40.884160+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:41.884414+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:42.884603+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x55768bece8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576940e01c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x557686d00000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311828480 unmapped: 47685632 heap: 359514112 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f0000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f0000 session 0x55768cc3fdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.120762825s of 11.302170753s, submitted: 55
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:43.884745+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576886b2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576880f7340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768cc4b880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e912a000/0x0/0x4ffc00000, data 0x584ca56/0x59e2000, compress 0x0/0x0/0x0, omap 0x4e554, meta 0x110a1aac), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x55768cc4a540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:44.884993+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590138 data_alloc: 218103808 data_used: 25730202
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:45.885183+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:46.885367+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:47.885564+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311853056 unmapped: 51339264 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e877e000/0x0/0x4ffc00000, data 0x61f7a66/0x638e000, compress 0x0/0x0/0x0, omap 0x4e752, meta 0x110a18ae), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:48.885801+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x5576861bea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x557686196380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861a8a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 54861824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:49.885992+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x557688a10000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533217 data_alloc: 218103808 data_used: 24427674
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308330496 unmapped: 54861824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:50.886194+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308338688 unmapped: 54853632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:51.886322+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8e73000/0x0/0x4ffc00000, data 0x5b02a66/0x5c99000, compress 0x0/0x0/0x0, omap 0x4e941, meta 0x110a16bf), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:52.886474+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8e73000/0x0/0x4ffc00000, data 0x5b02a66/0x5c99000, compress 0x0/0x0/0x0, omap 0x4e941, meta 0x110a16bf), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:53.886604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.426188469s of 10.563802719s, submitted: 50
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x55768d091180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885bf880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308076544 unmapped: 55115776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:54.886762+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861a9c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509476 data_alloc: 234881024 data_used: 29171354
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:55.886912+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:56.887106+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:57.887254+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:58.887411+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:59.887595+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509476 data_alloc: 234881024 data_used: 29171354
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:00.887791+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9671000/0x0/0x4ffc00000, data 0x53059c1/0x5498000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308084736 unmapped: 55107584 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:01.887955+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:02.888129+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8aad000/0x0/0x4ffc00000, data 0x5ec69c1/0x6059000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:03.888291+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:04.888510+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590632 data_alloc: 234881024 data_used: 29376021
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:05.888746+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:06.888913+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e8a04000/0x0/0x4ffc00000, data 0x5f6d9c1/0x6100000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:07.889136+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313434112 unmapped: 49758208 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:08.889315+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.913797379s of 15.265137672s, submitted: 144
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:09.889622+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e89eb000/0x0/0x4ffc00000, data 0x5f8e9c1/0x6121000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3586752 data_alloc: 234881024 data_used: 29376021
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:10.889805+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:11.890037+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:12.890280+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313565184 unmapped: 49627136 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:13.890449+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 49618944 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:14.890682+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x5576886d5a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x557686197c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e89eb000/0x0/0x4ffc00000, data 0x5f8e9c1/0x6121000, compress 0x0/0x0/0x0, omap 0x4e9fa, meta 0x110a1606), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3586352 data_alloc: 234881024 data_used: 29384213
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e24800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313573376 unmapped: 49618944 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:15.890794+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e24800 session 0x557685cc4540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307257344 unmapped: 55934976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:16.890972+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea020000/0x0/0x4ffc00000, data 0x495a9b1/0x4aec000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307257344 unmapped: 55934976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:17.891141+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becfa40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:18.891280+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557685cc41c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768bece1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885be540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768800ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:19.891492+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419612 data_alloc: 218103808 data_used: 19256325
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:20.891648+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x4b8a9b1/0x4d1c000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686d4ec40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:21.891738+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.198194504s of 12.342607498s, submitted: 38
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768c315880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861d3c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:22.891883+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557689a016c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686c20c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:23.903450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:24.903677+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395284 data_alloc: 218103808 data_used: 18715653
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:25.903920+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:26.904074+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:27.904346+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:28.904538+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.904713+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.904846+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.904978+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.905124+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.905259+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.905429+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.905616+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.364843369s of 14.394786835s, submitted: 12
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.905762+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310943744 unmapped: 52248576 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.905892+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9930000/0x0/0x4ffc00000, data 0x503b98e/0x51cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.906201+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.906432+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476420 data_alloc: 218103808 data_used: 21910498
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.906605+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.906795+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.906973+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.907228+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.907429+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476692 data_alloc: 218103808 data_used: 21918690
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.907579+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.907750+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.907916+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.908104+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.908344+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576940e1180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768800d500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.993345261s of 14.355683327s, submitted: 90
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385972 data_alloc: 218103808 data_used: 18719714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.908463+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x5576886b2c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9974000/0x0/0x4ffc00000, data 0x464e98e/0x47df000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.908632+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.908772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.909009+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.909313+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383976 data_alloc: 218103808 data_used: 18719714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.909528+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.909678+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.909877+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311476224 unmapped: 51716096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686197180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bfa40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.910113+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768c315a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687aa1340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.910361+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.910534+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.910778+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.910954+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.911172+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.911406+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.911632+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.911821+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.912147+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.913551+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.913824+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.629673004s of 19.488275528s, submitted: 12
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d5180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428832 data_alloc: 218103808 data_used: 18719714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.913961+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.914181+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.914333+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.914520+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.914730+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.914893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.915053+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.915262+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.915410+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.915553+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.915689+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.915907+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.916294+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.102824211s of 13.107093811s, submitted: 1
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.916465+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.916750+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.916925+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.917148+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.917288+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.917522+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.917699+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.917890+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.918182+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.918314+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.918467+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.918645+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.918830+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.918968+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.919113+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.839479446s of 14.852085114s, submitted: 2
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768afca380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576886b3880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886e16c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576869f3340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.919297+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.919449+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.919610+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501258 data_alloc: 218103808 data_used: 26991586
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.919770+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.919977+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.920126+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e99aa000/0x0/0x4ffc00000, data 0x4fd198e/0x5162000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.920412+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576887228c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.920597+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504923 data_alloc: 218103808 data_used: 26992098
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311304192 unmapped: 51888128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.920726+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.920862+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.921002+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.921144+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.921293+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.921439+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.921589+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.921751+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.921965+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 51200000 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.922155+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.922341+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.325185776s of 19.358276367s, submitted: 15
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.922518+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313982976 unmapped: 49209344 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.922687+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e934f000/0x0/0x4ffc00000, data 0x56249b1/0x57b6000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.922854+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.923010+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3563869 data_alloc: 218103808 data_used: 27722722
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.923153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.923357+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.923514+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9314000/0x0/0x4ffc00000, data 0x56579b1/0x57e9000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.923709+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.923976+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3558653 data_alloc: 218103808 data_used: 27722722
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.924165+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.924378+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686111500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x55768457b340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.924520+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165890694s of 11.501284599s, submitted: 68
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d01c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.924731+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b46000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.924881+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3495179 data_alloc: 218103808 data_used: 26991586
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.925039+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768d090000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768becfdc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.925208+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886576c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.925445+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.925709+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.925916+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.926103+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.926362+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.926589+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.926784+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.926941+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.927295+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.927540+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.927791+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.927970+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.928168+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.928368+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.928527+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.928616+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.928931+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.929137+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.929275+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.929407+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.929581+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.929931+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.930114+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.930341+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.930473+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.930844+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.931115+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.931345+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.931532+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.931725+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.931877+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.933268+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.933786+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768bc58380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768a539dc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3f880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x55768a5381c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.512207031s of 37.553077698s, submitted: 22
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768d75ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d5a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b000 session 0x5576861bea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576880f7340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.933919+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576880f7a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.934154+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.934292+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.934445+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.934567+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441190 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557686d00000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.934764+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x55768a538380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557688a11340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.934901+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.935030+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.935208+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.935353+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.935503+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.935722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s
                                           Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.935883+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.936053+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.936239+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.936370+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.936556+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.936729+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.287157059s of 17.387201309s, submitted: 32
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312320000 unmapped: 50872320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.956054+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.956245+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519878 data_alloc: 218103808 data_used: 23937983
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.956411+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.956571+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f4000/0x0/0x4ffc00000, data 0x53859f0/0x5517000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.956788+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.956964+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.957161+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.957325+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.957559+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.957785+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.958043+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.958242+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.958436+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.962099075s of 13.174759865s, submitted: 92
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x55768becf340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d41c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686cadc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e71c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71c00 session 0x55768bc58000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.958650+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.958814+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.958964+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.959194+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a10fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538984 data_alloc: 218103808 data_used: 23942079
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.959413+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x5576861a88c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x557688a11500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x55768becf180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.960131+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.960306+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312688640 unmapped: 50503680 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.960532+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.960682+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.960862+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.961002+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.961134+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.961237+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.961382+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.961501+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.961653+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.961799+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.755399704s of 16.796745300s, submitted: 7
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314982400 unmapped: 48209920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.961998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e909c000/0x0/0x4ffc00000, data 0x58dda00/0x5a70000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314310656 unmapped: 48881664 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.962139+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580546 data_alloc: 218103808 data_used: 26796991
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.962245+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.962389+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.962528+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.963100+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.963267+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580562 data_alloc: 218103808 data_used: 26796991
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.963629+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.963801+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314335232 unmapped: 48857088 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.963982+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.964253+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.964430+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557686c20fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a539c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580082 data_alloc: 218103808 data_used: 26858431
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200057983s of 12.832662582s, submitted: 39
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.964544+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886f8000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x5388a00/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314359808 unmapped: 48832512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.964719+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314408960 unmapped: 48783360 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.964861+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.965050+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.965342+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526151 data_alloc: 218103808 data_used: 24003519
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.965535+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686197c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557688656c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313155584 unmapped: 50036736 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.965718+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557689a016c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.965876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.966173+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.966341+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.966572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.966750+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.966906+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.967111+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.967279+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.967443+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.967616+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.967777+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.967986+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.968192+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.968385+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.968559+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.968707+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.968876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.969118+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.969295+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.969525+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.969714+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.969937+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.970159+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.970423+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.970600+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.970813+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.971008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.971237+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.971411+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.971605+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.971782+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.972019+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576940e01c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768c315a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688723180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.972358+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557687b43340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.065399170s of 39.311141968s, submitted: 108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc4afc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886b2540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557687aa1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886e1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576885be700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.972522+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.972727+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea31b000/0x0/0x4ffc00000, data 0x466098e/0x47f1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.972869+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.973060+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.973238+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.973412+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a108c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686cac8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 49864704 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.973540+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688656a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.973650+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.973843+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.974045+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.974324+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.974484+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.974661+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.974790+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.974918+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.975086+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.975222+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 49545216 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.975368+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.853551865s of 17.863443375s, submitted: 3
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.975526+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 49528832 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1fc000/0x0/0x4ffc00000, data 0x477e99e/0x4910000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.975682+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442903 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.975857+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.976021+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.976230+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.976385+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.976557+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.976748+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.976968+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.977108+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.977270+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.977468+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.977648+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.977867+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.978002+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.978232+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4800 session 0x55768bece1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686111500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576869f3180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576886e16c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.978426+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.000139236s of 17.048984528s, submitted: 20
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314818560 unmapped: 48373760 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886b2e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576886b2c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576885bfa40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768c315180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576861d3c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.978594+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.978779+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.978885+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.979194+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.979315+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.979479+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686d4ea80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.979609+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.979721+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313761792 unmapped: 51044352 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.979953+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.980145+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.980305+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.980480+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.980682+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.980878+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.981000+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.981155+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.981320+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.758279800s of 17.839538574s, submitted: 13
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:18.981437+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316399616 unmapped: 48406528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.981604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 46219264 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.981771+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3559463 data_alloc: 218103808 data_used: 25188701
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.981997+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9747000/0x0/0x4ffc00000, data 0x522d99e/0x53bf000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.982198+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.982387+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.982607+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.982812+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3556031 data_alloc: 218103808 data_used: 25188701
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.982998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.983172+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.983316+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.983466+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x5576880f7340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.983608+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849447250s of 12.123903275s, submitted: 71
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576861be8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.983752+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452972 data_alloc: 218103808 data_used: 18856797
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.983878+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.984023+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x55768d090380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x5576869f3340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.984329+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688656c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.984509+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.984653+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.984806+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.985040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.985302+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.985535+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.985699+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.985858+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.986043+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.986361+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.986626+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.986812+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.987132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.987402+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.987684+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.987974+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.988161+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.988329+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.988557+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.988794+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.988986+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.989215+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.989372+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.989546+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.989718+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.989885+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.990057+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.990264+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.990451+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.990636+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.990778+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.990943+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.991130+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.991284+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.991410+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.991650+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.991846+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a108c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686c21880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688a116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688722c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.992034+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.027751923s of 42.075328827s, submitted: 18
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a10000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686cacfc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688657a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688a10540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.992131+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.992329+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.992484+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.992639+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.992797+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.992941+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.993189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.993376+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686d00fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768d090700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.993567+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x55768800d6c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x5576886e0c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.993714+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067780495s of 10.190921783s, submitted: 22
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.993827+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.994028+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.994168+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.994323+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.994457+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.994687+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.994901+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.995121+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.995331+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.995482+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.957121849s of 10.000660896s, submitted: 20
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [0,1,0,0,0,8])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318406656 unmapped: 46399488 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.995722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.995863+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.996001+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.996141+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599328 data_alloc: 218103808 data_used: 25293661
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.996297+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.996500+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.996739+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.996910+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.997153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3595064 data_alloc: 218103808 data_used: 25293661
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.997319+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949523926s of 10.245022774s, submitted: 88
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.997498+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.997733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.997926+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x55768cc3fc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576861a81c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.998153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3596956 data_alloc: 218103808 data_used: 25334621
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557685cc4c40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.998340+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557686155400 session 0x557687b42700
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768c073800 session 0x55768cc3f880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x5576869e1400 session 0x55768c3148c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 323772416 unmapped: 41033728 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.998474+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768da43400 session 0x55768800ddc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557685ffa800 session 0x55768457b340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 43384832 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.998657+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 285 handle_osd_map epochs [286,287], i have 285, src has [1,287]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557686155400 session 0x557686cac8c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 43343872 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.998790+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x5576869e1400 session 0x5576886d4000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768c073800 session 0x5576886f8e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0e9400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768d0e9400 session 0x5576861116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.999005+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3761780 data_alloc: 234881024 data_used: 35955647
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.999157+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 43261952 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.999347+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.589632988s of 10.994457245s, submitted: 34
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557685ffa800 session 0x557687aa1340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.999588+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.999772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.000029+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574579 data_alloc: 218103808 data_used: 18723775
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.000275+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.000443+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [287,288], i have 288, src has [1,288]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.000607+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fde000/0x0/0x4ffc00000, data 0x59957b3/0x5b2c000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.000744+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576885bfa40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.000882+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x557686196a80
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c035800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c035800 session 0x5576940e1340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578137 data_alloc: 218103808 data_used: 18727934
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x5576886d5a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x5576886d41c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155000 session 0x55768a5388c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x55768d090e00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 47890432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.001041+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x557689a00fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.001167+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768a538380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x55768cc4b180
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.001346+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e8bc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557687e8bc00 session 0x557686d01880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.653596878s of 11.167081833s, submitted: 54
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557685cc5dc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e73eb000/0x0/0x4ffc00000, data 0x75887e6/0x7721000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.001476+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.001596+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789035 data_alloc: 234881024 data_used: 29690303
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 289 ms_handle_reset con 0x55768c073800 session 0x55768d090380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.001768+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.001919+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.002106+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.002241+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.002360+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687671 data_alloc: 234881024 data_used: 21415871
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.002522+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.002675+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.002875+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.003006+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8730000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.003177+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615905762s of 11.692889214s, submitted: 45
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699421 data_alloc: 234881024 data_used: 22761407
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318464000 unmapped: 78028800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.003350+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.003522+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.003680+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.003816+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.003989+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3716445 data_alloc: 234881024 data_used: 24364991
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.004270+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.004429+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.004694+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.004857+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686110000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x557688a108c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672c000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.004977+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768672c000 session 0x5576886f8000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.005123+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.005283+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.005456+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.005612+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.005742+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.005887+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.006134+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.006414+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.006587+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.006739+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.006884+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.007033+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.007193+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.007442+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.007617+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.007963+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.008126+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.008344+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.008513+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.008730+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.008934+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.009197+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.009352+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.009542+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.009714+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.009947+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.010150+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.010299+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.010464+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.010681+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.010809+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.010956+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.011254+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.011405+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.011547+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.011702+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.011857+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.560314178s of 47.608993530s, submitted: 31
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.012113+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x5576886e1c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310362112 unmapped: 86130688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686cac1c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768bece380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x5576880f7a40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768bb5f000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:04:03 compute-0 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: get_auth_request con 0x55768672c000 auth_method 0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576886f8000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.012243+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.012356+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x557687aa1340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523018 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.012499+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310378496 unmapped: 86114304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.012740+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9a78000/0x0/0x4ffc00000, data 0x4efadd0/0x5094000, compress 0x0/0x0/0x0, omap 0x4fe69, meta 0x110a0197), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.012876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.013034+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.013128+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557686d00380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576886b2fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478714 data_alloc: 218103808 data_used: 9620380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea302000/0x0/0x4ffc00000, data 0x4670dd0/0x480a000, compress 0x0/0x0/0x0, omap 0x4fed4, meta 0x110a012c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576861116c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.013250+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.013405+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.013647+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.013850+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.014040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.014236+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.014429+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.014602+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.014815+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.014960+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.015185+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.015560+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.015829+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.016027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.016127+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.016304+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.016498+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.016633+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.016802+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.016942+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.017143+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.017371+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.018296+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.018463+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.018605+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.018810+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.019659+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.019946+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.020249+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.020604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.020963+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.021601+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.864528656s of 39.085803986s, submitted: 59
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.022221+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.022733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.023231+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.023668+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.024040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.024278+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.024493+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.024781+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.024971+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.025169+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.025388+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.025581+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.025856+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.026036+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.026159+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.026384+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.026631+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.027323+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.027554+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.027749+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.027967+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.028174+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.028314+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.028484+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.028691+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.028853+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.029010+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.029183+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.029350+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.029541+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.823881149s of 30.865018845s, submitted: 24
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.029717+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 ms_handle_reset con 0x5576869e1400 session 0x55768cc4a540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.029895+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.030056+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.030264+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.030387+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.030536+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.030777+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.030954+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.031191+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.031341+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.031529+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.031698+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.031866+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.032023+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.032218+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.032406+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.032597+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.032743+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.032909+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.033101+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.033304+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.033473+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.033619+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.033802+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.034013+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.034206+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.034431+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.034667+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.034865+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.035051+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.035362+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.035650+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.035862+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.036059+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.036224+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.036419+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.036572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.036712+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.036855+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.036990+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.037150+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.037279+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.037477+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.037719+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.037848+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.038024+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.038200+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.038397+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.038537+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.038712+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.038908+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.039089+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.039299+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.039463+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.039668+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.039876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.040138+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.040306+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.040468+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.040654+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.040869+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.041101+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.041235+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.041474+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.041702+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.041885+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.042145+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:22.042433+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.042624+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.042837+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.043148+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.043596+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.043861+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.044164+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.044350+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.044523+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.044681+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.044843+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.044997+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.045243+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.045477+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.045655+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.045832+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.046086+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.046260+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.046505+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.046745+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.046935+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.047191+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.047470+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.047821+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.048027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.048182+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.048460+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.048700+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.048920+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.049056+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.049280+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.049440+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.049580+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.049781+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.049949+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.050098+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.050254+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.050416+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.050571+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.050807+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.050997+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.051142+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.051308+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.051512+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.051721+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.051929+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.052159+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.052358+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.052554+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.052751+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.052978+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.053196+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 ms_handle_reset con 0x55768d0abc00 session 0x557686d4f880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.053436+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315342848 unmapped: 81149952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.053793+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.054047+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.054292+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.054479+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327555 data_alloc: 234881024 data_used: 17875868
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.054754+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.055680+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.056046+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.056333+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.056616+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 128.169219971s of 128.219329834s, submitted: 33
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315392000 unmapped: 81100800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 293 ms_handle_reset con 0x557685ffa800 session 0x557688a10540
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217201 data_alloc: 218103808 data_used: 4309916
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.056848+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed31f000/0x0/0x4ffc00000, data 0x1651ff9/0x17ed000, compress 0x0/0x0/0x0, omap 0x50d73, meta 0x1109f28d), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.057467+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.057909+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 294 ms_handle_reset con 0x557686155400 session 0x5576861a81c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.058294+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.059317+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.059620+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.060591+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.061096+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.061460+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.061654+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.061826+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.062032+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.580028534s of 12.689705849s, submitted: 49
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.062457+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.062692+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.063132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102328 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.063283+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 93306880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.063438+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 93290496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.063870+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.064019+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 ms_handle_reset con 0x557686258000 session 0x55768d75c380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.064323+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.064499+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.064736+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.064921+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.065115+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.065254+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.065488+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.065690+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.065965+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.066116+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.066410+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.066683+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.066905+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.067114+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.067322+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.067524+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.067735+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.067928+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.068168+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.068394+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.068616+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.068833+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.069230+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.070260+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.070494+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.070834+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.071006+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.071206+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.071427+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.071714+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.071890+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.072115+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.072356+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.072536+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.072733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.072940+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.073186+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.073382+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.073552+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.073751+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.074010+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.074179+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.074359+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.074585+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.074734+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.074859+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.075031+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.075169+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.075360+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.075532+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.075686+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.075844+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.076050+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.076273+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.076444+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.076591+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.076788+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.077027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.077217+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.077402+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.077640+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.077796+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.078153+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.078303+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.078521+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.078757+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.078922+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.079122+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.079261+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.079438+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.079569+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.079710+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.079895+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.080226+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.080441+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.080631+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s
                                           Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.080880+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.081157+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.081315+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.081491+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.081646+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.081890+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.082178+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.082445+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.082663+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.082811+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.082976+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.083144+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.083352+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.083577+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.083779+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.083984+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.084192+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.084329+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.084574+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.084822+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.085161+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.085806+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.085990+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.124728+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.124893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.125178+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.125496+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.125736+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.125919+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.126132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.126368+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.126538+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.041397095s of 114.333473206s, submitted: 46
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.126763+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302129152 unmapped: 94363648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 297 ms_handle_reset con 0x5576869e1400 session 0x55768bc58380
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.126926+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 94339072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 297 heartbeat osd_stat(store_statfs(0x4eaf80000/0x0/0x4ffc00000, data 0x39e8dc6/0x3b8a000, compress 0x0/0x0/0x0, omap 0x51b9b, meta 0x1109e465), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.127132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 94314496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 ms_handle_reset con 0x55768b3c4400 session 0x5576886b28c0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.127307+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.127535+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.127717+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.127862+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.128026+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.128146+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.128294+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.128391+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.128531+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.128686+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552754402s of 13.607363701s, submitted: 19
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.128844+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.129042+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.129289+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.129429+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.129594+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.129831+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432744 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302268416 unmapped: 94224384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.130057+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.130297+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.130469+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.130666+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.130813+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.130975+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.131164+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.131347+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.131533+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.131675+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.131834+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.132118+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.132388+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.132589+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.132772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.133008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.988111496s of 22.149505615s, submitted: 90
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.133221+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 299 ms_handle_reset con 0x557685ffa800 session 0x5576886f8fc0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.133476+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eab09000/0x0/0x4ffc00000, data 0x3e5c562/0x4001000, compress 0x0/0x0/0x0, omap 0x51d59, meta 0x1109e2a7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.133665+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.133883+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.134041+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.134247+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.134450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ee309000/0x0/0x4ffc00000, data 0x65c53c/0x800000, compress 0x0/0x0/0x0, omap 0x51dc7, meta 0x1109e239), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.134682+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.134879+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.135169+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.135352+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.135503+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.135640+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.135789+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.136011+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.136255+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.136405+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.136572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.136721+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.136997+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.137181+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.137333+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.137550+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.137773+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.138239+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.138398+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.138565+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.138704+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.138916+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.139130+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.139273+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.139423+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.139604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.139792+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.139972+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.140287+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.140497+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.140648+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.140810+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.141016+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.141156+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.141282+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.141440+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.141599+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.141794+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.141973+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.142145+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.142321+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.142461+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.142665+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.142846+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.143029+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.143251+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.143408+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.143574+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.143785+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.143925+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.144116+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.144267+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.144412+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.144580+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.144733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.144863+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.145013+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.145171+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.145372+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.145573+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.145719+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.145867+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.146098+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.146493+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.146693+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299532288 unmapped: 96960512 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.146893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.147102+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.147348+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.147577+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.147757+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.147922+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.148198+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.148425+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.148589+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.148772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.148994+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.149192+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.149390+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.149563+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.149767+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.149959+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.150189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.150449+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.150659+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.150886+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.151123+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.151340+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.151526+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.151693+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.151870+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.152105+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.152387+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.152585+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.152750+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.152858+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.153013+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.153162+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.153342+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.153554+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.153742+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.153891+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.154035+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.154312+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.154461+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.154646+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.154806+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.154990+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.155208+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.155436+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.155624+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.155813+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.155976+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.156104+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.156266+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.156437+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.156605+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.156816+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.156967+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.157123+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.157321+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.157502+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.157685+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.157894+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.158111+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.158273+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.158433+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.158568+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.158755+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.158985+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.159214+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.159376+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.159521+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.159723+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.159953+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299679744 unmapped: 96813056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.160232+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.160476+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.160621+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.160791+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.161002+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.161229+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.161458+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.161691+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.161929+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.162177+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.162358+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.162552+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.162804+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.163055+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.163280+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.163579+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.163750+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.164169+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.164386+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.164567+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.164739+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.164893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.165114+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.165349+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 96763904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.165529+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.165690+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.165869+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.166116+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.166402+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.166572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.166754+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.166920+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.167126+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.167296+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.167471+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.167711+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.167914+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.168159+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.168450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.168654+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.168857+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.169037+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.169170+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.169314+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.169516+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.169687+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.169845+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.170489+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.170768+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.170962+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.171209+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.171434+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.171613+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:20.171748+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.172259+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.172472+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.172662+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.172828+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.173176+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.173498+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.173700+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.173961+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.174121+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.174410+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.174810+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.175150+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.175333+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.175550+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.175746+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.175876+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.176019+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.176240+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.176450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.176677+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.176858+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.176998+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.177220+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.177399+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.177657+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.177828+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.177947+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.178174+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.178376+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.178572+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.178768+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.178930+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.179112+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.179297+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.179492+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 96616448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.179658+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.179887+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.180110+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.180262+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.180410+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.180658+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.180838+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.180943+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.181166+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.181361+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.181514+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.181700+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.181837+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.182008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.182175+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.182373+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.182578+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.182715+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.182913+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.183133+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.183294+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.183447+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.184132+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.184287+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.184440+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.184606+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.185269+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.185724+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.186232+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.186834+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.187047+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.187527+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.187767+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.188196+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.188546+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.188848+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.189200+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.189449+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.189739+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 96526336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.190209+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.190509+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.190791+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.191152+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.191332+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.191621+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.191824+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.192180+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.192389+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.192539+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.192905+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.193184+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.193315+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.193528+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.193746+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.193911+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.194143+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.194286+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.194428+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.194577+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.194776+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.194964+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.195151+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.195345+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.196037+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.196209+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.196450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.196705+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.196981+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.197239+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.197490+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.197656+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.197835+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.198198+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.198450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.198707+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.198967+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.199137+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.199537+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.199720+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.199923+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.200107+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.200277+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.200432+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.200599+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.200785+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.200947+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 96411648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.201142+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.201299+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.201477+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.201754+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300097536 unmapped: 96395264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.201970+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.202223+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.202383+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.202656+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.203054+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.203355+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.203529+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.203718+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.203913+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.204245+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.204485+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.204713+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.205017+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.205242+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.205507+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.205783+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.206045+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.206385+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.206604+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.206873+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.207201+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.207403+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.207654+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.207877+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.208209+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.208426+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.208697+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.209047+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.209485+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.209804+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.210052+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.210349+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.210560+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.210768+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.210977+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.211259+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.211496+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.211663+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.211811+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.212008+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.212206+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.212441+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.212651+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.212827+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.213027+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.213199+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.213391+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 366.777954102s of 366.843353271s, submitted: 49
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 301 ms_handle_reset con 0x557686155400 session 0x55768c90da40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.213619+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x65fb78/0x804000, compress 0x0/0x0/0x0, omap 0x52b13, meta 0x1109d4ed), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.213806+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.214062+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3160433 data_alloc: 218103808 data_used: 189267
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 302 ms_handle_reset con 0x557686258000 session 0x557686cacc40
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.214268+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.214457+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.214907+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.215180+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.215348+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182875 data_alloc: 218103808 data_used: 189523
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.215598+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 303 heartbeat osd_stat(store_statfs(0x4edf72000/0x0/0x4ffc00000, data 0x9f3216/0xb9a000, compress 0x0/0x0/0x0, omap 0x52da5, meta 0x1109d25b), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 304 ms_handle_reset con 0x5576869e1400 session 0x557692b07340
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.215770+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.215935+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.216158+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.334288597s of 11.534142494s, submitted: 81
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.216368+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.216530+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.216665+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.216807+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.216932+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.217096+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.217242+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.217395+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.217567+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.217733+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.217915+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.218050+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.218352+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.218506+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.218669+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.218871+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.219131+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.219345+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.219579+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.219837+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.220152+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.220326+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.220565+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.220751+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.220985+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.221207+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.221324+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.221534+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.221743+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.221970+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.222260+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.222420+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.222639+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.222846+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.223061+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.223296+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.223450+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.223599+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.223748+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.223945+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.224200+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.224388+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.224548+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.224700+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.224900+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.225118+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.225285+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7fc00
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.789546967s of 47.797679901s, submitted: 15
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.225480+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.225602+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 ms_handle_reset con 0x557687f7fc00 session 0x55768b427880
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.225759+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.225982+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151061 data_alloc: 218103808 data_used: 190108
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.226140+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.226312+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.226458+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.226629+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.226781+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.226964+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.227189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.227376+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.227560+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.227795+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.227966+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.228152+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.228344+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.228562+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.228755+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.228918+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.230612+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.231357+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.233189+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.233386+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.233557+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.233722+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.233944+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.234211+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.234411+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.234699+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.234867+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.258179+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.258885+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.259211+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.259393+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.260182+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.260478+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.260688+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.260929+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.261140+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.261314+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.261541+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.261740+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.261913+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 96141312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.262119+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 96133120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.262291+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.262469+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.262625+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.262807+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.262989+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.263234+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.263415+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.263537+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.263724+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.263884+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.264035+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.264263+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.264477+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.264772+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.264997+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.265179+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.265431+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.265627+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.265848+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.266040+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.266312+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.266538+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.266729+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 96083968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.266991+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.267247+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.267428+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.267644+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.267893+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.268144+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.268368+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.269247+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.269418+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.269561+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.269708+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 95911936 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.269854+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:03 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:03 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.269978+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:04:03 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:04:03 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.270127+0000)
Feb 28 11:04:03 compute-0 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:04:03 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:04:03 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.22998 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:04:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 11:04:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988020636' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:04:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:04 compute-0 ceph-mon[76304]: from='client.22992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: from='client.22996 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/573064748' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3988020636' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:04:04 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:04:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 11:04:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653001906' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:04:04 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23006 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:04 compute-0 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:04 compute-0 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:04 compute-0 nova_compute[243452]: 2026-02-28 11:04:04.787 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 11:04:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115761211' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23010 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: from='client.22998 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: pgmap v3014: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:05 compute-0 ceph-mon[76304]: from='client.23002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/653001906' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2115761211' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 11:04:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454852689' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23014 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 11:04:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028395648' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:04:05 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:06 compute-0 ceph-mon[76304]: from='client.23006 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:06 compute-0 ceph-mon[76304]: from='client.23010 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3454852689' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:04:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4028395648' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:04:06 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23022 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 28 11:04:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209041268' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 11:04:06 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:07 compute-0 ceph-mon[76304]: from='client.23014 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:07 compute-0 ceph-mon[76304]: from='client.23018 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:07 compute-0 ceph-mon[76304]: pgmap v3015: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:07 compute-0 ceph-mon[76304]: from='client.23022 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1209041268' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 11:04:07 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:07 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:04:07 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:04:07.167+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:04:07 compute-0 nova_compute[243452]: 2026-02-28 11:04:07.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:07 compute-0 nova_compute[243452]: 2026-02-28 11:04:07.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:07 compute-0 nova_compute[243452]: 2026-02-28 11:04:07.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:07 compute-0 nova_compute[243452]: 2026-02-28 11:04:07.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:04:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 28 11:04:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164528576' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 11:04:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 28 11:04:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150463311' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62087168 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865563 data_alloc: 251658240 data_used: 36185480
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:34.234315+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 62046208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8ead000/0x0/0x4ffc00000, data 0x5ac7c4a/0x5c5f000, compress 0x0/0x0/0x0, omap 0x6719b, meta 0x11088e65), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:35.234487+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x562fffc001c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3ac00 session 0x56300273f500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332832768 unmapped: 62046208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x5630007ffdc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:36.234615+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 58793984 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:37.234793+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:38.234981+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815252 data_alloc: 251658240 data_used: 32440696
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:39.235261+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9639000/0x0/0x4ffc00000, data 0x5327c07/0x54bc000, compress 0x0/0x0/0x0, omap 0x67613, meta 0x110889ed), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:40.235459+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:41.235832+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:42.236042+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336470016 unmapped: 58408960 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9639000/0x0/0x4ffc00000, data 0x5327c07/0x54bc000, compress 0x0/0x0/0x0, omap 0x67613, meta 0x110889ed), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.976147652s of 15.230655670s, submitted: 124
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:43.236286+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336502784 unmapped: 58376192 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3815268 data_alloc: 251658240 data_used: 32440696
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:44.236430+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4000 session 0x563000340e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffadf340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335831040 unmapped: 59047936 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a38400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x56300273ec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:45.236581+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:46.236867+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:47.237039+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb0c0000/0x0/0x4ffc00000, data 0x38b8bd4/0x3a4b000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:48.237513+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 75841536 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560608 data_alloc: 234881024 data_used: 15013701
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:49.237713+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c58c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c58c00 session 0x56300273ea80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:50.237898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:51.238131+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:52.238326+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:53.238516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578346 data_alloc: 234881024 data_used: 15017762
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:54.238714+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:55.238936+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:56.239183+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:57.239342+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:58.239506+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578346 data_alloc: 234881024 data_used: 15017762
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae96000/0x0/0x4ffc00000, data 0x3ae3bd4/0x3c76000, compress 0x0/0x0/0x0, omap 0x672fe, meta 0x11088d02), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x562fffba1a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:32:59.239691+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300dcfd400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfd400 session 0x562fffadefc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:00.239843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffba1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a38400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 74784768 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.832296371s of 18.060344696s, submitted: 51
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a38400 session 0x56300238b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:01.239990+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:02.240251+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c58c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:03.240439+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599110 data_alloc: 234881024 data_used: 17189666
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:04.240707+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:05.240924+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:06.241135+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:07.241331+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:08.241505+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599110 data_alloc: 234881024 data_used: 17189666
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eae70000/0x0/0x4ffc00000, data 0x3b07c07/0x3c9c000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:09.241721+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:10.241891+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:11.242048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 320421888 unmapped: 74457088 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:12.242203+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.238119125s of 11.266804695s, submitted: 12
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323428352 unmapped: 71450624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006905c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905c00 session 0x563002011180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d8c00 session 0x563002c34e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea3400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563002574fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea3400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x5630025756c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:13.242345+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563002224700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323960832 unmapped: 70918144 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730344 data_alloc: 234881024 data_used: 17993506
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:14.242524+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c85000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:15.242732+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:16.242858+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:17.243010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324042752 unmapped: 70836224 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:18.243181+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006906c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006906c00 session 0x56300273f340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323371008 unmapped: 71507968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727581 data_alloc: 234881024 data_used: 18001698
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a49400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:19.243363+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323289088 unmapped: 71589888 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:20.243526+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c8d000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:21.243677+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:22.629239+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323076096 unmapped: 71802880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.760303497s of 11.015823364s, submitted: 72
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:23.629456+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9c8d000/0x0/0x4ffc00000, data 0x4ceac07/0x4e7f000, compress 0x0/0x0/0x0, omap 0x674db, meta 0x11088b25), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323092480 unmapped: 71786496 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3781937 data_alloc: 234881024 data_used: 26610466
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:24.629604+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323092480 unmapped: 71786496 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c58c00 session 0x562fffadf180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x56300222da40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563002472380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:25.629746+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea769000/0x0/0x4ffc00000, data 0x420fbd4/0x43a2000, compress 0x0/0x0/0x0, omap 0x67c44, meta 0x110883bc), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:26.629890+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:27.630014+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:28.630168+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686202 data_alloc: 234881024 data_used: 23556898
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:29.630353+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 322404352 unmapped: 72474624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abc800 session 0x562fffa6b340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9000 session 0x5630015648c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea76a000/0x0/0x4ffc00000, data 0x420fbd4/0x43a2000, compress 0x0/0x0/0x0, omap 0x67c44, meta 0x110883bc), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46400 session 0x563002c35c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:30.630478+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 323477504 unmapped: 71401472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:31.630674+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325033984 unmapped: 69844992 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:32.630814+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325279744 unmapped: 69599232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafc9000/0x0/0x4ffc00000, data 0x39a0bc4/0x3b32000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:33.630997+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325279744 unmapped: 69599232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575805 data_alloc: 234881024 data_used: 14628591
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:34.631189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:35.631313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:36.631450+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.512154579s of 13.937977791s, submitted: 211
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:37.631624+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:38.631771+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575821 data_alloc: 234881024 data_used: 14628591
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:39.631950+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:40.632496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:41.632704+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:42.632959+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:43.633113+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575821 data_alloc: 234881024 data_used: 14628591
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:44.633286+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:45.633438+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:46.633634+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:47.633793+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:48.633932+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576077 data_alloc: 234881024 data_used: 14636783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:49.634161+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:50.634378+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:51.634632+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3f400 session 0x562fffc01880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.207843781s of 15.209612846s, submitted: 1
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563000340e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:52.634801+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.6 total, 600.0 interval
                                           Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 41K writes, 15K syncs, 2.72 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4949 writes, 18K keys, 4949 commit groups, 1.0 writes per commit group, ingest: 19.96 MB, 0.03 MB/s
                                           Interval WAL: 4949 writes, 2024 syncs, 2.45 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325361664 unmapped: 69517312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eafce000/0x0/0x4ffc00000, data 0x39acbc4/0x3b3e000, compress 0x0/0x0/0x0, omap 0x685b2, meta 0x11087a4e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001d9bc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:53.634969+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:54.635165+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:55.635374+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:56.635572+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:57.635772+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:58.635967+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: get_auth_request con 0x563002a3f400 auth_method 0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:59.636168+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6bc00 session 0x563000340700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300dcfd000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:00.636378+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006904c00 session 0x562fffb87340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x56300205c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006914800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:01.636560+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:02.636759+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:03.636937+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413195 data_alloc: 218103808 data_used: 4908783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:04.637126+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:05.637256+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ec2c8000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 81207296 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300dcfc000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x5630022a8a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae9400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9400 session 0x5630022f3c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630043d0c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002147500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffca1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:06.637396+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a49400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.193162918s of 14.241909027s, submitted: 35
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563002574000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630043d0c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002a6a540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae9400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9400 session 0x56300238a380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300dcfc000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x5630029aca80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffc01dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:07.637529+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:08.637697+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517492 data_alloc: 218103808 data_used: 4908783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:09.637907+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x377fc35/0x3913000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:10.638134+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:11.638398+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:12.638556+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1f9000/0x0/0x4ffc00000, data 0x377fc35/0x3913000, compress 0x0/0x0/0x0, omap 0x68a22, meta 0x110875de), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212b000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300212b000 session 0x563001da01c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:13.638676+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x562fffa6b500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314220544 unmapped: 80658432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517492 data_alloc: 218103808 data_used: 4908783
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630015e6380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002147a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:14.638804+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314499072 unmapped: 80379904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212b000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:15.638976+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 314499072 unmapped: 80379904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb1d3000/0x0/0x4ffc00000, data 0x37a3c67/0x3939000, compress 0x0/0x0/0x0, omap 0x686b9, meta 0x11087947), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:16.639129+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 318324736 unmapped: 76554240 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x563001564fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x562fffadefc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x5630026de700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x5630004fe1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.702307701s of 10.852206230s, submitted: 51
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:17.639312+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8800 session 0x5630029ac000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630022f3c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563002472380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563001564fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c400 session 0x56300205ce00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:18.639472+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658083 data_alloc: 234881024 data_used: 21319422
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:19.639682+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eabc2000/0x0/0x4ffc00000, data 0x3db3c77/0x3f4a000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:20.639826+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:21.640026+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319406080 unmapped: 75472896 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:22.640202+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:23.640377+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658083 data_alloc: 234881024 data_used: 21319422
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eabc2000/0x0/0x4ffc00000, data 0x3db3c77/0x3f4a000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:24.640583+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 319414272 unmapped: 75464704 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea2c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea2c00 session 0x563001da1500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:25.640758+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325664768 unmapped: 69214208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:26.640894+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327081984 unmapped: 67796992 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:27.641002+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:28.641158+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804968 data_alloc: 234881024 data_used: 29722382
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:29.641371+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:30.641566+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:31.641753+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:32.641915+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:33.642082+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3804968 data_alloc: 234881024 data_used: 29722382
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:34.642257+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x56300238bc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300b961400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9dc2000/0x0/0x4ffc00000, data 0x4bb1c9a/0x4d49000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:35.642441+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327614464 unmapped: 67264512 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:36.642700+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.093086243s of 19.357521057s, submitted: 142
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328990720 unmapped: 65888256 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:37.642889+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:38.643275+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9a09000/0x0/0x4ffc00000, data 0x4f6bc9a/0x5103000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840382 data_alloc: 234881024 data_used: 29837070
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:39.643585+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006851400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006851400 session 0x5630029acfc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6dc00 session 0x563002c42700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004075400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x5630029ad340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328728576 unmapped: 66150400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6b400 session 0x5630008b2c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c59c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c59c00 session 0x5630008c4e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c59c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006c59c00 session 0x56300238a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004075400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x56300273e380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6b400 session 0x5630022f28c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6dc00 session 0x563001d9aa80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:40.643754+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328941568 unmapped: 65937408 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:41.643957+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328941568 unmapped: 65937408 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:42.644166+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328949760 unmapped: 65929216 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:43.644359+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3867883 data_alloc: 234881024 data_used: 29837730
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:44.644505+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:45.644644+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:46.644797+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aab000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.789917946s of 10.007776260s, submitted: 107
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aab000 session 0x562fffadfa40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e976a000/0x0/0x4ffc00000, data 0x5209caa/0x53a2000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329007104 unmapped: 65871872 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004075400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:47.644919+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329056256 unmapped: 65822720 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:48.645062+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3870104 data_alloc: 234881024 data_used: 29842354
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:49.645289+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:50.645499+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329064448 unmapped: 65814528 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:51.645787+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9769000/0x0/0x4ffc00000, data 0x5209ccd/0x53a3000, compress 0x0/0x0/0x0, omap 0x682ae, meta 0x11087d52), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:52.646102+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:53.689562+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3878936 data_alloc: 234881024 data_used: 31212978
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:54.689802+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330317824 unmapped: 64561152 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c800 session 0x563002147880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaa000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaa000 session 0x5630022a8380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563002225dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630008b3500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004075000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075000 session 0x563002a6b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:55.690006+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:56.690321+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:57.690552+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8981000/0x0/0x4ffc00000, data 0x5ff1ccd/0x618b000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.100229263s of 11.311615944s, submitted: 103
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:58.690733+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958365 data_alloc: 234881024 data_used: 31212978
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de3c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de3c00 session 0x563000521880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:59.690895+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8981000/0x0/0x4ffc00000, data 0x5ff1ccd/0x618b000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630043d0c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630043d0c00 session 0x563002a6a8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:00.691299+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330645504 unmapped: 64233472 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004074400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074400 session 0x56300238a1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f400 session 0x5630007ff880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:01.691469+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae9800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330776576 unmapped: 64102400 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:02.691579+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332668928 unmapped: 62210048 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:03.691724+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4077214 data_alloc: 251658240 data_used: 41646549
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 54468608 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:04.691949+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b1000/0x0/0x4ffc00000, data 0x63b6d00/0x6552000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:05.692098+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:06.692313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:07.692507+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 54403072 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:08.692631+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.664144516s of 10.839223862s, submitted: 88
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4077558 data_alloc: 251658240 data_used: 41650645
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 54394880 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:09.692771+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 54370304 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:10.692898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:11.693020+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:12.693162+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85b8000/0x0/0x4ffc00000, data 0x63b7d00/0x6553000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:13.693378+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4072086 data_alloc: 251658240 data_used: 41650645
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 54362112 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:14.693582+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 54157312 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:15.693696+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342573056 unmapped: 52305920 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8090000/0x0/0x4ffc00000, data 0x68d8d00/0x6a74000, compress 0x0/0x0/0x0, omap 0x684c0, meta 0x11087b40), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:16.693807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342704128 unmapped: 52174848 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x5630026dea80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004075400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:17.693909+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004075400 session 0x563002574380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342810624 unmapped: 52068352 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:18.694113+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4053236 data_alloc: 251658240 data_used: 39771589
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 52060160 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8689000/0x0/0x4ffc00000, data 0x628dccd/0x6427000, compress 0x0/0x0/0x0, omap 0x68928, meta 0x110876d8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:19.694290+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 52027392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:20.694437+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342851584 unmapped: 52027392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.205799103s of 12.442360878s, submitted: 96
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:21.784656+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffba1500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300212b000 session 0x562fffba1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630045adc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045adc00 session 0x563002aa0e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:22.784788+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:23.784909+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea5b2000/0x0/0x4ffc00000, data 0x43c2c2a/0x4558000, compress 0x0/0x0/0x0, omap 0x68d90, meta 0x11087270), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760600 data_alloc: 234881024 data_used: 21230995
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:24.785098+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2000 session 0x562fffa6b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9800 session 0x562fffb86000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001f03500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:25.785225+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea728000/0x0/0x4ffc00000, data 0x30afbf7/0x3243000, compress 0x0/0x0/0x0, omap 0x69139, meta 0x12226ec7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:26.785402+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:27.785674+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:28.785935+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630025756c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563002c35dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567825 data_alloc: 218103808 data_used: 6247659
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2000 session 0x5630022a9180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:29.786155+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:30.786565+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:31.786756+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:32.786967+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:33.787141+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:34.787363+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:35.787583+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:36.787897+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:37.788186+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:38.788423+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:39.788663+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:40.788868+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:41.789048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:42.789327+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:43.789496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324673536 unmapped: 70205440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:44.789665+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:45.789863+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:46.790012+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:47.790146+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:48.790377+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472885 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:49.790757+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4eb127000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324640768 unmapped: 70238208 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:50.790921+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:51.791123+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:52.791258+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324648960 unmapped: 70230016 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f17400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.625631332s of 32.128654480s, submitted: 153
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f17400 session 0x562fffadefc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:53.791405+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520287 data_alloc: 218103808 data_used: 4925624
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:54.791534+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea974000/0x0/0x4ffc00000, data 0x2e66bc4/0x2ff8000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:55.791665+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325697536 unmapped: 69181440 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:56.791818+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325705728 unmapped: 69173248 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:57.792036+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc8ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8ec00 session 0x563002146540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8000 session 0x56300273f6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6800 session 0x5630026de380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8400 session 0x562fffca1880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc8ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8ec00 session 0x5630022f28c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8000 session 0x562fffba01c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f17400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f17400 session 0x563002147880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6800 session 0x563001d9aa80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329ec00 session 0x56300261e8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:58.792217+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568403 data_alloc: 218103808 data_used: 4925624
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:59.792378+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325746688 unmapped: 69132288 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea205000/0x0/0x4ffc00000, data 0x35d4bd4/0x3767000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:00.792513+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:01.792686+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298c00 session 0x563001da1340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea205000/0x0/0x4ffc00000, data 0x35d4bd4/0x3767000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:02.792936+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de0400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc8e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325754880 unmapped: 69124096 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:03.793120+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590331 data_alloc: 218103808 data_used: 8167608
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:04.793255+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300dcfc000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300dcfc000 session 0x562fffba0a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:05.793411+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1e1000/0x0/0x4ffc00000, data 0x35f8bd4/0x378b000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffba0e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:06.793564+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:07.793743+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325386240 unmapped: 69492736 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1e1000/0x0/0x4ffc00000, data 0x35f8bd4/0x378b000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004689400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689400 session 0x5630003e7c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:08.793898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.050533295s of 15.176102638s, submitted: 24
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8c00 session 0x5630007fec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615734 data_alloc: 218103808 data_used: 11878584
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:09.794125+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:10.794302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:11.794441+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1df000/0x0/0x4ffc00000, data 0x35f8c07/0x378d000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 325394432 unmapped: 69484544 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:12.794596+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 324739072 unmapped: 70139904 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:13.794775+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721560 data_alloc: 234881024 data_used: 20186312
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328998912 unmapped: 65880064 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4ea1df000/0x0/0x4ffc00000, data 0x35f8c07/0x378d000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:14.794910+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329244672 unmapped: 65634304 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:15.795041+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:16.795292+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e985f000/0x0/0x4ffc00000, data 0x3f70c07/0x4105000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:17.795445+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:18.795593+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745308 data_alloc: 234881024 data_used: 20650696
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:19.795768+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:20.795949+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.288350105s of 12.520104408s, submitted: 103
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330539008 unmapped: 64339968 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:21.796111+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9864000/0x0/0x4ffc00000, data 0x3f73c07/0x4108000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x12226f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330547200 unmapped: 64331776 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:22.796274+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 332791808 unmapped: 62087168 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:23.796421+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796088 data_alloc: 234881024 data_used: 21364424
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:24.796589+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8067000/0x0/0x4ffc00000, data 0x45a9c07/0x473e000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:25.796756+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:26.796920+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:27.797097+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8067000/0x0/0x4ffc00000, data 0x45a9c07/0x473e000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335118336 unmapped: 59760640 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:28.797224+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796088 data_alloc: 234881024 data_used: 21364424
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334446592 unmapped: 60432384 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:29.797416+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:30.797581+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:31.797774+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:32.797948+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:33.798138+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3789200 data_alloc: 234881024 data_used: 21380808
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:34.798337+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:35.798500+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808b000/0x0/0x4ffc00000, data 0x45acc07/0x4741000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.640681267s of 14.842003822s, submitted: 88
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:36.798638+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334462976 unmapped: 60416000 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:37.798807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 60407808 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:38.798952+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x56300205d180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e808a000/0x0/0x4ffc00000, data 0x45adc07/0x4742000, compress 0x0/0x0/0x0, omap 0x690d6, meta 0x133c6f2a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826053 data_alloc: 234881024 data_used: 21380808
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffade8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 60383232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:39.799126+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006912c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x563002a801c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x563001f028c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006850800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006850800 session 0x562fffa6b880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006850800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006850800 session 0x56300222d500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334495744 unmapped: 60383232 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:40.799271+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x563000521500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x5630021468c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x5630015e6540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006912c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x562fffba1500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006912c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x563000521500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:41.799468+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:42.799625+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7710000/0x0/0x4ffc00000, data 0x4f26c69/0x50bc000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7710000/0x0/0x4ffc00000, data 0x4f26c69/0x50bc000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:43.799784+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3856880 data_alloc: 234881024 data_used: 21380808
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:44.799932+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334700544 unmapped: 60178432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:45.800183+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334716928 unmapped: 60162048 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:46.800366+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.183117867s of 11.356911659s, submitted: 52
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x5630007ffc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 60153856 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:47.800597+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006904400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006904400 session 0x5630008c4700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334725120 unmapped: 60153856 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:48.800826+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770d000/0x0/0x4ffc00000, data 0x4f27c69/0x50bd000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1400 session 0x56300238a380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3885715 data_alloc: 234881024 data_used: 25845448
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:49.801004+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630004de400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630004de400 session 0x563002472380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630004de400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630004de400 session 0x56300238bc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:50.801111+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770c000/0x0/0x4ffc00000, data 0x4f27c79/0x50be000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:51.801221+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334848000 unmapped: 60030976 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:52.801349+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:53.801408+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770c000/0x0/0x4ffc00000, data 0x4f27c79/0x50be000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919221 data_alloc: 234881024 data_used: 31165394
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:54.801544+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:55.801693+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:56.802055+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:57.802216+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e770d000/0x0/0x4ffc00000, data 0x4f28c79/0x50bf000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:58.802399+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335560704 unmapped: 59318272 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.527291298s of 11.558494568s, submitted: 10
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3950583 data_alloc: 234881024 data_used: 31714258
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:59.802548+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:00.802674+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:01.802913+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:02.803145+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e71d8000/0x0/0x4ffc00000, data 0x545dc79/0x55f4000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:03.803327+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 338952192 unmapped: 55926784 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e71d8000/0x0/0x4ffc00000, data 0x545dc79/0x55f4000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3976585 data_alloc: 234881024 data_used: 31841234
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:04.803520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 54435840 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:05.803658+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 53387264 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:06.803830+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e5c000/0x0/0x4ffc00000, data 0x57d9c79/0x5970000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:07.803975+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:08.804128+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3981693 data_alloc: 234881024 data_used: 31906770
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:09.804335+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340836352 unmapped: 54042624 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:10.804493+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.701119423s of 12.182498932s, submitted: 108
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:11.804647+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:12.804808+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340844544 unmapped: 54034432 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6e5b000/0x0/0x4ffc00000, data 0x57dac79/0x5971000, compress 0x0/0x0/0x0, omap 0x6964e, meta 0x133c69b2), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a000 session 0x562fffa6b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e800 session 0x563001564fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:13.804972+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340852736 unmapped: 54026240 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2c00 session 0x563002147a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3894666 data_alloc: 234881024 data_used: 26787794
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:14.805149+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e725b000/0x0/0x4ffc00000, data 0x4e4fc79/0x4fe6000, compress 0x0/0x0/0x0, omap 0x695bf, meta 0x133c6a41), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e725b000/0x0/0x4ffc00000, data 0x4e4fc79/0x4fe6000, compress 0x0/0x0/0x0, omap 0x695bf, meta 0x133c6a41), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:15.805353+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:16.805534+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x56300222d6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1400 session 0x563000520540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 54648832 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298000 session 0x563002538fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:17.805700+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:18.805876+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810023 data_alloc: 234881024 data_used: 21409746
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:19.806079+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:20.806185+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fc8000/0x0/0x4ffc00000, data 0x45b4c07/0x4749000, compress 0x0/0x0/0x0, omap 0x69a17, meta 0x133c65e9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:21.806341+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 55091200 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.566295624s of 10.727230072s, submitted: 92
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0400 session 0x563002a6b500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8e400 session 0x563000341880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:22.806496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 55099392 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x5630022a8700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563001f02c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x5630026de8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:23.807415+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328056832 unmapped: 66822144 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x56300261e8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:24.807598+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:25.807788+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:26.807945+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:27.808186+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:28.808361+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:29.808575+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:30.808678+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:31.808843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:32.808981+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:33.809163+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328073216 unmapped: 66805760 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:34.809326+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:35.809474+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:36.809652+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:37.809819+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:38.809990+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:39.810207+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328081408 unmapped: 66797568 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:40.810373+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 66789376 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:41.810540+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328089600 unmapped: 66789376 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:42.810732+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:43.810922+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:44.811214+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:45.811400+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:46.811584+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:47.811824+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:48.812034+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4800 session 0x563002a6a8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:49.812276+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000925000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000925000 session 0x563002538e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x5630029adc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517804 data_alloc: 218103808 data_used: 4913504
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x5630029ad340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.916614532s of 28.000328064s, submitted: 56
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328097792 unmapped: 66781184 heap: 394878976 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002473340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630052e4800 session 0x562fffadf340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300157e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300157e400 session 0x562fffba0c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002574380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8000 session 0x562fffca1880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:50.812451+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:51.812627+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a2c7, meta 0x133c5d39), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:52.812794+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:53.812997+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:54.813164+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608823 data_alloc: 218103808 data_used: 4917565
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328105984 unmapped: 70443008 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898400 session 0x563002146000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6d000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:55.813312+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3fc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328040448 unmapped: 70508544 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:56.813428+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 331685888 unmapped: 66863104 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:57.813600+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a7aa, meta 0x133c5856), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:58.813782+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9084000/0x0/0x4ffc00000, data 0x35b4c36/0x3748000, compress 0x0/0x0/0x0, omap 0x6a7aa, meta 0x133c5856), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:59.814017+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702288 data_alloc: 234881024 data_used: 20563277
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:00.814158+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:01.814321+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x563002538c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.834800720s of 11.945485115s, submitted: 36
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x563000521880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 333479936 unmapped: 65069056 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3fc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x5630029ad6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:02.814531+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:03.814793+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:04.814962+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528332 data_alloc: 218103808 data_used: 4917565
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:05.815133+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:06.815347+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:07.815528+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9f87000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 327450624 unmapped: 71098368 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004689000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689000 session 0x5630022f2a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630029ad340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:08.815763+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:09.815988+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583805 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:10.816141+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:11.816278+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:12.816470+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:13.816673+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:14.816860+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583805 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:15.817197+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:16.817387+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:17.817843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:18.817993+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328867840 unmapped: 69681152 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004074800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074800 session 0x5630020116c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de3c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de3c00 session 0x5630022a8700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x563002a6b500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:19.818261+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3fc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x563000520540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004074800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.817676544s of 17.979703903s, submitted: 75
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3661698 data_alloc: 218103808 data_used: 4921563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004074800 session 0x563002147a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328802304 unmapped: 69746688 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9808000/0x0/0x4ffc00000, data 0x2e31c26/0x2fc4000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004689000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689000 session 0x562fffba0a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:20.818468+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630015e6380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 69730304 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:21.818646+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328818688 unmapped: 69730304 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:22.818951+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328826880 unmapped: 69722112 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:23.819167+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328343552 unmapped: 70205440 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3fc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8e64000/0x0/0x4ffc00000, data 0x37d3cab/0x3968000, compress 0x0/0x0/0x0, omap 0x6a6d8, meta 0x133c5928), peers [0,2] op hist [1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3fc00 session 0x562fffba0e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:24.819297+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691301 data_alloc: 218103808 data_used: 12480219
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328310784 unmapped: 70238208 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:25.819428+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 328310784 unmapped: 70238208 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea3400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:26.819548+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329940992 unmapped: 68608000 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:27.819718+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330096640 unmapped: 68452352 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:28.819879+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563002539340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x56300273f340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 330096640 unmapped: 68452352 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e97ff000/0x0/0x4ffc00000, data 0x2e31c6c/0x2fc6000, compress 0x0/0x0/0x0, omap 0x6aa4a, meta 0x133c55b6), peers [0,2] op hist [1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a400 session 0x562fffb87340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:29.820096+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637665 data_alloc: 218103808 data_used: 10774587
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:30.820271+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:31.820525+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e97ff000/0x0/0x4ffc00000, data 0x2e31c49/0x2fc5000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x133c5130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:32.820734+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:33.820918+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 329728000 unmapped: 68820992 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.409809113s of 14.661175728s, submitted: 93
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:34.821094+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685957 data_alloc: 218103808 data_used: 10787766
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 334241792 unmapped: 64307200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:35.821275+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335126528 unmapped: 63422464 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:36.821445+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:37.821751+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:38.821998+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:39.822301+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3710401 data_alloc: 218103808 data_used: 11049910
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:40.822506+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ac8000/0x0/0x4ffc00000, data 0x39c8c49/0x3b5c000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:41.822710+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335822848 unmapped: 62726144 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:42.822895+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:43.823100+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:44.823340+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706393 data_alloc: 218103808 data_used: 11054006
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:45.823571+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ab1000/0x0/0x4ffc00000, data 0x39e7c49/0x3b7b000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:46.823993+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:47.824164+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ab1000/0x0/0x4ffc00000, data 0x39e7c49/0x3b7b000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 335839232 unmapped: 62709760 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.914414406s of 13.816678047s, submitted: 110
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:48.824355+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8800 session 0x563001d9a380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:49.824544+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759649 data_alloc: 218103808 data_used: 11054006
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:50.824712+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:51.824873+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:52.825002+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:53.825196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e72e2000/0x0/0x4ffc00000, data 0x41b6c49/0x434a000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 62177280 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:54.825339+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759649 data_alloc: 218103808 data_used: 11054006
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 62169088 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:55.825472+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 62169088 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x563001f02000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:56.825613+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007abcc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abcc00 session 0x5630015e6540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x563001f02fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a7800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a7800 session 0x562fffadf180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x5630008b2e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3e000 session 0x5630003e7dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 62160896 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:57.825757+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69ca000/0x0/0x4ffc00000, data 0x4acec49/0x4c62000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 62152704 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:58.826128+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 62152704 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:59.826340+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818314 data_alloc: 218103808 data_used: 11054518
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336068608 unmapped: 62480384 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:00.826602+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:01.827305+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:02.827501+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffb12c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffb12c00 session 0x563002c34c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69ca000/0x0/0x4ffc00000, data 0x4acec49/0x4c62000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a39400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a39400 session 0x5630029addc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:03.827858+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6d000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x56300222cfc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.657527924s of 16.036277771s, submitted: 37
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f800 session 0x563002c34e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:04.828147+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da9400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861858 data_alloc: 234881024 data_used: 17980342
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:05.828272+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 336076800 unmapped: 62472192 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006851400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:06.828380+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:07.828505+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:08.828755+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69a4000/0x0/0x4ffc00000, data 0x4af3c49/0x4c87000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 341164032 unmapped: 57384960 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:09.829145+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968416 data_alloc: 234881024 data_used: 27521974
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 52256768 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:10.829283+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:11.829584+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:12.829741+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5fe9000/0x0/0x4ffc00000, data 0x54a6c49/0x563a000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:13.829897+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 50995200 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:14.830085+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3996190 data_alloc: 234881024 data_used: 29500342
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 50929664 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:15.830306+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 50929664 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915058136s of 12.146306038s, submitted: 106
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:16.830443+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e575b000/0x0/0x4ffc00000, data 0x5d3dc49/0x5ed1000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 48087040 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:17.830607+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e56f9000/0x0/0x4ffc00000, data 0x5d9fc49/0x5f33000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350461952 unmapped: 48087040 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:18.830806+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 48070656 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:19.830920+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4058962 data_alloc: 234881024 data_used: 30184374
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:20.831123+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:21.831381+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 48005120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:22.831571+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 47964160 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:23.831741+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e56ed000/0x0/0x4ffc00000, data 0x5dabc49/0x5f3f000, compress 0x0/0x0/0x0, omap 0x6aed0, meta 0x14565130), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 47964160 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46000 session 0x563002a6b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x56300238a8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:24.832087+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3d800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x563000520380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902990 data_alloc: 234881024 data_used: 21246902
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348307456 unmapped: 50241536 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:25.832807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348168192 unmapped: 50380800 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:26.832971+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6857000/0x0/0x4ffc00000, data 0x4c41c49/0x4dd5000, compress 0x0/0x0/0x0, omap 0x6b320, meta 0x14564ce0), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006851400 session 0x5630004fe000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9400 session 0x563001d9a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.460074425s of 10.716476440s, submitted: 116
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x562fffca1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 51150848 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:27.833145+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347398144 unmapped: 51150848 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630005216c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:28.833353+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3d800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7815000/0x0/0x4ffc00000, data 0x3a03c49/0x3b97000, compress 0x0/0x0/0x0, omap 0x6b770, meta 0x14564890), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630022f3c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:29.833610+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:30.833817+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:31.833967+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:32.834102+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:33.834323+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:34.834557+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:35.834688+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:36.834836+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:37.835147+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:38.835426+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:39.835683+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:40.835852+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343375872 unmapped: 55173120 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:41.836170+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:42.836346+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:43.836527+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343384064 unmapped: 55164928 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:44.836721+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:45.836928+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:46.837149+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:47.837332+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:48.837514+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343392256 unmapped: 55156736 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:49.837779+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3575266 data_alloc: 218103808 data_used: 4905398
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343400448 unmapped: 55148544 heap: 398548992 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:50.837941+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae9800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.489187241s of 23.583105087s, submitted: 61
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae9800 session 0x562fffca1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6d000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6d000 session 0x5630022f2a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008c41c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x563002c42e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3d800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630008c5340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de6000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:51.838123+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:52.838282+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:53.838449+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:54.838650+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626467 data_alloc: 218103808 data_used: 4913457
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:55.838841+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 58810368 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:56.838973+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343007232 unmapped: 59219968 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:57.839131+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:58.839306+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:59.839543+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626599 data_alloc: 218103808 data_used: 4913457
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:00.839704+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343015424 unmapped: 59211776 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:01.839870+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 59785216 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:02.840019+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:03.840153+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e85fc000/0x0/0x4ffc00000, data 0x2e9ebc4/0x3030000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:04.840251+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675495 data_alloc: 218103808 data_used: 13154609
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:05.840404+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x562fffc00e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3a400 session 0x562fffce6380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 342278144 unmapped: 59949056 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630026df340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:06.840576+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b6800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b6800 session 0x56300273fa40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3d800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.204570770s of 16.290288925s, submitted: 20
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3d800 session 0x5630008b2e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47000 session 0x56300238a8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x56300222cfc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001da16c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630026dfa40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:07.840731+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:08.840901+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7e6e000/0x0/0x4ffc00000, data 0x362ac36/0x37be000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:09.841098+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3732498 data_alloc: 218103808 data_used: 13154609
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:10.841254+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343539712 unmapped: 58687488 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:11.841406+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 55492608 heap: 402227200 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:12.841534+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e74c3000/0x0/0x4ffc00000, data 0x3fcfc36/0x4163000, compress 0x0/0x0/0x0, omap 0x6bbc0, meta 0x14564440), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea3400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x563001d9a8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaa000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaa000 session 0x5630020116c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007abd000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abd000 session 0x5630022a8c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007abd000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007abd000 session 0x5630026de1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x56300205c540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:13.841687+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x56300222c700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69a1000/0x0/0x4ffc00000, data 0x4ae8c98/0x4c7d000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:14.841832+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a39400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a39400 session 0x5630022a9dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3874916 data_alloc: 218103808 data_used: 14231857
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 57786368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:15.842020+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334f400 session 0x562fffc001c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 57778176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:16.842151+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334f400 session 0x563002c43c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.853331566s of 10.227196693s, submitted: 183
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:17.842313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:18.842513+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e698b000/0x0/0x4ffc00000, data 0x4b0cc98/0x4ca1000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:19.842716+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 57958400 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907000 session 0x5630026dec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911325 data_alloc: 234881024 data_used: 18573899
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x563002c34c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:20.842853+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x5630029addc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002f16400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002f16400 session 0x5630026dee00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:21.842967+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348184576 unmapped: 57720832 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:22.843100+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348397568 unmapped: 57507840 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:23.843269+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:24.843499+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6962000/0x0/0x4ffc00000, data 0x4b33cca/0x4cca000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977248 data_alloc: 234881024 data_used: 27425867
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:25.843711+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:26.843865+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:27.844053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:28.844202+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 55812096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6962000/0x0/0x4ffc00000, data 0x4b33cca/0x4cca000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.239691734s of 12.257406235s, submitted: 10
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:29.844368+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 53223424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4047860 data_alloc: 234881024 data_used: 27775563
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:30.844500+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 53133312 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:31.855691+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 52043776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:32.855845+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356794368 unmapped: 49111040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e9a000/0x0/0x4ffc00000, data 0x65f5cca/0x678c000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:33.856370+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358236160 unmapped: 47669248 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:34.856536+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e76000/0x0/0x4ffc00000, data 0x6611cca/0x67a8000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 47554560 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4167062 data_alloc: 234881024 data_used: 28856907
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:35.856757+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 47554560 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:36.856932+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:37.857146+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:38.857316+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:39.857555+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 47546368 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e63000/0x0/0x4ffc00000, data 0x6632cca/0x67c9000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4161198 data_alloc: 234881024 data_used: 28861003
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:40.857723+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 47538176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e63000/0x0/0x4ffc00000, data 0x6632cca/0x67c9000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:41.857872+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.773331642s of 12.197587967s, submitted: 242
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 47529984 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:42.858019+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 47529984 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4e5d000/0x0/0x4ffc00000, data 0x6638cca/0x67cf000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002ea3400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:43.858145+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002ea3400 session 0x56300222c8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a49400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x563001d9a1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:44.858293+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4194793 data_alloc: 234881024 data_used: 28938827
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:45.858458+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 47472640 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:46.858646+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:47.858817+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e4952000/0x0/0x4ffc00000, data 0x6b42d2c/0x6cda000, compress 0x0/0x0/0x0, omap 0x6bdc8, meta 0x14564238), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:48.859053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 47464448 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x56300222c1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x562fffce7a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8000 session 0x562fffb86c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:49.859311+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x563002a6a540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3965140 data_alloc: 234881024 data_used: 16362059
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a49400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:50.859475+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:51.859668+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e638a000/0x0/0x4ffc00000, data 0x50e7c98/0x527c000, compress 0x0/0x0/0x0, omap 0x6c3ec, meta 0x14563c14), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:52.859883+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:53.860027+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351535104 unmapped: 54370304 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e638a000/0x0/0x4ffc00000, data 0x50e7c98/0x527c000, compress 0x0/0x0/0x0, omap 0x6c3ec, meta 0x14563c14), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.327329636s of 12.523733139s, submitted: 92
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x562fffc01180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:54.860161+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008b2380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3848324 data_alloc: 234881024 data_used: 17609670
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:55.878215+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:56.878410+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:57.878566+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:58.878752+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:59.878957+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3848324 data_alloc: 234881024 data_used: 17609670
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:00.879171+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351567872 unmapped: 54337536 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:01.879346+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351576064 unmapped: 54329344 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x3f24c98/0x40b9000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:02.879506+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 54468608 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6dac000/0x0/0x4ffc00000, data 0x46eac98/0x487f000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:03.879633+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351436800 unmapped: 54468608 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:04.879778+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3910436 data_alloc: 234881024 data_used: 18441158
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:05.879937+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:06.880129+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:07.880272+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:08.880409+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:09.880645+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 54804480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3910436 data_alloc: 234881024 data_used: 18441158
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:10.880807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:11.880935+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:12.881056+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:13.881186+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6da5000/0x0/0x4ffc00000, data 0x46f2c98/0x4887000, compress 0x0/0x0/0x0, omap 0x6c83c, meta 0x145637c4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:14.881387+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 351109120 unmapped: 54796288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.755073547s of 20.982717514s, submitted: 105
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000521500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x56300273ec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911900 data_alloc: 234881024 data_used: 18535366
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:15.881510+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffca0000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:16.881775+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:17.881926+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000340700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:18.882095+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002574700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc8f000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8f000 session 0x563002c34c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffba1340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:19.882302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e705e000/0x0/0x4ffc00000, data 0x4439c98/0x45ce000, compress 0x0/0x0/0x0, omap 0x6c44f, meta 0x14563bb1), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857330 data_alloc: 218103808 data_used: 12335972
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:20.882420+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008b28c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001d9ae00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:21.882532+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 60399616 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630005208c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x562fffb868c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:22.882716+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630022f2a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 59351040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x563001d9a380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x562fffce6000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:23.903843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83ab000/0x0/0x4ffc00000, data 0x30e8c26/0x327b000, compress 0x0/0x0/0x0, omap 0x6c897, meta 0x14563769), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:24.904097+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:25.904238+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698486 data_alloc: 218103808 data_used: 4918002
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:26.904356+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 59367424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:27.904509+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:28.904627+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.904795+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.904970+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.905134+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.905433+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.905607+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.905845+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.905997+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.583734512s of 21.009777069s, submitted: 135
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.906151+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 57458688 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.906272+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dfd000/0x0/0x4ffc00000, data 0x3699c59/0x382e000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.906485+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.906696+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.906881+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799496 data_alloc: 234881024 data_used: 14492402
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.907120+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.907257+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.907418+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.907553+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.907698+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796744 data_alloc: 234881024 data_used: 14496498
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.907910+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.908060+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.908235+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36fbc59/0x3890000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.710806847s of 13.025801659s, submitted: 82
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.908898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630022a9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x56300222da40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 56131584 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.909026+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646460 data_alloc: 218103808 data_used: 4917490
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x563002574fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.909161+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.909307+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.909468+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.909630+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.909835+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644559 data_alloc: 218103808 data_used: 4913394
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d9000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630008c5500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x562fffc01c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffc016c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.910015+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.910199+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x562fffba0000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d9000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630004ff6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.910405+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x56300273efc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de0000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0000 session 0x562fffce7340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.910646+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.910839+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.911107+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.911266+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.911571+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.911797+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.911989+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.912148+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.912306+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.912940+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.913187+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.425588608s of 20.792432785s, submitted: 85
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563000520380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.913328+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691067 data_alloc: 218103808 data_used: 4921469
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.913501+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.913644+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.913813+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.913982+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.914151+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.914306+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.914491+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.914662+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.914886+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.915094+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.915287+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.685478210s of 11.705242157s, submitted: 10
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 58195968 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.915453+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.915607+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.915817+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.916007+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804967 data_alloc: 218103808 data_used: 11819133
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.916174+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.916362+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.916532+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.916772+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.916959+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804983 data_alloc: 218103808 data_used: 11819133
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.917137+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.917268+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.917412+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.917592+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.917794+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805239 data_alloc: 218103808 data_used: 11827325
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.917968+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.918128+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.995227814s of 16.248806000s, submitted: 101
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6c00 session 0x56300238a380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f000 session 0x56300222c700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x562fffce6380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x56300222da40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.918331+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.918652+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.918785+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3859568 data_alloc: 218103808 data_used: 11827325
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.918973+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.919136+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.919346+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.919540+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf8400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8400 session 0x562fffce7340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.919718+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004299800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861245 data_alloc: 218103808 data_used: 11827325
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.919884+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.920106+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.920282+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.920445+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.920643+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916669 data_alloc: 234881024 data_used: 17889846
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.920848+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.920994+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.921129+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.772537231s of 15.883464813s, submitted: 28
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.921333+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.921471+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916885 data_alloc: 234881024 data_used: 17889846
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.921703+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 65011712 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,0,0,0,1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.921877+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60211200 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.922048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.922367+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.922644+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970057 data_alloc: 234881024 data_used: 19615286
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.922839+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.923248+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.923440+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204720497s of 10.448961258s, submitted: 70
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.923669+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.924000+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970329 data_alloc: 234881024 data_used: 19623478
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.924203+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.924366+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f000 session 0x563002539340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004299800 session 0x563002c421c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.924520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,2])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x563002c43c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.924740+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.924954+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800321 data_alloc: 218103808 data_used: 8706614
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f74000/0x0/0x4ffc00000, data 0x3524bf7/0x36b8000, compress 0x0/0x0/0x0, omap 0x6dfed, meta 0x14562013), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.925147+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c42e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630008c4c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.925302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x562fffce6540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.925621+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.925833+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.926133+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.926319+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.926511+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.926707+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.926889+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.927017+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.927189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.927450+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.927694+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.927856+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.928024+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.928284+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.928456+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.928751+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.929158+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.929323+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.929432+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.929556+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.929767+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.930045+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.930247+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.930421+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.930539+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.930678+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.931058+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.931249+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.931465+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.931671+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.931863+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.932012+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.932183+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630015e6380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffce6fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x56300222c540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563000520700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.949302673s of 42.108650208s, submitted: 99
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 63913984 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x5630026df6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630004fe000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015e9400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015e9400 session 0x562fffba0c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630007fec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.932371+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c348c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.932545+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.932696+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.932905+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.933030+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721561 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.933171+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001f02000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67919872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.6 total, 600.0 interval
                                           Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s
                                           Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.933308+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.933496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.933634+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.933764+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.935152+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.935343+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.935503+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.935761+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.935906+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.936084+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.936213+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.936356+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.338871002s of 17.436998367s, submitted: 22
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 64675840 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.955989+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.956236+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832685 data_alloc: 218103808 data_used: 12786707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.956443+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.956732+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.956965+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.957206+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.957486+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.957647+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.957870+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.958024+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d82000/0x0/0x4ffc00000, data 0x3716bf7/0x38aa000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.958270+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.958420+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563000520380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004689800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689800 session 0x56300205c1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563002c42a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.958541+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x5630008b3340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.907081604s of 13.095699310s, submitted: 66
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x563001d9ba40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x562fffb87340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006912c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x56300238b180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001d9afc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563002574000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.958790+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.959175+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.959358+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.959516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906016 data_alloc: 218103808 data_used: 12786707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.959833+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300b960c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300b960c00 session 0x563001f02fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.959960+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.960141+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.960360+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.960532+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.960654+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.960774+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.961005+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.961109+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.961264+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.961389+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.961508+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.627857208s of 16.728017807s, submitted: 31
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.961714+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 56852480 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.961855+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.961987+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.962154+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.962329+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.962526+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.962750+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.962963+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.963118+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.963259+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.963392+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.963576+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.963779+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708907127s of 12.569359779s, submitted: 90
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x562fffb861c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9c00 session 0x5630029ad340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039350 data_alloc: 234881024 data_used: 25961491
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.963888+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x56300273f340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 58261504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.964111+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d76000/0x0/0x4ffc00000, data 0x3722bf7/0x38b6000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 58253312 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.964289+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.964491+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.964692+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840646 data_alloc: 218103808 data_used: 12786707
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.964871+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d6f000/0x0/0x4ffc00000, data 0x3729bf7/0x38bd000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba16c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x5630022a9dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.965058+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 58277888 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630008c4c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.965223+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.965412+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.965589+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.965810+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.965971+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.966236+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.966481+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.966672+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.966845+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.966995+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.967196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.973935+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.974156+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.974399+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.974576+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.974797+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.975053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.975336+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.975516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.975686+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.975844+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.976032+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.976169+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.976356+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.976521+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.976700+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.976915+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.977176+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.977358+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.977520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.977649+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.977839+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 63307776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006905400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905400 session 0x562fffb86700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300205c1c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630007fec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba0000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.977987+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.295139313s of 39.636463165s, submitted: 162
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 62242816 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630022f2c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x563002c34c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffb87c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x5630029ad6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300273f500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.978122+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.978363+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.978547+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.978723+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.978832+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x563002473340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.979195+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8c00 session 0x563002c42380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffba0c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.979305+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002c42a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353583104 unmapped: 63348736 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.979502+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 63676416 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.979643+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.979798+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.979953+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.980186+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.980369+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.980580+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.980745+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.980898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.981040+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.666275024s of 17.794780731s, submitted: 43
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.981160+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 57753600 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.981261+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e628e000/0x0/0x4ffc00000, data 0x4063c58/0x41f8000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 57647104 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.981420+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.981619+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932965 data_alloc: 234881024 data_used: 16882159
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.981765+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.981903+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.982597+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.982777+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.982938+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933117 data_alloc: 234881024 data_used: 16882159
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.983137+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.983284+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.983449+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.983629+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.983869+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933373 data_alloc: 234881024 data_used: 16890351
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.984058+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.984226+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.984367+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.984514+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.938798904s of 17.121707916s, submitted: 144
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002538c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b000 session 0x563000520c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x562fffce6fc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001da1c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.984663+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.984849+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.985189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.985392+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.985534+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a99800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a99800 session 0x563000520380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630045ad000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045ad000 session 0x5630008b3340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.985690+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x563000340700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002147a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.985810+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.985954+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.986139+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.986278+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.986418+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967924 data_alloc: 234881024 data_used: 18733551
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.986682+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.986830+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.987008+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.814036369s of 14.911996841s, submitted: 29
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.987239+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.987479+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968068 data_alloc: 234881024 data_used: 18733551
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.987644+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [0,0,0,0,0,3,6])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.062243+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 54755328 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.062471+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.062649+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.062832+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.063255+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.063411+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.063635+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.063826+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.063983+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.064165+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.064329+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.064519+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.440196991s of 14.719416618s, submitted: 60
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.064684+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.064867+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941854 data_alloc: 234881024 data_used: 16951791
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.065033+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.065226+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x563002574700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x562fffba1340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4098c58/0x422d000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.065355+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a41000 session 0x563002575500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.065563+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.065719+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.065894+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.066137+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.066402+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.066601+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.066783+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.066955+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.067153+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.067362+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.067648+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.067888+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.068165+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.068421+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.068705+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.068944+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.069150+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.069323+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.069495+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.069766+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.069960+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.070146+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.070330+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.070535+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.070744+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.070909+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.071134+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.071312+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.071485+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.071647+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.071802+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.072006+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.072160+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.072322+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.072516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.072821+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.073057+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.073279+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.221851349s of 42.453300476s, submitted: 113
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x562fffadfa40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.073511+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.073661+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.073828+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.073970+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.074151+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.074552+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.074774+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.074946+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.075139+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.075304+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.075417+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.075582+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.075728+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.075863+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.076035+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.076147+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.076341+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.076519+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.076675+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.076823+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.046377182s of 20.163766861s, submitted: 13
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 61046784 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.077053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x3804bc4/0x3996000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.077274+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.077440+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.077574+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a74000/0x0/0x4ffc00000, data 0x387ebc4/0x3a10000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3904073 data_alloc: 234881024 data_used: 16845112
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.077722+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.077965+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.078211+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.078443+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.078663+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900481 data_alloc: 234881024 data_used: 16845112
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.078871+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a5b000/0x0/0x4ffc00000, data 0x389fbc4/0x3a31000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036879539s of 10.267202377s, submitted: 60
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.079049+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.079322+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a3c000/0x0/0x4ffc00000, data 0x38bebc4/0x3a50000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.079541+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98400 session 0x562fffb86700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.079723+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3901945 data_alloc: 234881024 data_used: 16853304
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.079896+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6a400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563004b6a400 session 0x563001f02e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563002a41000 session 0x5630008c41c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007a98400 session 0x5630020116c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 382451712 unmapped: 47079424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.080021+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 heartbeat osd_stat(store_statfs(0x4e5986000/0x0/0x4ffc00000, data 0x496f824/0x4b04000, compress 0x0/0x0/0x0, omap 0x6feaa, meta 0x15700156), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007aaac00 session 0x5630015e68c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 286 ms_handle_reset con 0x56300334ec00 session 0x562fffb87a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0dc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371097600 unmapped: 58433536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.080294+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x562fffc0dc00 session 0x562fffba1a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.080437+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563002a41000 session 0x562fffce76c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x56300334ec00 session 0x563002c42380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007a98400 session 0x563000341880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.080604+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4041853 data_alloc: 234881024 data_used: 23828792
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.080893+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.081123+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e5974000/0x0/0x4ffc00000, data 0x497df6a/0x4b14000, compress 0x0/0x0/0x0, omap 0x70a87, meta 0x156ff579), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529747963s of 10.955096245s, submitted: 94
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007aaac00 session 0x56300273ec40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.081311+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:56.081566+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.081807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844847 data_alloc: 218103808 data_used: 4905272
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.082403+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.082613+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6b84000/0x0/0x4ffc00000, data 0x3771f6a/0x3908000, compress 0x0/0x0/0x0, omap 0x70ebf, meta 0x156ff141), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.082929+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.083181+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6b7f000/0x0/0x4ffc00000, data 0x37739e9/0x390b000, compress 0x0/0x0/0x0, omap 0x70f46, meta 0x156ff0ba), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.083362+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a46000 session 0x563002c42a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x5630022a88c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x5630022a9340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849155 data_alloc: 218103808 data_used: 4905272
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007a98400 session 0x562fffba0000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007aaac00 session 0x5630007ffc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffd93800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x562fffd93800 session 0x563002c43dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x56300222c700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369156096 unmapped: 60375040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.083569+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x562fffba0e00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e673b000/0x0/0x4ffc00000, data 0x3bb99e9/0x3d51000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.083728+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.084006+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.084133+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a7000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734997749s of 12.027949333s, submitted: 52
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369434624 unmapped: 60096512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.084289+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6431000/0x0/0x4ffc00000, data 0x3ec39e9/0x405b000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968002 data_alloc: 234881024 data_used: 25452989
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 289 ms_handle_reset con 0x56300645f400 session 0x563002a81340
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.084485+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.084661+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.084886+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.085122+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.085302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0b577/0x2fa3000, compress 0x0/0x0/0x0, omap 0x712fd, meta 0x156fed03), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845228 data_alloc: 218103808 data_used: 9718717
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.085518+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.085756+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.085989+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.086152+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.086301+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.752949715s of 10.863764763s, submitted: 72
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853182 data_alloc: 218103808 data_used: 9947483
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.086462+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.086604+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.086816+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.087018+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.087171+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854974 data_alloc: 218103808 data_used: 10385755
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.087299+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.088050+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.088245+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.088418+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630040a7000 session 0x562fffc00a80
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300157f000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300157f000 session 0x562fffc01500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.088584+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x29c6ff6/0x2b60000, compress 0x0/0x0/0x0, omap 0x71519, meta 0x156feae7), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.088808+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.089238+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.089513+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.089712+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.089889+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.090048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.090252+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.090440+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.090588+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.090781+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.090908+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.091127+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.091355+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.091541+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.091683+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.091852+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.092010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.092193+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.092366+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.092586+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.092757+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.092918+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.093136+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.093308+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.093500+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.093717+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.093932+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.094141+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.094323+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.094498+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.094669+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.094869+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563002a3f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: get_auth_request con 0x56300157f000 auth_method 0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.095118+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300dcfd000 session 0x5630008c5a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645e800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a3ec00 session 0x563002c43500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6cc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563006914800 session 0x562fffc00000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.095287+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.095440+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.095605+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.095796+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x563001f03500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x5630029ac8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630007ff500
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x563002146380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630045adc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.592243195s of 47.662048340s, submitted: 52
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [0,0,0,2])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.096044+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630045adc00 session 0x563002a6b6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630029ac8c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.096220+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71ff000/0x0/0x4ffc00000, data 0x30f3ff6/0x328d000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x562fffba1dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.096516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x562fffc00000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x5630008c5a40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836513 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.096728+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.096877+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362455040 unmapped: 67076096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.097105+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.097281+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71fe000/0x0/0x4ffc00000, data 0x30f4019/0x328e000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.097425+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300329f400 session 0x563002c43dc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x562fffba0540
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.097635+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.097821+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.098010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.098187+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.098351+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.098537+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.098713+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.098910+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.099115+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.099476+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.099634+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.099795+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.100022+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.100303+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.100490+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.100651+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.100829+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.101799+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.101941+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.102146+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.102288+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.102476+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.102733+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300b961400 session 0x56300273fdc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212cc00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.102959+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.103146+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.103361+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.103609+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.104150+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.104498+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.104764+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.105279+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.875541687s of 39.054084778s, submitted: 43
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.105644+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.105843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.106296+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.107029+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.107443+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.107643+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.107883+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.108599+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.109059+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.109451+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.109657+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.110010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.110382+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.110731+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.110896+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.111100+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.111477+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.111617+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.111809+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.112058+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.112249+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.112521+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.112818+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.113117+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.113347+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.113520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.113769+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.113960+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.114189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.114342+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.114522+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a7c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.903411865s of 30.936098099s, submitted: 22
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.114684+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 ms_handle_reset con 0x5630040a7c00 session 0x562fffa6b6c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.114833+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.115010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.115198+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.115916+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.116165+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.116331+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.116517+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.116764+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.117265+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.117465+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.117657+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.117824+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.118038+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.118275+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.118522+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.118700+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.118870+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.119055+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.119294+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.119483+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.119677+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.119932+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.120114+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.120304+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.120533+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.120842+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.121021+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.121194+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.121379+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.121600+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.121807+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.121979+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.122155+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.122373+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.122579+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.122734+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.122896+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.123118+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.123264+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.123445+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.123656+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.123794+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.124052+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.124312+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.124528+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.124712+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.124874+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.125046+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.125310+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.125506+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.125680+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.125957+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.126171+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.126394+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.126988+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.127291+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.127515+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.127730+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.127937+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.128242+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.128417+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.128664+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.128860+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.129184+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.129392+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.129749+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:22.130174+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.130335+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.130491+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.130707+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.130928+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.131216+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.131410+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.131635+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.131845+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.132197+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.132396+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.132534+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.132804+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.133051+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.133267+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.133441+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.133632+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.133786+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.133980+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.134137+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.134298+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.134495+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.134686+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.134959+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.135215+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.135433+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.135697+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.135938+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.136167+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.136348+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.136594+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.136836+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.137026+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.137189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.137438+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.137690+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.137884+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.138884+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.139090+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.139326+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.139520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.139722+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069453453' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.139915+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.140105+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.140279+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.140480+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.140636+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.140846+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.141113+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.141319+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.141516+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.141705+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006915000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.484336853s of 118.528816223s, submitted: 35
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x563006915000 session 0x5630022a9880
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffb12c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621740 data_alloc: 218103808 data_used: 4901211
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x562fffb12c00 session 0x5630008c4c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.141877+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.142040+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.142211+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.142433+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.142651+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621608 data_alloc: 218103808 data_used: 4901211
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.143277+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.143835+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.144397+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.144683+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.144869+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 293 ms_handle_reset con 0x563003da8400 session 0x56300238afc0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592192 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.145055+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 293 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.145215+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3c400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.145366+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.014619827s of 13.094200134s, submitted: 43
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 294 ms_handle_reset con 0x563002a3c400 session 0x5630003e6700
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.145733+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.146048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.146305+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.146573+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.146746+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09a000/0x0/0x4ffc00000, data 0x253e12/0x3f0000, compress 0x0/0x0/0x0, omap 0x75c00, meta 0x156fa400), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.146940+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.147137+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.147325+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.147495+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.147851+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.148192+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.148452+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104069710s of 12.140682220s, submitted: 28
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598801 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.148682+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.148938+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.149157+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.149321+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8099000/0x0/0x4ffc00000, data 0x2255891/0x23f3000, compress 0x0/0x0/0x0, omap 0x7595f, meta 0x156fa6a1), peers [0,2] op hist [0,0,0,1])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 ms_handle_reset con 0x5630040a6c00 session 0x562fffba01c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.149506+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.149650+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.149823+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.150043+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.150212+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.150365+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.150501+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.150738+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.150960+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.151170+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.151386+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.151613+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.151799+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.151993+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.152159+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.152361+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.152585+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.152794+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.152963+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.153196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.153357+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.153553+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.153686+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.153842+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.154128+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.154313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.154481+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.154732+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.154993+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.155209+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.155375+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.155586+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.155789+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.155962+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.156117+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.156311+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.156491+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.156712+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.156970+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.157187+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.157400+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.157589+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.157761+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.157915+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.158044+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.158199+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.158325+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.158520+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.158679+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.158862+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.159024+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.159196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.159339+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.159528+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.159683+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.159881+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.160058+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.160357+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.160549+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.160727+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.160863+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.161013+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.161214+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.161374+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.161525+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.161729+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.161872+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.162119+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.162341+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.162504+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.6 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                           Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.162650+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.162829+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.163518+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.163725+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.163926+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.164114+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.164229+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.164365+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.164513+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.164724+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.164948+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.165150+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.165354+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.165590+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.165803+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.165898+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.166211+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.166528+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.166767+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.166912+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.167140+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.167286+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.167488+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.167692+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.167853+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.168038+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.168290+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.168462+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.168616+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.168750+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.168906+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.169388+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.169623+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.169768+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.169899+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.170104+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.170238+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.170463+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.617111206s of 111.771911621s, submitted: 17
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359440384 unmapped: 70090752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.170595+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 297 ms_handle_reset con 0x5630007a8800 session 0x563002c356c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359464960 unmapped: 70066176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.170738+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 70057984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.170895+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 ms_handle_reset con 0x563006907c00 session 0x563002472380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e7420000/0x0/0x4ffc00000, data 0x2ec8fec/0x306a000, compress 0x0/0x0/0x0, omap 0x76256, meta 0x156f9daa), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.171033+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.171172+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.171354+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.171487+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.171653+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.171777+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.171861+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.171995+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.172206+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.172347+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.149941444s of 13.237030983s, submitted: 29
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.172473+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.172634+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.172774+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.172905+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.173105+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.173265+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.173466+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.173715+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.173878+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.174042+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.174218+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.174402+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.174571+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.174780+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.174913+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.175112+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.175288+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.176036+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.176411+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.176935+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.177469+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.177635+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.955919266s of 22.110708237s, submitted: 90
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 69885952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.178105+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 299 ms_handle_reset con 0x56300212a000 session 0x56300238a000
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.178550+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.178832+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.179193+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683971 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.179388+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.179686+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.180000+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.180343+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.180624+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.180826+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.181190+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.181480+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.181734+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.181975+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.182241+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.182433+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.182638+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.182821+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.183022+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.183252+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.183485+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.183679+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.183885+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.184059+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.184295+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.184451+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.184663+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.184819+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.185006+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.185176+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.185302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.185440+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.185586+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.187002+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.187195+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 69763072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.187901+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.188161+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.188309+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.188432+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.188618+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.188795+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.189030+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.189180+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.189398+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.189639+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.189806+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.189952+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.190116+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.190297+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.190444+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.190587+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.190816+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.191021+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.191158+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.191338+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.191496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.191643+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.191791+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.191950+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.192107+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.192256+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.192496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.192636+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.192815+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.193031+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.193195+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.193342+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.193518+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.193630+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.193767+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 69681152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.193861+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.194039+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.194222+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.194380+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.194630+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.194749+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.194925+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.195104+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.195257+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.195432+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.195605+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.195775+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.195964+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.196189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.196458+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.196601+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.196757+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.196937+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.197118+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.197298+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.197448+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.197610+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.197781+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.197944+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.198222+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.198324+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.198478+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.198630+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.198772+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.198934+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.199111+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.199317+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.199471+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.199602+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.199722+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.199843+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.200007+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.200143+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.200377+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.200526+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 69599232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.201292+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 69591040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.201387+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.201573+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.201724+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.202002+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.202141+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.202330+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.202460+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.202605+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.202750+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.202907+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.203209+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.203356+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.203496+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.203670+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.203856+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.204025+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.204188+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.204328+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.204501+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.204665+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.204806+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.204949+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.205143+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.205298+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.205502+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.205695+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.205870+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.206117+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.206288+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.206420+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.206647+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.206792+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.206918+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.207127+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.207349+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.207552+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.207744+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.207924+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.208129+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.208283+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.208449+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.208639+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.208870+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.209161+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.209316+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.209669+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360038400 unmapped: 69492736 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.209878+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.210225+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.210370+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.210624+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.210820+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.211004+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.211203+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.211443+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.211653+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.211869+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.212040+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.212235+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.212420+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.212659+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.212852+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.213011+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.213166+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.213394+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.213575+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.213771+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.213956+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.214142+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.214370+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.214555+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.214770+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.214939+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.215108+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.215297+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.215482+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.215691+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.215831+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.216016+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.216202+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.216338+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.216607+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.217133+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.217512+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.029696+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.031004+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.031488+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.031787+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 69402624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.031952+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.032151+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.032322+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.032675+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.032938+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.033132+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.033345+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.033547+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.033738+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.034116+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.034247+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.034375+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.034517+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.034861+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.035038+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.035342+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.035504+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.035841+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.036013+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.036189+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.036327+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.036474+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.036655+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.036886+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.037055+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.037220+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.037403+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.037569+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.037753+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.037897+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.038053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.038232+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.038399+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.038541+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.038699+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.038827+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.039013+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.039160+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.039343+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.039512+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.039699+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.040203+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.040435+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.040596+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.040744+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.040963+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.041196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.041359+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.041679+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.041839+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.042048+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.042251+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.042428+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.042616+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.042850+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.043012+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.043157+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.043803+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.044955+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.045154+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.045330+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.045548+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.045716+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.045918+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.046085+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.046295+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.046546+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.046740+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.046902+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.047166+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.047374+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.047584+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.047764+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.047931+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.048149+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.048273+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.048426+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.048581+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.048712+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.048860+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.049053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.049228+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.049437+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.049621+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.049803+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.049981+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.050165+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.050309+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.050486+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.050628+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.050768+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.050931+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.051089+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.051352+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.051543+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.051742+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.052002+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.052196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.052400+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.052560+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.052736+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.052936+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.053236+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.053497+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.053681+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.053941+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.054190+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.054413+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.054601+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.054792+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.054991+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.055179+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.055351+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.055500+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.055642+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.055837+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.055990+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.056201+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.056373+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.056779+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.056975+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.057156+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.057313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.057503+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.057702+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.057879+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.058043+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.058228+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.058546+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.058725+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.058950+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.059108+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.059313+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.059491+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.059734+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.059987+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.060226+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.060382+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.060504+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.060658+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.060814+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.060933+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.061109+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.061362+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 69091328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.061595+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.061860+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.062043+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.062252+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.062458+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.062639+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.062818+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.062982+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.063163+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 69066752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.063339+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.063675+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.063857+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.064061+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.064322+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.064500+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.064666+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.064867+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.065049+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.065309+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.065511+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.065773+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.066011+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.066308+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.066512+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d8800
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 366.986419678s of 367.067871094s, submitted: 52
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.066648+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 301 ms_handle_reset con 0x5630035d8800 session 0x5630022256c0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 69009408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.066835+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 301 heartbeat osd_stat(store_statfs(0x4ea084000/0x0/0x4ffc00000, data 0x25fde7/0x406000, compress 0x0/0x0/0x0, omap 0x77a8c, meta 0x156f8574), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c59c00
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.066976+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.067127+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628525 data_alloc: 218103808 data_used: 248120
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 302 ms_handle_reset con 0x563006c59c00 session 0x563000520c40
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.067402+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.067615+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.067831+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 302 heartbeat osd_stat(store_statfs(0x4ea081000/0x0/0x4ffc00000, data 0x2619b4/0x408000, compress 0x0/0x0/0x0, omap 0x778e2, meta 0x156f871e), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 68943872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.068010+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a99400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea07f000/0x0/0x4ffc00000, data 0x26344f/0x40b000, compress 0x0/0x0/0x0, omap 0x7850a, meta 0x156f7af6), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.068283+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700002 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 304 ms_handle_reset con 0x563007a99400 session 0x5630004ff180
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.068568+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.068720+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.068926+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.069056+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.069302+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702408 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.496150970s of 13.664681435s, submitted: 92
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.069487+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.069847+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.070011+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.070383+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.070560+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.070694+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.070819+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.070974+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.071167+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.071389+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.071593+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.071793+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.071960+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.072130+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.072373+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.072606+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.072756+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.072900+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.073022+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.073160+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.073344+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.073508+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.073689+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.073862+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.074077+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.074347+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.074511+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.074798+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.074993+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.075158+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.075312+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.075448+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.075637+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.075828+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.075998+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.076216+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.076328+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.076468+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.076618+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.076767+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.076921+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.077179+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.077307+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.077452+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.077625+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3c400
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.422519684s of 45.444473267s, submitted: 13
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.077786+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 68771840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.077962+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 306 ms_handle_reset con 0x563002a3c400 session 0x563002574380
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.078236+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.078405+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.078550+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643995 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.078732+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.078871+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.079050+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.079291+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.079533+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.079713+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.079871+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.080023+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.080196+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.080369+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.080529+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.080739+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.080902+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.081061+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.081269+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.081548+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.081734+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.081897+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.082207+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.082370+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.082549+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.082727+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.082891+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.083053+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.083434+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.083724+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.083962+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.084159+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.084359+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.084558+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.084746+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.084919+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.085056+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.085245+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.085430+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.085665+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.085822+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.086059+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.086437+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.086585+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.087013+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.087474+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.087662+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.087929+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.088128+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.088342+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.088529+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.088829+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.089025+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.089171+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.089374+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.089605+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.089780+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.089980+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.090229+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.090409+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.090577+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.090751+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.090910+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.091122+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.091328+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.091529+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.091713+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.091887+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.092113+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.092275+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.092499+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.092647+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.092824+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.092981+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.093117+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.093237+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.093395+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.093523+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.093688+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.093828+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.093990+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:34.094121+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 67493888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:35.094253+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}'
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362217472 unmapped: 67313664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:36.094365+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:07 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:07 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:04:07 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 67338240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:04:07 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:37.094488+0000)
Feb 28 11:04:07 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:04:07 compute-0 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:04:08 compute-0 ceph-mon[76304]: from='client.23024 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:08 compute-0 ceph-mon[76304]: from='client.23028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4164528576' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 11:04:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1150463311' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 28 11:04:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2069453453' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 11:04:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 28 11:04:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291969807' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.332 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.333 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 28 11:04:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468588432' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.379 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:08 compute-0 nova_compute[243452]: 2026-02-28 11:04:08.379 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 28 11:04:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363091576' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 28 11:04:08 compute-0 rsyslogd[1017]: imjournal from <np0005634017:ceph-osd>: begin to drop messages due to rate-limiting
Feb 28 11:04:08 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:04:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 28 11:04:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343033175' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: pgmap v3016: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/291969807' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2468588432' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/363091576' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2343033175' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 28 11:04:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392146419' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 28 11:04:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601943042' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 28 11:04:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542943371' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 28 11:04:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 28 11:04:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1818145720' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 28 11:04:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267789720' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1392146419' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/601943042' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1542943371' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1818145720' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531323484' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 28 11:04:10 compute-0 nova_compute[243452]: 2026-02-28 11:04:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:10 compute-0 nova_compute[243452]: 2026-02-28 11:04:10.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:04:10 compute-0 nova_compute[243452]: 2026-02-28 11:04:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:04:10 compute-0 sudo[397322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:04:10 compute-0 sudo[397322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:10 compute-0 sudo[397322]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:10 compute-0 nova_compute[243452]: 2026-02-28 11:04:10.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:04:10 compute-0 sudo[397386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:04:10 compute-0 sudo[397386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772390333' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 11:04:10 compute-0 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223982814' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 28 11:04:10 compute-0 sudo[397386]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:04:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:04:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:04:10 compute-0 sudo[397495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:04:10 compute-0 sudo[397495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:10 compute-0 sudo[397495]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:11 compute-0 sudo[397522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:04:11 compute-0 sudo[397522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23062 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 28 11:04:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235011693' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: pgmap v3017: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4267789720' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2531323484' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/772390333' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1223982814' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1235011693' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.269708319 +0000 UTC m=+0.037426261 container create cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:04:11 compute-0 systemd[1]: Started libpod-conmon-cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff.scope.
Feb 28 11:04:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.252062179 +0000 UTC m=+0.019784901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.359687937 +0000 UTC m=+0.127405889 container init cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.367729565 +0000 UTC m=+0.135447507 container start cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.370963566 +0000 UTC m=+0.138681508 container attach cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 11:04:11 compute-0 peaceful_sutherland[397661]: 167 167
Feb 28 11:04:11 compute-0 systemd[1]: libpod-cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff.scope: Deactivated successfully.
Feb 28 11:04:11 compute-0 conmon[397661]: conmon cffc23741f442c9e8c71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff.scope/container/memory.events
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.373606701 +0000 UTC m=+0.141324643 container died cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 28 11:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c900a523f1f5f14c7ed459a9f0ff3b542567edc30c787d1720bd2243eea5bf40-merged.mount: Deactivated successfully.
Feb 28 11:04:11 compute-0 podman[397630]: 2026-02-28 11:04:11.422743503 +0000 UTC m=+0.190461445 container remove cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:04:11 compute-0 systemd[1]: libpod-conmon-cffc23741f442c9e8c718db345c0338a975264c8e4e7e5e563e5baf8227a13ff.scope: Deactivated successfully.
Feb 28 11:04:11 compute-0 crontab[397697]: (root) LIST (root)
Feb 28 11:04:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23066 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:11 compute-0 podman[397704]: 2026-02-28 11:04:11.562613114 +0000 UTC m=+0.051963442 container create 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:04:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23064 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:11 compute-0 systemd[1]: Started libpod-conmon-9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf.scope.
Feb 28 11:04:11 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:11 compute-0 podman[397704]: 2026-02-28 11:04:11.540176669 +0000 UTC m=+0.029527057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:11 compute-0 podman[397704]: 2026-02-28 11:04:11.651316286 +0000 UTC m=+0.140666634 container init 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 11:04:11 compute-0 podman[397704]: 2026-02-28 11:04:11.659849058 +0000 UTC m=+0.149199386 container start 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:04:11 compute-0 podman[397704]: 2026-02-28 11:04:11.669180232 +0000 UTC m=+0.158530580 container attach 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:04:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:04:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:12 compute-0 relaxed_austin[397737]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:04:12 compute-0 relaxed_austin[397737]: --> All data devices are unavailable
Feb 28 11:04:12 compute-0 systemd[1]: libpod-9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf.scope: Deactivated successfully.
Feb 28 11:04:12 compute-0 podman[397704]: 2026-02-28 11:04:12.101713822 +0000 UTC m=+0.591064170 container died 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 11:04:12 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23070 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aaab149b654f40347f45f96fc7d3a9c2cc7922ff95857056416066575bc17c6-merged.mount: Deactivated successfully.
Feb 28 11:04:12 compute-0 podman[397704]: 2026-02-28 11:04:12.144184415 +0000 UTC m=+0.633534743 container remove 9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 28 11:04:12 compute-0 systemd[1]: libpod-conmon-9ff9e08f58a721a90e772822a53d835617db6ecf496e50a98d4d819404716aaf.scope: Deactivated successfully.
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 43540480 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.612444878s of 11.719148636s, submitted: 16
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7fd1dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:01.243124+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 43540480 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb3c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:02.243281+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314793984 unmapped: 43540480 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:03.243468+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3748424 data_alloc: 234881024 data_used: 22159015
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:04.243695+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9551000/0x0/0x4ffc00000, data 0x54267a2/0x55bb000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:05.243833+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:06.247612+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9551000/0x0/0x4ffc00000, data 0x54267a2/0x55bb000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9551000/0x0/0x4ffc00000, data 0x54267a2/0x55bb000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:07.247834+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:08.248130+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9551000/0x0/0x4ffc00000, data 0x54267a2/0x55bb000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747776 data_alloc: 234881024 data_used: 22159015
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:09.248275+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:10.248420+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e954f000/0x0/0x4ffc00000, data 0x54277a2/0x55bc000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:11.248589+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317579264 unmapped: 40755200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:12.248732+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e954f000/0x0/0x4ffc00000, data 0x54277a2/0x55bc000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.247822762s of 11.706583023s, submitted: 3
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318308352 unmapped: 40026112 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c77876c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c6d6d500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec400 session 0x55d7c49fb6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c7fd1180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:13.248917+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318595072 unmapped: 39739392 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7ca866400 session 0x55d7c4ebe700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7ca866400 session 0x55d7c7fd1180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec400 session 0x55d7c9f3c8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c49fb6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818758 data_alloc: 234881024 data_used: 22441639
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:14.249103+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 39657472 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:15.249290+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 39657472 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:16.249429+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 39657472 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x5d057b1/0x5e9b000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:17.249578+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c62000/0x0/0x4ffc00000, data 0x5d057b1/0x5e9b000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318676992 unmapped: 39657472 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:18.249723+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49ee540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 39346176 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3816519 data_alloc: 234881024 data_used: 22441639
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cc800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:19.249868+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318988288 unmapped: 39346176 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:20.250216+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319528960 unmapped: 38805504 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c4c000/0x0/0x4ffc00000, data 0x5d297d4/0x5ec0000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:21.250478+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319545344 unmapped: 38789120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319545344 unmapped: 38789120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:22.629280+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.925846100s of 11.145988464s, submitted: 78
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319545344 unmapped: 38789120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:23.630029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848919 data_alloc: 251658240 data_used: 27652005
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c4c000/0x0/0x4ffc00000, data 0x5d297d4/0x5ec0000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319578112 unmapped: 38756352 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:24.630150+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb3c00 session 0x55d7c4853180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec400 session 0x55d7c9e916c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316555264 unmapped: 41779200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:25.630309+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316555264 unmapped: 41779200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:26.630543+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316555264 unmapped: 41779200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:27.630723+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316555264 unmapped: 41779200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:28.631209+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3697559 data_alloc: 234881024 data_used: 16490405
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316555264 unmapped: 41779200 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:29.631398+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9b78000/0x0/0x4ffc00000, data 0x4dfd7d4/0x4f94000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c9e90380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb0000 session 0x55d7c7438700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c78c0c40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 305094656 unmapped: 53239808 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:30.631545+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb16c000/0x0/0x4ffc00000, data 0x3743762/0x38d8000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55296000 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:31.631707+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead6d000/0x0/0x4ffc00000, data 0x3b42762/0x3cd7000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:32.631902+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead6d000/0x0/0x4ffc00000, data 0x3b42762/0x3cd7000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:33.632136+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3541217 data_alloc: 218103808 data_used: 5826453
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:34.632318+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ead6d000/0x0/0x4ffc00000, data 0x3b42762/0x3cd7000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:35.632457+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:36.632616+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.536297798s of 13.826168060s, submitted: 91
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:37.632863+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:38.633039+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538393 data_alloc: 218103808 data_used: 5830451
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eae14000/0x0/0x4ffc00000, data 0x3b63762/0x3cf8000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:39.633238+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:40.633797+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:41.633999+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 53854208 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:42.634212+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eae0d000/0x0/0x4ffc00000, data 0x3b6a762/0x3cff000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:43.634515+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538625 data_alloc: 218103808 data_used: 5830451
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:44.634681+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:45.634830+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:46.635169+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.74 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4164 writes, 16K keys, 4164 commit groups, 1.0 writes per commit group, ingest: 17.42 MB, 0.03 MB/s
                                           Interval WAL: 4164 writes, 1650 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:47.635347+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eae0d000/0x0/0x4ffc00000, data 0x3b6a762/0x3cff000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:48.635431+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538881 data_alloc: 218103808 data_used: 5838643
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eae0d000/0x0/0x4ffc00000, data 0x3b6a762/0x3cff000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 53796864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:49.635695+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.313487053s of 13.333294868s, submitted: 4
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:50.635912+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:51.636178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eadfe000/0x0/0x4ffc00000, data 0x3b79762/0x3d0e000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c4ebf340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79cc800 session 0x55d7c87ba540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 53788672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:52.636350+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c7fd0000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eadfe000/0x0/0x4ffc00000, data 0x3b79762/0x3d0e000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:53.636585+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437086 data_alloc: 218103808 data_used: 136484
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:54.636758+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb782000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:55.636965+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:56.637101+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb782000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc ms_handle_reset ms_handle_reset con 0x55d7c7020800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: get_auth_request con 0x55d7c6fb3c00 auth_method 0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:57.637247+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:58.637427+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437086 data_alloc: 218103808 data_used: 136484
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:33:59.637578+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:00.637777+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cd365400 session 0x55d7c8ecd180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:01.637911+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:02.638093+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb782000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:03.638252+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb782000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437086 data_alloc: 218103808 data_used: 136484
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:04.638407+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:05.638529+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c764a000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c4c876c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb0000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb0000 session 0x55d7c4ebf500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c7fd1a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:06.638709+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.073602676s of 16.109085083s, submitted: 24
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c87baa80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c4c87340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cc800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79cc800 session 0x55d7c4b27880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c77861c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7cbf4a540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb782000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:07.638872+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:08.639012+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454955 data_alloc: 218103808 data_used: 140545
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:09.639167+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:10.639315+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb4a7000/0x0/0x4ffc00000, data 0x34d2730/0x3665000, compress 0x0/0x0/0x0, omap 0x62efb, meta 0x1108d105), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:11.639479+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:12.639623+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:13.640023+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454955 data_alloc: 218103808 data_used: 140545
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c7667dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:14.640141+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:15.640338+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:16.640474+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb4a7000/0x0/0x4ffc00000, data 0x34d2730/0x3665000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57614336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.798456192s of 10.852668762s, submitted: 16
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:17.640590+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c49ef500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 56918016 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cc800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79cc800 session 0x55d7c77868c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:18.640749+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519224 data_alloc: 218103808 data_used: 3286273
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:19.640922+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:20.641107+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eaf24000/0x0/0x4ffc00000, data 0x3a54792/0x3be8000, compress 0x0/0x0/0x0, omap 0x63583, meta 0x1108ca7d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:21.641281+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:22.641490+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7ca866400 session 0x55d7c49de000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:23.641675+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519224 data_alloc: 218103808 data_used: 3286273
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c900f340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:24.641818+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 56983552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c90b96c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c8ecd340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:25.642049+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 56967168 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cc800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c586f000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:26.642202+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57466880 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eac21000/0x0/0x4ffc00000, data 0x3d57792/0x3eeb000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:27.642334+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:28.642469+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eabfc000/0x0/0x4ffc00000, data 0x3d7c792/0x3f10000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574367 data_alloc: 218103808 data_used: 7641857
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:29.642616+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:30.642773+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:31.642904+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eabfc000/0x0/0x4ffc00000, data 0x3d7c792/0x3f10000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:32.643050+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.256127357s of 15.428375244s, submitted: 80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:33.643249+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573959 data_alloc: 218103808 data_used: 7641857
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:34.643400+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x3d7f792/0x3f13000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:35.643569+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79bcc00 session 0x55d7c4c87dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c70bc400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:36.643737+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57638912 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:37.643909+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 50561024 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fd8000/0x0/0x4ffc00000, data 0x499a792/0x4b2e000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:38.644118+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308518912 unmapped: 49815552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659127 data_alloc: 218103808 data_used: 8441601
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:39.644283+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308518912 unmapped: 49815552 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c6e9ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:40.644430+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 49594368 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:41.644561+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 49594368 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:42.644737+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 49594368 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e96d8000/0x0/0x4ffc00000, data 0x5291792/0x5425000, compress 0x0/0x0/0x0, omap 0x63769, meta 0x1108c897), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.693429947s of 10.035803795s, submitted: 160
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:43.644949+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308666368 unmapped: 49668096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708253 data_alloc: 218103808 data_used: 8447435
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:44.645107+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308723712 unmapped: 49610752 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c8ecd880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:45.645266+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308723712 unmapped: 49610752 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c5894fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:46.645448+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c58941c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308723712 unmapped: 49610752 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49fafc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:47.645605+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 49405952 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e969e000/0x0/0x4ffc00000, data 0x52da792/0x546e000, compress 0x0/0x0/0x0, omap 0x64869, meta 0x1108b797), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:48.645963+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308969472 unmapped: 49364992 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712606 data_alloc: 218103808 data_used: 8447435
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:49.646110+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308969472 unmapped: 49364992 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:50.646275+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 308969472 unmapped: 49364992 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:51.646429+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311132160 unmapped: 47202304 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:52.646680+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311631872 unmapped: 46702592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9698000/0x0/0x4ffc00000, data 0x52e0792/0x5474000, compress 0x0/0x0/0x0, omap 0x64869, meta 0x1108b797), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:53.646876+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311631872 unmapped: 46702592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3763726 data_alloc: 234881024 data_used: 16938955
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:54.647113+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311631872 unmapped: 46702592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742bc00 session 0x55d7c7683180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ccc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ccc00 session 0x55d7c4bd41c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c7682540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.851370811s of 12.142789841s, submitted: 166
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742bc00 session 0x55d7c90b8a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c90b8e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6f24000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6f24000 session 0x55d7c4ebf500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c8ecd180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e916c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:55.647338+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:56.647592+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e960d000/0x0/0x4ffc00000, data 0x5369804/0x54ff000, compress 0x0/0x0/0x0, omap 0x64869, meta 0x1108b797), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:57.647852+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:58.648514+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774759 data_alloc: 234881024 data_used: 16938955
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742bc00 session 0x55d7c9e908c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:34:59.648681+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c75e3dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:00.648838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312868864 unmapped: 45465600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb2c00 session 0x55d7c7fd1a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c9f3d340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:01.648980+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312999936 unmapped: 45334528 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e95e6000/0x0/0x4ffc00000, data 0x5390804/0x5526000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:02.649130+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316071936 unmapped: 42262528 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:03.649290+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316416000 unmapped: 41918464 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8e2c000/0x0/0x4ffc00000, data 0x5b4a804/0x5ce0000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843658 data_alloc: 234881024 data_used: 17813467
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:04.649740+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 41476096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:05.649897+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 41476096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:06.650057+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d88000/0x0/0x4ffc00000, data 0x5bee804/0x5d84000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 41476096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:07.652430+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 41476096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:08.652595+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316858368 unmapped: 41476096 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.621756554s of 13.912006378s, submitted: 119
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842154 data_alloc: 234881024 data_used: 17817563
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:09.652747+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:10.652904+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:11.653082+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d66000/0x0/0x4ffc00000, data 0x5c0f804/0x5da5000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:12.653423+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:13.653665+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843570 data_alloc: 234881024 data_used: 17817563
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:14.653896+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 316866560 unmapped: 41467904 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:15.654061+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317759488 unmapped: 40574976 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:16.654237+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e867c000/0x0/0x4ffc00000, data 0x62f4804/0x648a000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318840832 unmapped: 39493632 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c761a8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c49ee540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:17.654386+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c75e28c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317710336 unmapped: 40624128 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:18.654566+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317710336 unmapped: 40624128 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746792 data_alloc: 218103808 data_used: 9137627
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.933092117s of 10.153239250s, submitted: 106
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:19.654710+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317710336 unmapped: 40624128 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e91a0000/0x0/0x4ffc00000, data 0x520a804/0x53a0000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:20.654859+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317710336 unmapped: 40624128 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:21.655037+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c77876c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317718528 unmapped: 40615936 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c764b500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:22.655142+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317530112 unmapped: 40804352 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:23.655294+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317530112 unmapped: 40804352 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687132 data_alloc: 218103808 data_used: 5995995
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:24.655535+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4b26fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742bc00 session 0x55d7c7666700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317530112 unmapped: 40804352 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c4ebe700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 sudo[397522]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:25.655698+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea52d000/0x0/0x4ffc00000, data 0x444a792/0x45de000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317554688 unmapped: 40779776 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea52d000/0x0/0x4ffc00000, data 0x444a792/0x45de000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:26.655865+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317554688 unmapped: 40779776 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:27.655990+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317554688 unmapped: 40779776 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea52d000/0x0/0x4ffc00000, data 0x444a792/0x45de000, compress 0x0/0x0/0x0, omap 0x641a1, meta 0x1108be5f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:28.656231+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79cc800 session 0x55d7c8f6d880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c586f000 session 0x55d7c90b8fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 317554688 unmapped: 40779776 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3626944 data_alloc: 218103808 data_used: 5305705
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.960047722s of 10.076869011s, submitted: 63
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c9f3c000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:29.656407+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:30.656607+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:31.656773+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64387, meta 0x1108bc79), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:32.657010+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:33.657398+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487872 data_alloc: 218103808 data_used: 146281
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:34.657587+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:35.657781+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:36.657960+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:37.658169+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64387, meta 0x1108bc79), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:38.658363+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487872 data_alloc: 218103808 data_used: 146281
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:39.658587+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:40.658794+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:41.659014+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:42.659235+0000)
Feb 28 11:04:12 compute-0 ceph-mon[76304]: from='client.23062 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:12 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64387, meta 0x1108bc79), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:43.659480+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487872 data_alloc: 218103808 data_used: 146281
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:44.659694+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:45.659985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:46.660198+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:47.660444+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64387, meta 0x1108bc79), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:48.660636+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb7a4000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64387, meta 0x1108bc79), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487872 data_alloc: 218103808 data_used: 146281
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:49.660841+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:50.661165+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:51.661379+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:52.661564+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 314056704 unmapped: 44277760 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c5895500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7666000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742bc00 session 0x55d7c900e380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c42ec800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c42ec800 session 0x55d7c75e2e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cdc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.231277466s of 24.293329239s, submitted: 36
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:53.661761+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49cdc00 session 0x55d7c58516c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd41c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c586f000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c586f000 session 0x55d7c75e28c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c900e380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c5894fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 313090048 unmapped: 45244416 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527281 data_alloc: 218103808 data_used: 150279
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:54.662034+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb2eb000/0x0/0x4ffc00000, data 0x368d740/0x3821000, compress 0x0/0x0/0x0, omap 0x645a3, meta 0x1108ba5d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 313090048 unmapped: 45244416 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:55.662254+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 313090048 unmapped: 45244416 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:56.662489+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 313090048 unmapped: 45244416 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb2eb000/0x0/0x4ffc00000, data 0x368d740/0x3821000, compress 0x0/0x0/0x0, omap 0x645a3, meta 0x1108ba5d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:57.662635+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 313090048 unmapped: 45244416 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c900f340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c586f000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c586f000 session 0x55d7c3161c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:58.662794+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 47644672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547018 data_alloc: 218103808 data_used: 150279
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb14a000/0x0/0x4ffc00000, data 0x382d7a2/0x39c2000, compress 0x0/0x0/0x0, omap 0x645a3, meta 0x1108ba5d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:35:59.662997+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 47644672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c764a000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:00.663207+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 310689792 unmapped: 47644672 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb14a000/0x0/0x4ffc00000, data 0x382d7a2/0x39c2000, compress 0x0/0x0/0x0, omap 0x645a3, meta 0x1108ba5d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c74388c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:01.663419+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb14a000/0x0/0x4ffc00000, data 0x382d7a2/0x39c2000, compress 0x0/0x0/0x0, omap 0x645a3, meta 0x1108ba5d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c8f6c700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e91a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 310640640 unmapped: 47693824 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:02.663572+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb148000/0x0/0x4ffc00000, data 0x382d7d5/0x39c4000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c586f000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 310640640 unmapped: 47693824 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:03.663762+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311844864 unmapped: 46489600 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578786 data_alloc: 218103808 data_used: 4731159
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb148000/0x0/0x4ffc00000, data 0x382d7d5/0x39c4000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:04.663937+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c8f6ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 46350336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:05.664221+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 46350336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:06.664412+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 46350336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:07.664712+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 46350336 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:08.664929+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fcfc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fcfc00 session 0x55d7c49df6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.059307098s of 15.220128059s, submitted: 70
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7667dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 46342144 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582815 data_alloc: 218103808 data_used: 4735255
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:09.665071+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb124000/0x0/0x4ffc00000, data 0x38517d5/0x39e8000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 46342144 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:10.665254+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 46342144 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb124000/0x0/0x4ffc00000, data 0x38517d5/0x39e8000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:11.665395+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 46342144 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:12.666301+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fcfc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 312623104 unmapped: 45711360 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:13.667378+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319217664 unmapped: 39116800 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670233 data_alloc: 218103808 data_used: 6952727
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4eb124000/0x0/0x4ffc00000, data 0x38517d5/0x39e8000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:14.667500+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320315392 unmapped: 38019072 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:15.667687+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 37765120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:16.667880+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 37765120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:17.668092+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea3c2000/0x0/0x4ffc00000, data 0x45ab7d5/0x4742000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 37765120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:18.668222+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 37765120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682033 data_alloc: 218103808 data_used: 7182103
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:19.668842+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320569344 unmapped: 37765120 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:20.669384+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.250706673s of 12.517893791s, submitted: 114
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 38436864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:21.669883+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319897600 unmapped: 38436864 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:22.670155+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea3a8000/0x0/0x4ffc00000, data 0x45cd7d5/0x4764000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321855488 unmapped: 36478976 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:23.670339+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322142208 unmapped: 36192256 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3713429 data_alloc: 218103808 data_used: 7407383
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:24.670464+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322142208 unmapped: 36192256 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:25.670785+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322142208 unmapped: 36192256 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:26.671116+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322142208 unmapped: 36192256 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:27.671336+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322142208 unmapped: 36192256 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fd4000/0x0/0x4ffc00000, data 0x49a07d5/0x4b37000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:28.671600+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706333 data_alloc: 218103808 data_used: 7415575
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:29.671825+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:30.672034+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fb1000/0x0/0x4ffc00000, data 0x49c47d5/0x4b5b000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:31.672386+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:32.672692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:33.672946+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707573 data_alloc: 218103808 data_used: 7427863
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:34.673284+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fb1000/0x0/0x4ffc00000, data 0x49c47d5/0x4b5b000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:35.673568+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.684562683s of 14.848427773s, submitted: 90
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:36.673787+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:37.673958+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 321871872 unmapped: 36462592 heap: 358334464 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:38.674186+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c75f9dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7cbf4aa80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2d400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9fa7000/0x0/0x4ffc00000, data 0x49ce7d5/0x4b65000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2d400 session 0x55d7c4ebee00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c7683a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c5851c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322019328 unmapped: 49963008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c5895500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810001 data_alloc: 218103808 data_used: 7427863
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c7683180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:39.674294+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c8f6dc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2d400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2d400 session 0x55d7c90b8e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322019328 unmapped: 49963008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:40.674444+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49fafc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8f89000/0x0/0x4ffc00000, data 0x59eb7e5/0x5b83000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323223552 unmapped: 48758784 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:41.674668+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 48750592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:42.674892+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 48750592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:43.675159+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 48750592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861555 data_alloc: 218103808 data_used: 7427863
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:44.675414+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8706000/0x0/0x4ffc00000, data 0x626e7e5/0x6406000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323231744 unmapped: 48750592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:45.675603+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8706000/0x0/0x4ffc00000, data 0x626e7e5/0x6406000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323174400 unmapped: 48807936 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:46.675758+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.147439003s of 11.346515656s, submitted: 50
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 48791552 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c90b8a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:47.675979+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c742b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7a23400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323190784 unmapped: 48791552 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:48.676133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 48775168 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3907865 data_alloc: 234881024 data_used: 14487241
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:49.676292+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cd67f800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cd67f800 session 0x55d7c7438000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323207168 unmapped: 48775168 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:50.676438+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e86e1000/0x0/0x4ffc00000, data 0x6292808/0x642b000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79d0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323346432 unmapped: 48635904 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:51.676605+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323354624 unmapped: 48627712 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:52.676766+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330752000 unmapped: 41230336 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e86dd000/0x0/0x4ffc00000, data 0x6296808/0x642f000, compress 0x0/0x0/0x0, omap 0x64781, meta 0x1108b87f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:53.676950+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330752000 unmapped: 41230336 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4005147 data_alloc: 251658240 data_used: 29358187
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:54.677164+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330760192 unmapped: 41222144 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:55.677330+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 41197568 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:56.677488+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 41197568 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:57.677634+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330784768 unmapped: 41197568 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:58.677780+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e86d7000/0x0/0x4ffc00000, data 0x629c808/0x6435000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1108b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.456307411s of 11.655534744s, submitted: 23
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332144640 unmapped: 39837696 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:36:59.677903+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4074035 data_alloc: 251658240 data_used: 29439083
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7bbc000/0x0/0x4ffc00000, data 0x6dae808/0x6f47000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1108b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332161024 unmapped: 39821312 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:00.678033+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332513280 unmapped: 39469056 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:01.678197+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7b1e000/0x0/0x4ffc00000, data 0x6e4c808/0x6fe5000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1108b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331841536 unmapped: 40140800 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:02.678366+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331841536 unmapped: 40140800 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:03.678570+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339968000 unmapped: 32014336 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:04.678705+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4189357 data_alloc: 251658240 data_used: 30649451
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340090880 unmapped: 31891456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:05.678860+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 31358976 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:06.679029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5664000/0x0/0x4ffc00000, data 0x816e808/0x8307000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1222b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 31326208 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:07.679139+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 31326208 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:08.679289+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 31326208 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:09.679437+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202769 data_alloc: 251658240 data_used: 31288427
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 31318016 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:10.679590+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.376032829s of 12.103551865s, submitted: 255
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 31301632 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:11.679756+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e563c000/0x0/0x4ffc00000, data 0x8197808/0x8330000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1222b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 31260672 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:12.679924+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340721664 unmapped: 31260672 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:13.680164+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c742b000 session 0x55d7c4852700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7a23400 session 0x55d7c75e2fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c3161340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 35586048 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:14.680352+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4039986 data_alloc: 234881024 data_used: 23575147
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:15.680540+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 35586048 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:16.680711+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 35586048 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79d0400 session 0x55d7c75e3880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c7787c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c4c87340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:17.680986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8de0000/0x0/0x4ffc00000, data 0x49f47d5/0x4b8b000, compress 0x0/0x0/0x0, omap 0x647b7, meta 0x1222b849), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:18.681188+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:19.681383+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739488 data_alloc: 218103808 data_used: 5214299
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:20.681533+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:21.681663+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.809987068s of 10.907455444s, submitted: 64
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c586f000 session 0x55d7c75e36c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c8ecddc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c586f000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:22.681813+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 323198976 unmapped: 48783360 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c586f000 session 0x55d7c6e9cfc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fcfc00 session 0x55d7c49efa40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c900f6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:23.682040+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 320626688 unmapped: 51355648 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49dee00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea03a000/0x0/0x4ffc00000, data 0x379d792/0x3931000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:24.682287+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:25.682467+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:26.682608+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:27.682857+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:28.683209+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:29.683537+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:30.683793+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:31.683940+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:32.684216+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:33.684439+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:34.684619+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:35.684818+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:36.685218+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:37.685376+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:38.685499+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:39.685658+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:40.685856+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:41.685971+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318750720 unmapped: 53231616 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:42.686103+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:43.686328+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:44.686537+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:45.686680+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:46.686815+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:47.686993+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:48.687187+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:49.687360+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e901c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea605000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c8eccfc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c900e700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544151 data_alloc: 218103808 data_used: 125992
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c4852e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.247217178s of 27.809144974s, submitted: 90
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 318758912 unmapped: 53223424 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7ca866000 session 0x55d7c761a700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:50.687526+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319029248 unmapped: 52953088 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:51.687751+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319029248 unmapped: 52953088 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:52.687913+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49fb6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319029248 unmapped: 52953088 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea4ba000/0x0/0x4ffc00000, data 0x331f730/0x34b2000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75e2380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:53.688133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 52944896 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:54.688318+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560121 data_alloc: 218103808 data_used: 134051
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319037440 unmapped: 52944896 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c9e90540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c874aa80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:55.688465+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 52854784 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:56.688643+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 52854784 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:57.688783+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea494000/0x0/0x4ffc00000, data 0x3343763/0x34d8000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319127552 unmapped: 52854784 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:58.688950+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea494000/0x0/0x4ffc00000, data 0x3343763/0x34d8000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:37:59.689121+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572860 data_alloc: 218103808 data_used: 1285539
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:00.689321+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea494000/0x0/0x4ffc00000, data 0x3343763/0x34d8000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:01.689552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701cc00 session 0x55d7c7683180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c7683880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.242254257s of 12.286597252s, submitted: 17
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49ef6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:02.689704+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:03.689892+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:04.690046+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550077 data_alloc: 218103808 data_used: 134051
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:05.690211+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:06.690345+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea606000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:07.690517+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319135744 unmapped: 52846592 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7787500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c9f3c380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c76676c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:08.690675+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c7683500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c8f6ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:09.690865+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590772 data_alloc: 218103808 data_used: 138112
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:10.691024+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea026000/0x0/0x4ffc00000, data 0x37b3730/0x3946000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:11.691165+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea026000/0x0/0x4ffc00000, data 0x37b3730/0x3946000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:12.691410+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:13.691642+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319463424 unmapped: 52518912 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:14.691819+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590772 data_alloc: 218103808 data_used: 138112
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319471616 unmapped: 52510720 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:15.692021+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319471616 unmapped: 52510720 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:16.692247+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319471616 unmapped: 52510720 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:17.692431+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea026000/0x0/0x4ffc00000, data 0x37b3730/0x3946000, compress 0x0/0x0/0x0, omap 0x64767, meta 0x1222b899), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319471616 unmapped: 52510720 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:18.692686+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319479808 unmapped: 52502528 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c87ba540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c75f81c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:19.692852+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c8f6d500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c90b8a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c761ae00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.525907516s of 17.631921768s, submitted: 36
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643843 data_alloc: 218103808 data_used: 138112
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c5851c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c7666e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c90b9dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319946752 unmapped: 52035584 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c7666e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e901c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75f8000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:20.693015+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c6e9ce00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c7786fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319963136 unmapped: 52019200 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9a19000/0x0/0x4ffc00000, data 0x3dbf740/0x3f53000, compress 0x0/0x0/0x0, omap 0x649de, meta 0x1222b622), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:21.693136+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319963136 unmapped: 52019200 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6e9d180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:22.693278+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319963136 unmapped: 52019200 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c8f6cfc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:23.693445+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c90b9dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319963136 unmapped: 52019200 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c9e5d400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c9e5d400 session 0x55d7c8ecddc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:24.693552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e99f3000/0x0/0x4ffc00000, data 0x3de3760/0x3f79000, compress 0x0/0x0/0x0, omap 0x649de, meta 0x1222b622), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3682613 data_alloc: 218103808 data_used: 6217104
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 52011008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:25.693724+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 52011008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:26.693884+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e99f3000/0x0/0x4ffc00000, data 0x3de3760/0x3f79000, compress 0x0/0x0/0x0, omap 0x649de, meta 0x1222b622), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 52011008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:27.694043+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 52011008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:28.694158+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c761a700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c49fafc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319971328 unmapped: 52011008 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:29.694287+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c90b8a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea000000/0x0/0x4ffc00000, data 0x37d7750/0x396c000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x1222b47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638646 data_alloc: 218103808 data_used: 6216576
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 52699136 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:30.694440+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 52699136 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:31.694727+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 52699136 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:32.694912+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 52699136 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:33.695150+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 319283200 unmapped: 52699136 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.463449478s of 14.645942688s, submitted: 64
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:34.695330+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3672484 data_alloc: 218103808 data_used: 6216576
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324386816 unmapped: 47595520 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4ea025000/0x0/0x4ffc00000, data 0x37b3740/0x3947000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x1222b47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:35.695495+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326729728 unmapped: 45252608 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:36.695724+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:37.695938+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:38.696168+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:39.696376+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3708136 data_alloc: 218103808 data_used: 8256384
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8563000/0x0/0x4ffc00000, data 0x40d5740/0x4269000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:40.696616+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:41.696873+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326778880 unmapped: 45203456 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:42.697024+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:43.697246+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8561000/0x0/0x4ffc00000, data 0x40d7740/0x426b000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:44.697486+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705144 data_alloc: 218103808 data_used: 8264576
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:45.697670+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:46.697926+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8561000/0x0/0x4ffc00000, data 0x40d7740/0x426b000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:47.698155+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326713344 unmapped: 45268992 heap: 371982336 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.229008675s of 13.841867447s, submitted: 98
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c8ecda40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c7fd0540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c2800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c2800 session 0x55d7c761bdc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75e2380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:48.698317+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c9e90540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c75e3880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c7683c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c9e5d400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c9e5d400 session 0x55d7c4b26000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c663dc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8560000/0x0/0x4ffc00000, data 0x40d8740/0x426c000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:49.698515+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774526 data_alloc: 218103808 data_used: 8264576
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:50.698887+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:51.699052+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:52.699249+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7b2b000/0x0/0x4ffc00000, data 0x4b0b7b2/0x4ca1000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:53.699445+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:54.699636+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c9f3ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774526 data_alloc: 218103808 data_used: 8264576
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326737920 unmapped: 52600832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c87bae00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:55.699811+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c8ecd6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cd441800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326754304 unmapped: 52584448 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cd441800 session 0x55d7c49dea80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:56.699964+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c49fb6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c7683880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c3161340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c90b9340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ce000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cde7b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cde7b000 session 0x55d7c7994a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c8f6d6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 52436992 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c9f3c8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c9f3c380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c7683880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:57.700185+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 52436992 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:58.701171+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e76e0000/0x0/0x4ffc00000, data 0x4f537f5/0x50ec000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 52436992 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:38:59.701436+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3809400 data_alloc: 218103808 data_used: 8266656
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326901760 unmapped: 52436992 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:00.701573+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:01.702343+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:02.702584+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e76e0000/0x0/0x4ffc00000, data 0x4f537f5/0x50ec000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:03.702815+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cde7b000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.440730095s of 15.993132591s, submitted: 66
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cde7b000 session 0x55d7c7667340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:04.703152+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3871916 data_alloc: 234881024 data_used: 18861984
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:05.703394+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 326909952 unmapped: 52428800 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:06.703572+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e76e0000/0x0/0x4ffc00000, data 0x4f537f5/0x50ec000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 328499200 unmapped: 50839552 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:07.703860+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 328499200 unmapped: 50839552 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:08.704093+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 328499200 unmapped: 50839552 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:09.704242+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3955982 data_alloc: 234881024 data_used: 23140256
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332169216 unmapped: 47169536 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:10.704455+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 47079424 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:11.704620+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6d65000/0x0/0x4ffc00000, data 0x58c67f5/0x5a5f000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332259328 unmapped: 47079424 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:12.704915+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332324864 unmapped: 47013888 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:13.705276+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332324864 unmapped: 47013888 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:14.705596+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972574 data_alloc: 234881024 data_used: 23547808
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332324864 unmapped: 47013888 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:15.705793+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6d65000/0x0/0x4ffc00000, data 0x58c67f5/0x5a5f000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332324864 unmapped: 47013888 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.888026237s of 12.157597542s, submitted: 105
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:16.705950+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334151680 unmapped: 45187072 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:17.706196+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:18.706399+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:19.706717+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e64c2000/0x0/0x4ffc00000, data 0x61697f5/0x6302000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4031130 data_alloc: 234881024 data_used: 23859616
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:20.706915+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e64c2000/0x0/0x4ffc00000, data 0x61697f5/0x6302000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:21.707157+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:22.707339+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:23.707552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334798848 unmapped: 44539904 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c7439180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:24.707721+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ce000 session 0x55d7c4c876c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c78c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3839073 data_alloc: 234881024 data_used: 12908928
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334864384 unmapped: 44474368 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e64c5000/0x0/0x4ffc00000, data 0x616e7f5/0x6307000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:25.708006+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334864384 unmapped: 44474368 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:26.781608+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.084453583s of 10.398511887s, submitted: 147
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c90b8a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c4b276c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331874304 unmapped: 47464448 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:27.781893+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331874304 unmapped: 47464448 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c7667180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e2fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:28.782102+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7787500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e855e000/0x0/0x4ffc00000, data 0x40da740/0x426e000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:29.782805+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599180 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:30.783138+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:31.783366+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:32.784019+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:33.784256+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:34.784357+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599180 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:35.784501+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:36.784859+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:37.785458+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:38.785847+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:39.786187+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599180 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:40.786346+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322519040 unmapped: 56819712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:41.786516+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:42.786691+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:43.786933+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:44.787161+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599180 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:45.787402+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:46.787603+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:47.787799+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:48.787996+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322527232 unmapped: 56811520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:49.788161+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3599180 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:50.788364+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.821939468s of 23.924440384s, submitted: 54
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c9e90700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:51.788624+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:52.788821+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:53.789091+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c75e36c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:54.789222+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c75f8000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654902 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:55.789422+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c761a700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c900efc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322535424 unmapped: 56803328 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:56.789578+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:57.789723+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322543616 unmapped: 56795136 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:58.789907+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322551808 unmapped: 56786944 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:39:59.790240+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322551808 unmapped: 56786944 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654902 data_alloc: 218103808 data_used: 146171
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:00.790401+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322551808 unmapped: 56786944 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:01.790554+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322551808 unmapped: 56786944 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:02.790687+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 322551808 unmapped: 56786944 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:03.790878+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 55156736 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:04.791062+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 55156736 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704566 data_alloc: 218103808 data_used: 8566523
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:05.791305+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 55156736 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:06.791483+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324182016 unmapped: 55156736 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8bfe000/0x0/0x4ffc00000, data 0x3a3b730/0x3bce000, compress 0x0/0x0/0x0, omap 0x64b85, meta 0x133cb47b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.225126266s of 16.295352936s, submitted: 15
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c900f6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ce000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ce000 session 0x55d7c87ba1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49df6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4c86fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c7786700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:07.791678+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324509696 unmapped: 54829056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:08.791931+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324509696 unmapped: 54829056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:09.792100+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324509696 unmapped: 54829056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7ecf000/0x0/0x4ffc00000, data 0x476a730/0x48fd000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783481 data_alloc: 218103808 data_used: 8566523
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:10.792221+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324509696 unmapped: 54829056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:11.792363+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 324509696 unmapped: 54829056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:12.792531+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 328540160 unmapped: 50798592 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c75e3c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c6e9da40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7787180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c9e916c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c8f6d500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c90b81c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2cc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2cc00 session 0x55d7c87ba8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c49de1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:13.792734+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330006528 unmapped: 49332224 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c4ebfa40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:14.792927+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330006528 unmapped: 49332224 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c6df5180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3924729 data_alloc: 218103808 data_used: 9149179
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6a3b000/0x0/0x4ffc00000, data 0x5bf5740/0x5d89000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:15.793030+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330006528 unmapped: 49332224 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c0800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c0800 session 0x55d7c4ebfa40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:16.793160+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 329334784 unmapped: 50003968 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.858820915s of 10.197806358s, submitted: 134
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:17.793349+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 329334784 unmapped: 50003968 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:18.793551+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 329334784 unmapped: 50003968 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:19.793679+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c6e9ce00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 329318400 unmapped: 50020352 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4001815 data_alloc: 234881024 data_used: 22887163
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7020c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7020c00 session 0x55d7c874a8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:20.793841+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 44531712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6a21000/0x0/0x4ffc00000, data 0x5c16750/0x5dab000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c49de8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6a21000/0x0/0x4ffc00000, data 0x5c16750/0x5dab000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c7787c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:21.794019+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 334807040 unmapped: 44531712 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:22.794119+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 41484288 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:23.794297+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 40427520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:24.794471+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 40427520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051095 data_alloc: 251658240 data_used: 30593204
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:25.794596+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 40427520 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:26.794734+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6a21000/0x0/0x4ffc00000, data 0x5c16750/0x5dab000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 40394752 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:27.794908+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 40394752 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:28.795081+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338944000 unmapped: 40394752 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.271891594s of 12.286545753s, submitted: 5
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:29.795215+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345481216 unmapped: 33857536 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5ff2000/0x0/0x4ffc00000, data 0x6645750/0x67da000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4125081 data_alloc: 251658240 data_used: 32477364
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:30.795365+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 32727040 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:31.795521+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 32604160 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:32.795665+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 30498816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:33.795819+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 30441472 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a39000/0x0/0x4ffc00000, data 0x6bf6750/0x6d8b000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:34.796757+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 30433280 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4168829 data_alloc: 251658240 data_used: 33067856
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:35.796886+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348905472 unmapped: 30433280 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:36.797038+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 31105024 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:37.797150+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 31105024 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:38.797268+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 31096832 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x6bf9750/0x6d8e000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:39.797429+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.819959641s of 10.186590195s, submitted: 194
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 31055872 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4162685 data_alloc: 251658240 data_used: 33076048
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:40.797561+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 31055872 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:41.797692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 31055872 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a1d000/0x0/0x4ffc00000, data 0x6c1a750/0x6daf000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:42.797845+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 31023104 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a1d000/0x0/0x4ffc00000, data 0x6c1a750/0x6daf000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c78c01c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7020c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7020c00 session 0x55d7c49eec40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c874ba40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c4b27dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:43.798084+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7caa2d800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7caa2d800 session 0x55d7c7fd0700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 30892032 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5a1d000/0x0/0x4ffc00000, data 0x6c1a750/0x6daf000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:44.798299+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 30892032 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e5392000/0x0/0x4ffc00000, data 0x72a4750/0x7439000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4208587 data_alloc: 251658240 data_used: 33076048
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:45.798426+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 30892032 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:46.798572+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 30883840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:47.798729+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 30883840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7439a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:48.798907+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 30883840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c7682fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7020c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c7682000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7020c00 session 0x55d7c90b8540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:49.799110+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 34938880 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c8f6cc40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.352549553s of 10.437029839s, submitted: 32
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c4b26c40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:50.799269+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087259 data_alloc: 234881024 data_used: 23783752
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7020c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 34938880 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e60af000/0x0/0x4ffc00000, data 0x6587773/0x671d000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:51.799535+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 34938880 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e60af000/0x0/0x4ffc00000, data 0x6587773/0x671d000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:52.799673+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 33906688 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:53.799860+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e60af000/0x0/0x4ffc00000, data 0x6587773/0x671d000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345432064 unmapped: 33906688 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c75e2380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:54.800018+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c764a380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:55.800212+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3947571 data_alloc: 234881024 data_used: 21354304
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7593000/0x0/0x4ffc00000, data 0x50a3773/0x5239000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:56.800401+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7593000/0x0/0x4ffc00000, data 0x50a3773/0x5239000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:57.800556+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7593000/0x0/0x4ffc00000, data 0x50a3773/0x5239000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:58.800729+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:40:59.800921+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:00.801182+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3947571 data_alloc: 234881024 data_used: 21354304
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.249159813s of 11.292816162s, submitted: 27
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:01.801344+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7593000/0x0/0x4ffc00000, data 0x50a3773/0x5239000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 33882112 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:02.801511+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349503488 unmapped: 29835264 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:03.801688+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70c5000/0x0/0x4ffc00000, data 0x5569773/0x56ff000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [0,0,0,1])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:04.801859+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:05.802134+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3986207 data_alloc: 234881024 data_used: 22600512
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:06.802295+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:07.802455+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70b3000/0x0/0x4ffc00000, data 0x5582773/0x5718000, compress 0x0/0x0/0x0, omap 0x64dd5, meta 0x133cb22b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:08.802644+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:09.802838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:10.802997+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3986223 data_alloc: 234881024 data_used: 22600512
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.870260239s of 10.025369644s, submitted: 76
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:11.803132+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:12.803271+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70b4000/0x0/0x4ffc00000, data 0x5582773/0x5718000, compress 0x0/0x0/0x0, omap 0x64e90, meta 0x133cb170), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:13.803488+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:14.803628+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70b4000/0x0/0x4ffc00000, data 0x5582773/0x5718000, compress 0x0/0x0/0x0, omap 0x64e90, meta 0x133cb170), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 30507008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70b4000/0x0/0x4ffc00000, data 0x5582773/0x5718000, compress 0x0/0x0/0x0, omap 0x64e90, meta 0x133cb170), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49fba40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7020c00 session 0x55d7c49eefc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:15.803767+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985131 data_alloc: 234881024 data_used: 22690624
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344850432 unmapped: 34488320 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7fd0fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:16.803926+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 34480128 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:17.804177+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 34480128 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c764afc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c6e9c380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c4c86e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c8f6c700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:18.804341+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c7666540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4b27880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c8f6c8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c7438fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c4b26540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 34308096 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:19.804575+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 34308096 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7344000/0x0/0x4ffc00000, data 0x52f3750/0x5488000, compress 0x0/0x0/0x0, omap 0x64f4b, meta 0x133cb0b5), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:20.804746+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3935980 data_alloc: 234881024 data_used: 15891229
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 34308096 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75e3c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c4bd4540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:21.804906+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.983490944s of 10.210596085s, submitted: 56
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 47808512 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c761a700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:22.805080+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331530240 unmapped: 47808512 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:23.805276+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c900efc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331677696 unmapped: 47661056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4e66800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:24.805408+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331677696 unmapped: 47661056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:25.805571+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3707755 data_alloc: 218103808 data_used: 125741
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331677696 unmapped: 47661056 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:26.805692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:27.805825+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:28.806029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.806171+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.806334+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762155 data_alloc: 218103808 data_used: 9302829
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.806486+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.806614+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.806827+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.807000+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.807119+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762539 data_alloc: 218103808 data_used: 9315117
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.527433395s of 14.586800575s, submitted: 31
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.807259+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 42696704 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.807399+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.807734+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.807911+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.808130+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841745 data_alloc: 234881024 data_used: 10449709
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.808311+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.808485+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.808676+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.808847+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.809019+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841745 data_alloc: 234881024 data_used: 10449709
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.809160+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.809320+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.809482+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.809692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c7995a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c8f6c1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.967909813s of 14.256855965s, submitted: 73
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.809847+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8021000/0x0/0x4ffc00000, data 0x4618730/0x47ab000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666913 data_alloc: 218103808 data_used: 230173
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7786000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.809986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.810129+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.810365+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.810599+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.810763+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659337 data_alloc: 218103808 data_used: 125725
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.810938+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331071488 unmapped: 48267264 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [0,0,0,0,0,2,10,6])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.811152+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7fd0e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 47931392 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c75f9dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c764a540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d25000/0x0/0x4ffc00000, data 0x3913759/0x3aa7000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.811320+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49df180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c74388c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.811507+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.811690+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717291 data_alloc: 218103808 data_used: 125725
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.811853+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.812015+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.812243+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.812420+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.812601+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717291 data_alloc: 218103808 data_used: 125725
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.812821+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.813010+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c8ecc380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.813180+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c8ecd880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.813361+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c3161c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.453115463s of 19.530176163s, submitted: 62
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c87ba700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.813499+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718017 data_alloc: 218103808 data_used: 126237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.813645+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.813807+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.813975+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.814137+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.814312+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758209 data_alloc: 218103808 data_used: 6841629
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.814497+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.814645+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.814810+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.814960+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.815146+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758209 data_alloc: 218103808 data_used: 6841629
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.815318+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.700842857s of 11.707605362s, submitted: 3
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 40738816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.815526+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.815786+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.816022+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.816214+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854727 data_alloc: 218103808 data_used: 8062237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.816373+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c52000/0x0/0x4ffc00000, data 0x49e57a2/0x4b7a000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.816527+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.816780+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.816962+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.817148+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855879 data_alloc: 218103808 data_used: 8062237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.817328+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.817512+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.817717+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 sudo[397874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.817921+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.818159+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855879 data_alloc: 218103808 data_used: 8062237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.818371+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.818574+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c90b9500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c49de380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c7438e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.913228035s of 16.233089447s, submitted: 138
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebf340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c8f6ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c6e9cfc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6c380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 45162496 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c4c868c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.818777+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.818953+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.819127+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cc000/0x0/0x4ffc00000, data 0x5369814/0x5500000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917894 data_alloc: 218103808 data_used: 8062237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.819293+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.819463+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7fd0540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.819700+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cc000/0x0/0x4ffc00000, data 0x5369814/0x5500000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.819897+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c874a700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6dc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.820111+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49e2c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3919496 data_alloc: 218103808 data_used: 8062253
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.820291+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.820468+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.820736+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.820915+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.821116+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3976456 data_alloc: 234881024 data_used: 17665837
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.821263+0000)
Feb 28 11:04:12 compute-0 sudo[397874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.821434+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.821638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.763391495s of 15.905977249s, submitted: 38
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.821788+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.821965+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3977032 data_alloc: 234881024 data_used: 17665837
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.822158+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 44539904 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.822289+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 41877504 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.822490+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.822682+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.822879+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4060418 data_alloc: 234881024 data_used: 18100013
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.823133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6509000/0x0/0x4ffc00000, data 0x612b824/0x62c3000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 41836544 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.823333+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 41836544 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.823655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6509000/0x0/0x4ffc00000, data 0x612b824/0x62c3000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.108784676s of 10.460398674s, submitted: 113
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.823838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.824010+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4055930 data_alloc: 234881024 data_used: 18104109
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.824149+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 41771008 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.824301+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e64ea000/0x0/0x4ffc00000, data 0x614a824/0x62e2000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 41771008 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49e2c00 session 0x55d7c663dc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c9e91a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.824463+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.824638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.824852+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869287 data_alloc: 218103808 data_used: 8062237
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.825020+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c20000/0x0/0x4ffc00000, data 0x4a177a2/0x4bac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7cbf4a8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c7786700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.825222+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c75f8380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.825478+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.825670+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.825895+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.826143+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.826341+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.826530+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.826687+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.826850+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.827032+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.827225+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.827461+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.827634+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.827810+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.828007+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.828204+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.828433+0000)
Feb 28 11:04:12 compute-0 sudo[397874]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.828616+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.828831+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.829031+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.829270+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.829515+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.829690+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.829866+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.830034+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.830213+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.830357+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.830536+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.830720+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.830912+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.831181+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.831380+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.831566+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.831757+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.972812653s of 42.093124390s, submitted: 71
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 173K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5287 writes, 21K keys, 5287 commit groups, 1.0 writes per commit group, ingest: 24.46 MB, 0.04 MB/s
                                           Interval WAL: 5286 writes, 2100 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c78c0540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c8f6c540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.831918+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c9f3cc40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339763200 unmapped: 44900352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c6e9c700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c49dea80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.832141+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.832346+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.832526+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8c000/0x0/0x4ffc00000, data 0x39ad730/0x3b40000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.832695+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c75e3500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738379 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.832852+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c8eccfc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4bd4fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c7786a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.833033+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.833221+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.833368+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.833501+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788369 data_alloc: 218103808 data_used: 8284761
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.833651+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.833810+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.834005+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.834133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.834287+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788369 data_alloc: 218103808 data_used: 8284761
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.834458+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.834633+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.834915+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.298162460s of 17.396247864s, submitted: 25
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.835081+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 42049536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.835276+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.835437+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.835590+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.835794+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.835985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.836150+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.836320+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.836475+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.836699+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.836928+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.837124+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6d6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.837273+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c900f180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c761aa80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c49df500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.961489677s of 13.127732277s, submitted: 90
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c58941c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cd67f400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cd67f400 session 0x55d7c764a380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49eefc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6e9c380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c764aa80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.837471+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.837625+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.837826+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.838000+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d54000/0x0/0x4ffc00000, data 0x48e27b2/0x4a78000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893462 data_alloc: 218103808 data_used: 8731225
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.838155+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c761ae00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.838306+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 40550400 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c8be6400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.838530+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 40550400 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.838917+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.839146+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d53000/0x0/0x4ffc00000, data 0x48e27d5/0x4a79000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932792 data_alloc: 234881024 data_used: 13604969
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.839319+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.839501+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.839706+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.839849+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.839973+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d53000/0x0/0x4ffc00000, data 0x48e27d5/0x4a79000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932792 data_alloc: 234881024 data_used: 13604969
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.840118+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.840267+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.840522+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.633998871s of 16.749742508s, submitted: 44
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345743360 unmapped: 38920192 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.840700+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 35405824 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.840815+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70c2000/0x0/0x4ffc00000, data 0x556d7d5/0x5704000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [0,0,0,15])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022310 data_alloc: 234881024 data_used: 14370921
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.841018+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.841213+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.841433+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.841592+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7017000/0x0/0x4ffc00000, data 0x561d7d5/0x57b4000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.841775+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023030 data_alloc: 234881024 data_used: 14370921
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.842006+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.842139+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6ff4000/0x0/0x4ffc00000, data 0x56417d5/0x57d8000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.842364+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6ff4000/0x0/0x4ffc00000, data 0x56417d5/0x57d8000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.842532+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.842682+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.561341286s of 12.556861877s, submitted: 142
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c8be6400 session 0x55d7c900fdc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c87bbdc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.842841+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3866536 data_alloc: 218103808 data_used: 7380569
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7667a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 40321024 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8371000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.843025+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8371000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 40321024 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.843172+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.843467+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.843634+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.843799+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3864652 data_alloc: 218103808 data_used: 7384532
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8377000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c4b26000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c764b340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.843965+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.844198+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.844400+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.844630+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.844794+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.845003+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.845189+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.845405+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.845590+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.845748+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.845927+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.846131+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.846310+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.846450+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.846596+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.846813+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.846996+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.847178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.847339+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.847528+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.847712+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.847940+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.848162+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.848344+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.848515+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.848748+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.848942+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.849132+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.849374+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.849540+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.849692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.849911+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.850147+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.850325+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.287815094s of 39.636169434s, submitted: 172
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6d6d340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7439500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 43008000 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c6e9d880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c764b6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c78c1340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.850511+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788426 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.850712+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.850899+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.851127+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.851296+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.851474+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788426 data_alloc: 218103808 data_used: 141780
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.851662+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c9e91dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.851850+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 43139072 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.852120+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.852288+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.852456+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845006 data_alloc: 234881024 data_used: 9724884
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.852621+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.852788+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.852984+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.853151+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.853375+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845006 data_alloc: 234881024 data_used: 9724884
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.853518+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.853748+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.693708420s of 17.795551300s, submitted: 33
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 39698432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.853906+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e81a5000/0x0/0x4ffc00000, data 0x4493730/0x4626000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.854108+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8118000/0x0/0x4ffc00000, data 0x451f730/0x46b2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.854253+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3909802 data_alloc: 234881024 data_used: 11088852
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.854466+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.854650+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.854821+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.855023+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8118000/0x0/0x4ffc00000, data 0x451f730/0x46b2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.855187+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902194 data_alloc: 234881024 data_used: 11088852
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.855318+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.855508+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.855706+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.855904+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.856133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902194 data_alloc: 234881024 data_used: 11088852
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.856300+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.856435+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.856580+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c90b96c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c58956c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7fd1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.856748+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c87bae00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.947948456s of 17.117033005s, submitted: 106
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [0,0,0,0,0,10])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7682380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c4bd41c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c7786fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c75e2540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.856919+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980343 data_alloc: 234881024 data_used: 11088852
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.857085+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.857297+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.857474+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7476000/0x0/0x4ffc00000, data 0x51c17a1/0x5356000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.857702+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7683500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.857879+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980343 data_alloc: 234881024 data_used: 11088852
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 46776320 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c90b8fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c4bd5dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.858041+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6f25000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 46776320 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.858167+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 43737088 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.858355+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.858567+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.858714+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4060353 data_alloc: 234881024 data_used: 23154988
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.858853+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.859029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.859164+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.859342+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.794326782s of 14.919716835s, submitted: 39
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.859499+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061001 data_alloc: 234881024 data_used: 23154988
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.859783+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:18.859931+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e62af000/0x0/0x4ffc00000, data 0x51e67b1/0x537c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 35725312 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.860179+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.860392+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.860594+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4163885 data_alloc: 234881024 data_used: 23983404
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.860771+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.860970+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.861135+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.861331+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.861514+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4158901 data_alloc: 234881024 data_used: 23987500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.861680+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53bd000/0x0/0x4ffc00000, data 0x60d97b1/0x626f000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.467883110s of 12.747834206s, submitted: 114
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.861897+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.862029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c49de1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6f25000 session 0x55d7c49de8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.862143+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7787a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.862373+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53bd000/0x0/0x4ffc00000, data 0x60d97b1/0x626f000, compress 0x0/0x0/0x0, omap 0x64c4a, meta 0x1456b3b6), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3918190 data_alloc: 234881024 data_used: 10173725
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x4547730/0x46da000, compress 0x0/0x0/0x0, omap 0x64c4a, meta 0x1456b3b6), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.862603+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.862895+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c4852e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.863056+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7439c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.863312+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.863503+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.863668+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.863854+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.864136+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.864348+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.864563+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.864756+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.865170+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.865369+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 40960000 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.865585+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 40960000 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.865767+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.865982+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.866185+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.866405+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.866661+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.866870+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.867062+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.867394+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.867685+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.867852+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.867985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.868134+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.868296+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.868460+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.868632+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.868801+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.868932+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.869146+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.869345+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.869515+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.869812+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.869986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.870219+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.870409+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.870620+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 40919040 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.870763+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c9e90e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7438fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 40919040 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e28c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.083389282s of 44.192253113s, submitted: 58
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.870866+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c8ecd500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6f25000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6f25000 session 0x55d7c874b6c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c90b8380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9f3c700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c5850fc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.871027+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82c5000/0x0/0x4ffc00000, data 0x31d2769/0x3367000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.871248+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.871393+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.871548+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780785 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.871718+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.871935+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7ebd000/0x0/0x4ffc00000, data 0x35da7a2/0x376f000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.872107+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.872315+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7ebd000/0x0/0x4ffc00000, data 0x35da7a2/0x376f000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.872473+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c4ebfdc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c5850000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782265 data_alloc: 218103808 data_used: 129723
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebe540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 45522944 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.872626+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.035045624s of 10.393772125s, submitted: 42
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 45522944 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.872782+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.872907+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.873037+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.873121+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3807869 data_alloc: 218103808 data_used: 4358859
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.873286+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.873451+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.873571+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.873733+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.873941+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3807869 data_alloc: 218103808 data_used: 4358859
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.874167+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.973699570s of 10.002538681s, submitted: 7
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.874348+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 43253760 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.874477+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.874612+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.874764+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855323 data_alloc: 218103808 data_used: 4682443
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.875026+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.875344+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.875502+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.875818+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.875985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855323 data_alloc: 218103808 data_used: 4682443
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread fragmentation_score=0.004223 took=0.000069s
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.876456+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.051095963s of 10.274992943s, submitted: 72
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.876696+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 43556864 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.876975+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 43548672 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.877210+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 43548672 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c900e380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7439180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.877366+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 43540480 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3850787 data_alloc: 218103808 data_used: 4670155
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7021400 session 0x55d7c7667880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6781000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.877502+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 43532288 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c7021400 session 0x55d7c49dec40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c5850700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c5850c40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.877667+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 41639936 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75e2c40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 286 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c78c1a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.877838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 41910272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 286 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c49ef500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 286 heartbeat osd_stat(store_statfs(0x4e5ce6000/0x0/0x4ffc00000, data 0x460bf3e/0x47a4000, compress 0x0/0x0/0x0, omap 0x64fd0, meta 0x1570b030), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.878022+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 41885696 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c90b81c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e3a40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7667180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.878171+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3955705 data_alloc: 234881024 data_used: 10195675
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.878453+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.878761+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.491305351s of 10.884724617s, submitted: 76
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c7021400 session 0x55d7c9f3d340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e5200000/0x0/0x4ffc00000, data 0x3c69a84/0x3e01000, compress 0x0/0x0/0x0, omap 0x655a0, meta 0x168aaa60), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.878963+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e5200000/0x0/0x4ffc00000, data 0x3c69a84/0x3e01000, compress 0x0/0x0/0x0, omap 0x655a0, meta 0x168aaa60), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.879178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:56.879369+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3835639 data_alloc: 218103808 data_used: 129739
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.879591+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.879760+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.880047+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.880317+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54e6000/0x0/0x4ffc00000, data 0x3c6b503/0x3e04000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.880447+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c87bbc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e901c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7fd0540
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c8f6d500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c764a8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c6df5dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c4ebe1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebe000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840817 data_alloc: 218103808 data_used: 133737
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4ebe700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.880577+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 375349248 unmapped: 24297472 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c9e91340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.895477+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7667c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e496d000/0x0/0x4ffc00000, data 0x47e4575/0x497f000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c5895500
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.895656+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c764b180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.735244751s of 11.266489029s, submitted: 89
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c764bdc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.895838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca867800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.896008+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368689152 unmapped: 30957568 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e496d000/0x0/0x4ffc00000, data 0x47e4575/0x497f000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 289 ms_handle_reset con 0x55d7ca867800 session 0x55d7c3161180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870398 data_alloc: 218103808 data_used: 2344553
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.896131+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.896322+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.896513+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.896713+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.896910+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e53fb000/0x0/0x4ffc00000, data 0x3d54155/0x3eef000, compress 0x0/0x0/0x0, omap 0x65a6c, meta 0x168aa594), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879998 data_alloc: 218103808 data_used: 3958377
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.897098+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.897477+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.897642+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.897813+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.897995+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.610133171s of 11.691160202s, submitted: 38
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3889116 data_alloc: 218103808 data_used: 4593257
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.898163+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.898346+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.898511+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.898651+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.898858+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897742 data_alloc: 218103808 data_used: 5477993
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.899043+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.899219+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f5000/0x0/0x4ffc00000, data 0x3d5abd4/0x3ef7000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.899359+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.899539+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c78c16c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c49eee00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.899710+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c49de000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.899930+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.900178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.900360+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.900502+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.900681+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.900814+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.901034+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.901203+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.901387+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.901613+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.901807+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.902019+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.902233+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.902430+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.902596+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.902733+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.902928+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.903160+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.903312+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.903531+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.903767+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.903972+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.904178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.904339+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.904514+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.904723+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.905003+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.905214+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.905443+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.905638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc ms_handle_reset ms_handle_reset con 0x55d7c6fb3c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: get_auth_request con 0x55d7ca867800 auth_method 0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.905814+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.906167+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.906417+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c42ec400 session 0x55d7c49fbc00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.906712+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.906887+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.907147+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 45006848 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.907370+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 45006848 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c76836c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c74396c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7fd0c40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6df48c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.907615+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.527797699s of 47.597843170s, submitted: 41
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c58956c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c4b26e00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7ca866800 session 0x55d7c78c0a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7ca866800 session 0x55d7c87bac40
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7438a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5cf6000/0x0/0x4ffc00000, data 0x3459b9b/0x35f6000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.907806+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.908000+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4ebe380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.908155+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3823215 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5cf6000/0x0/0x4ffc00000, data 0x3459bd4/0x35f6000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.908280+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.908435+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.908655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.908881+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c900e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c4bd4700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.909017+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 44580864 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.909325+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 44580864 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.909455+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.909742+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.909971+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.910169+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.910355+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.910531+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.910703+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.910835+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.911204+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.911457+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.911656+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.911834+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.912004+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.912194+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.912351+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.912509+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.912695+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.912876+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.913141+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.913347+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.913488+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.913752+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c70bc400 session 0x55d7c7fd0380
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.913942+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.914185+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.919256+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.923276+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.925125+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.925491+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.926170+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 44539904 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.926577+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.908409119s of 39.092102051s, submitted: 54
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.926916+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.927451+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.927796+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.928449+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.929218+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.929609+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.930000+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.930294+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.930889+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.931801+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.933251+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.933481+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.933730+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 44498944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.933898+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 44498944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.934117+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.934262+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.934446+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.934604+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.934758+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.934985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.935177+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.935392+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.935597+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.935815+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.935991+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.936138+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.936320+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.936518+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.936694+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.937041+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.807094574s of 30.837282181s, submitted: 18
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.937282+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 290 handle_osd_map epochs [290,291], i have 291, src has [1,291]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6df5dc0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.937446+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.937625+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.937872+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561317 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.938195+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.938440+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.938620+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.938798+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.938959+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561317 data_alloc: 218103808 data_used: 141717
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.939149+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 43548672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.939313+0000)
Feb 28 11:04:12 compute-0 sudo[397899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 43548672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.939608+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.939834+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.940040+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.940266+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.940464+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.940642+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.940827+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.940978+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.941257+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.941499+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.941727+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.941932+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.942179+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.942402+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.942630+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.942804+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.942995+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.943200+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.943395+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.943581+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.943749+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.943932+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.944139+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.944319+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.944486+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.944647+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.944796+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.944956+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.945161+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.945326+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 sudo[397899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.945495+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.945641+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.945806+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.945942+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.946092+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.946239+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.946396+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.946542+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.946759+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.946898+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.947115+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.947317+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.947465+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.947691+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.947878+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.948037+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.948196+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.948378+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.948604+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.948776+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.948939+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.949146+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.949552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.949796+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.950003+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.950195+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.950375+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.105235+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.105462+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.105638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.105800+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.105964+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.106165+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.106497+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.106785+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.107104+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.107284+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.107422+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.107682+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.107910+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.108134+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.108339+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.108507+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.108819+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.109200+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.109364+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.109633+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.109817+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.110182+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.110375+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.110693+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.110875+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.111198+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.111410+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.111570+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.111742+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.111969+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.112201+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.112911+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.113061+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.113274+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.113444+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356261888 unmapped: 43384832 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.113599+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356261888 unmapped: 43384832 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.113782+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.113965+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.114166+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.114319+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.114521+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.114739+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.114935+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.115152+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.115438+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.115634+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.115862+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.116032+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.116231+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.116444+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.116682+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c7fd1180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 ms_handle_reset con 0x55d7ca866800 session 0x55d7c4ebe1c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.116837+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.117002+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.117216+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.117459+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.117703+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.117928+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.119166+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.120210+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.121044+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c87f0c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 127.346313477s of 127.446342468s, submitted: 62
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.121244+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 293 ms_handle_reset con 0x55d7c87f0c00 session 0x55d7c7439180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565624 data_alloc: 218103808 data_used: 145759
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.121933+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1e1d68/0x37d000, compress 0x0/0x0/0x0, omap 0x66b36, meta 0x168a94ca), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.122525+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cd400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.123029+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356360192 unmapped: 43286528 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 ms_handle_reset con 0x55d7c49cd400 session 0x55d7c77876c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.123462+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.123927+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566996 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.124338+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.124666+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.124985+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.125274+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.125401+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566996 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.125690+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.699525833s of 11.839026451s, submitted: 63
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.125857+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.126128+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.126311+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.126483+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571508 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.126649+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8f67000/0x0/0x4ffc00000, data 0x1e53c4/0x383000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.126858+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e7f69000/0x0/0x4ffc00000, data 0x11e53c4/0x1383000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.127039+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.127208+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e7769000/0x0/0x4ffc00000, data 0x19e53c4/0x1b83000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 295 handle_osd_map epochs [295,296], i have 296, src has [1,296]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c764a8c0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.127343+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.127552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.127705+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.128160+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.128427+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 61505536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.128595+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 61505536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.128782+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.128977+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.129309+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.129509+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.129708+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.129986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.130148+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.130360+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.130622+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.130816+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.131010+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.131191+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.131362+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.131631+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.131826+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.132168+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.132351+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.132540+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.132785+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.132981+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.133154+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.133337+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.133516+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.133749+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.133990+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.134224+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.134410+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.134577+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.134758+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.134947+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.135195+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.135416+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.135664+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.135989+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.136190+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.136440+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.136637+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.136823+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.136975+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.137138+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.137269+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.137438+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.137646+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.137917+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.138123+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.138404+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.138585+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.138832+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.139005+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.139153+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.139381+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.139657+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.139838+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.140095+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.140244+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.140397+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.140621+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.140849+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2352 writes, 8930 keys, 2352 commit groups, 1.0 writes per commit group, ingest: 9.76 MB, 0.02 MB/s
                                           Interval WAL: 2353 writes, 958 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.141187+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.141407+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.141655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.141839+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.142003+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.142189+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.142349+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.142558+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.142783+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 61382656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.143028+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 61382656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.143247+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 61374464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.143463+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 61374464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.143664+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.143900+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.144201+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.144413+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.144604+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.144811+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.145023+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.145253+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.145439+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.145648+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.145849+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.146091+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 61349888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.146375+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.146769+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.146978+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.147244+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.147466+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.147831+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.148037+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.148352+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.148604+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.148882+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.149043+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.149260+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.149409+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.149675+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.149866+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.150150+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.150355+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.150560+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.150741+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.150881+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 115.903709412s of 116.008216858s, submitted: 15
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 61284352 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.151035+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 297 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c49fb340
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e6f64000/0x0/0x4ffc00000, data 0x21e6fa9/0x2388000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.151175+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 61259776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7022800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.151283+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 61308928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e5f61000/0x0/0x4ffc00000, data 0x31e8b45/0x338b000, compress 0x0/0x0/0x0, omap 0x67dca, meta 0x168a8236), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 ms_handle_reset con 0x55d7c7022800 session 0x55d7c9f3ca80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.151421+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834415 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.151585+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.151727+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.151882+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.152047+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.152423+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834415 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.152570+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.152667+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 61284352 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.152767+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.152894+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.499187469s of 13.609086990s, submitted: 18
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.153050+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.153232+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.153391+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.153545+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.153704+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 61243392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.153927+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355237888 unmapped: 61194240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.154179+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.154319+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.154477+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.154713+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.154961+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.155162+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.155315+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.155433+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.155595+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.155780+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.155981+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.156369+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.156740+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.157412+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.158309+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.158499+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cf400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.959863663s of 22.148071289s, submitted: 106
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.159478+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 299 ms_handle_reset con 0x55d7c79cf400 session 0x55d7c75e2700
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.159783+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.160104+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.160369+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e775a000/0x0/0x4ffc00000, data 0x19ec2be/0x1b90000, compress 0x0/0x0/0x0, omap 0x68226, meta 0x168a7dda), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715014 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.160552+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.160986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 61153280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.161420+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 61153280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.161713+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e775a000/0x0/0x4ffc00000, data 0x19ec2be/0x1b90000, compress 0x0/0x0/0x0, omap 0x68226, meta 0x168a7dda), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.162455+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.162734+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.163660+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.164300+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 61136896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.164638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 61136896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.164963+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.165725+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.165930+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.166479+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.166788+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.167028+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.167427+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.167712+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.167925+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.168211+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.168412+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.168705+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.169106+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.169556+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.169731+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.170001+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.170254+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.170444+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.170655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.170862+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.171051+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.171211+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.171432+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.171635+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.171924+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.172149+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.172376+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.172560+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.172735+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.172881+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.173124+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.173335+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.173498+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.173613+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.173783+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.174020+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.174131+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.174274+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.174457+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.174689+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.174913+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.175059+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.175227+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.175403+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.175554+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.175768+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.175917+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.176097+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.176253+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.176404+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.176565+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.176774+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.177050+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.177278+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.177440+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.177626+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.177794+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.177920+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.178162+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.178353+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.178518+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.178697+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.178827+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.178960+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.179156+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.179362+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.179557+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.179824+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.180028+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.180181+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.180366+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.180619+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 61005824 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.180779+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.180973+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.181174+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.181381+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.181590+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.181751+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.181912+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.182047+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.182213+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.182356+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.182498+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.182628+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.182773+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.182987+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.183148+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.183369+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.183574+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.183769+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.183933+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.184111+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.184265+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.184458+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.184648+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.184837+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.185013+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.185207+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.185377+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.185564+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.185717+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.185885+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.186049+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.186225+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.186412+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.186655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.186766+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.186962+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.187160+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.187341+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.187503+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.187658+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.187870+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.188105+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.188320+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.188524+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.188740+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.188898+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.189043+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.189219+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.189485+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.189692+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.189897+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.190141+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.190376+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.190577+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.190725+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.190929+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.191139+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.191302+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.191434+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.191593+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.191795+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.191986+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.192216+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.192436+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.192732+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.192935+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.193159+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.193416+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.193650+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.193847+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.194023+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.194660+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.195143+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.195733+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.195923+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.196277+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.196560+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.196843+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.197137+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.197420+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.197611+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.197903+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.198131+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.198365+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.198531+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.198773+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.198984+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.199222+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.199424+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.199677+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.199861+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.200138+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.200397+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.200700+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.200942+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.201198+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.201362+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.201544+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.201772+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.201994+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.202132+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.202269+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.202418+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.206382+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.206545+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.206755+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.207352+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.208111+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.208725+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:20.208907+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.209376+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 60792832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.209803+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 60792832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.210205+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 60784640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.210638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 60784640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.211002+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.211363+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.211644+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.211833+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.212169+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.212443+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.212650+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.212870+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.213015+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.213232+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.213455+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.213617+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.213818+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.214024+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.214208+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.214458+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.214680+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.214832+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.215016+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.215270+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.215469+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.215678+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.215923+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355696640 unmapped: 60735488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.216197+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355696640 unmapped: 60735488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.216368+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 60727296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.216523+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.216767+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.216962+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.217210+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.217439+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.217710+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.217920+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.218097+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.218250+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.218461+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.218703+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.218858+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.219041+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.219230+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.219449+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.219612+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.219718+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.219940+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.220103+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.220341+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.220533+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.220713+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.220869+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.221116+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.221330+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.221523+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.221686+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.221899+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.222106+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355753984 unmapped: 60678144 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.222995+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 60661760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.223301+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 60661760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.223860+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.227662+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.229839+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.230371+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.232674+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.232855+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.234151+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.234322+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.234804+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.234995+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.235936+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.236178+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.236929+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.237171+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.237801+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.237967+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.238442+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.238632+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.238959+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.239189+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.239385+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.239575+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.239725+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.239951+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.240188+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.240385+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.240548+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.240711+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.240869+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.241011+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.241236+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.241413+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.241661+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.242012+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 65208320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.242204+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.242337+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.242583+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.242740+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.242872+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.243011+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.243194+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.243348+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.243613+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.243842+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.243981+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.244127+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.244260+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.244417+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.244612+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.244772+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.244956+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.245132+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.245288+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.245456+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.245589+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.245736+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.245870+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.245972+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.246135+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.246327+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.246505+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.246633+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.246760+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.247014+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.247157+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.247319+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.247460+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.247636+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.247778+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.247909+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.248054+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.248229+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.248419+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.248613+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.248770+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.248968+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.249133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.249297+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.249541+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.249711+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.249914+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.250055+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.250255+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.250593+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.250751+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.250923+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.251112+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.251280+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.251461+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.251649+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.251860+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.252043+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.252270+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.252511+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.252649+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 65060864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.252810+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 65060864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.252967+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.253119+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.253269+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.253442+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.253629+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.253822+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.254108+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.254310+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.254471+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.254655+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.254891+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.255167+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.255340+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 65028096 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.255529+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 65028096 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.255760+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 65011712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.255981+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 65011712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6c94400
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 366.807525635s of 366.853698730s, submitted: 29
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 300 handle_osd_map epochs [300,301], i have 301, src has [1,301]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 301 ms_handle_reset con 0x55d7c6c94400 session 0x55d7c6e9d880
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.256203+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.256415+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7f54000/0x0/0x4ffc00000, data 0x11ef90a/0x1395000, compress 0x0/0x0/0x0, omap 0x68c4a, meta 0x168a73b6), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.256697+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7f54000/0x0/0x4ffc00000, data 0x11ef90a/0x1395000, compress 0x0/0x0/0x0, omap 0x68c4a, meta 0x168a73b6), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 302 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c87bb180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601434 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.256907+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.257116+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.257291+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x1f14d7/0x397000, compress 0x0/0x0/0x0, omap 0x68ff1, meta 0x168a700f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.257474+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 67592192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f50000/0x0/0x4ffc00000, data 0x1f2f72/0x39a000, compress 0x0/0x0/0x0, omap 0x69116, meta 0x168a6eea), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.257664+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 67592192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.257817+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604208 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 304 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c4b26a80
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 67575808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.257966+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348864512 unmapped: 67567616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.258142+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 67559424 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.258375+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4c000/0x0/0x4ffc00000, data 0x1f4b4d/0x39e000, compress 0x0/0x0/0x0, omap 0x691d1, meta 0x168a6e2f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 67559424 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4c000/0x0/0x4ffc00000, data 0x1f4b4d/0x39e000, compress 0x0/0x0/0x0, omap 0x691d1, meta 0x168a6e2f), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.871269226s of 11.983715057s, submitted: 57
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.258594+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 67502080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.258800+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 67502080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.258932+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.259191+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.259337+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.259476+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.259610+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.259741+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.259879+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.260140+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.260329+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.260520+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.260706+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.260911+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.261121+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.261271+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.261471+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.261608+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.261811+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.262589+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.262740+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.262895+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.263103+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.263261+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.263424+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.263608+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.263769+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67420160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.263967+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.264136+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.264361+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.264533+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.264711+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.264927+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.265149+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.265341+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.265485+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.265646+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 67403776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.265768+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 67395584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.265864+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.266041+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.266237+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.266388+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.266564+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.266701+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.266843+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.266988+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.267149+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7022800
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.325626373s of 47.337322235s, submitted: 10
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.267286+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.267474+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 306 ms_handle_reset con 0x55d7c7022800 session 0x55d7c78c1180
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 67338240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.267666+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 67338240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f47000/0x0/0x4ffc00000, data 0x1f8199/0x3a3000, compress 0x0/0x0/0x0, omap 0x69c08, meta 0x168a63f8), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.267853+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 67330048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.267988+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613512 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.268153+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.268335+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.268555+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 67305472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.268733+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 67305472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.268868+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 67297280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.269022+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 67297280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.269171+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.269343+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.269543+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.269715+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.269880+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.270047+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.271024+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.271227+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.271429+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.272168+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.274658+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.274989+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.275223+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.275431+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.275600+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.275779+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.275935+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.276193+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.276551+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.276917+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.277202+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.277547+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.277739+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.277906+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.278107+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.278265+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.278428+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.278623+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.278841+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.279133+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.279414+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.279651+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.279877+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.280108+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.280336+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.280565+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.280757+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.280888+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.281117+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 67207168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.281323+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 67207168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.281514+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.281707+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.281863+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.281968+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.282165+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.282356+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.282503+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.282679+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.282813+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.282988+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.284611+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.284819+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.285022+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.285186+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.285318+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.285493+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.285685+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 67166208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.285866+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 67166208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.286018+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.286152+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.286304+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.286494+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.286638+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.286832+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.287021+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349282304 unmapped: 67149824 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.287139+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.287267+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.287388+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.287530+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.287661+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.329916+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:34.330116+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:35.330252+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 67117056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:36.330410+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:37.330558+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:38.330691+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:39.330872+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}'
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 67100672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:40.331437+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:04:12 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:41.331595+0000)
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:04:12 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:04:12 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:04:12 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:04:12 compute-0 ceph-osd[87202]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:04:12 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23072 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:04:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.585413901 +0000 UTC m=+0.054403242 container create e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 11:04:12 compute-0 systemd[1]: Started libpod-conmon-e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6.scope.
Feb 28 11:04:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.566259629 +0000 UTC m=+0.035248990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.665928341 +0000 UTC m=+0.134917702 container init e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.670744028 +0000 UTC m=+0.139733369 container start e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:04:12 compute-0 friendly_galileo[397996]: 167 167
Feb 28 11:04:12 compute-0 systemd[1]: libpod-e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6.scope: Deactivated successfully.
Feb 28 11:04:12 compute-0 conmon[397996]: conmon e752105455ef55f31897 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6.scope/container/memory.events
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.673985009 +0000 UTC m=+0.142974340 container attach e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.676422079 +0000 UTC m=+0.145411420 container died e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 11:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa393c23592c2013b19009c1fb2682ad659fe2fcb9dade7b04480917354943b-merged.mount: Deactivated successfully.
Feb 28 11:04:12 compute-0 podman[397960]: 2026-02-28 11:04:12.708698083 +0000 UTC m=+0.177687424 container remove e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:04:12 compute-0 systemd[1]: libpod-conmon-e752105455ef55f31897ca44b4af35efb8db0afa5d1cbde87087ce3f50178de6.scope: Deactivated successfully.
Feb 28 11:04:12 compute-0 podman[398040]: 2026-02-28 11:04:12.844130168 +0000 UTC m=+0.051017696 container create 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:04:12 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23076 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:12 compute-0 systemd[1]: Started libpod-conmon-795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f.scope.
Feb 28 11:04:12 compute-0 podman[398040]: 2026-02-28 11:04:12.81841912 +0000 UTC m=+0.025306628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:12 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe74307e06ab8ac9e8eb34e53ebebffe6fff76f3400071cb0b60094a1eb78982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe74307e06ab8ac9e8eb34e53ebebffe6fff76f3400071cb0b60094a1eb78982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe74307e06ab8ac9e8eb34e53ebebffe6fff76f3400071cb0b60094a1eb78982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe74307e06ab8ac9e8eb34e53ebebffe6fff76f3400071cb0b60094a1eb78982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:12 compute-0 podman[398040]: 2026-02-28 11:04:12.936184935 +0000 UTC m=+0.143072463 container init 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 11:04:12 compute-0 podman[398040]: 2026-02-28 11:04:12.943357998 +0000 UTC m=+0.150245506 container start 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:04:12 compute-0 podman[398040]: 2026-02-28 11:04:12.948071452 +0000 UTC m=+0.154959000 container attach 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:04:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 28 11:04:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2586124495' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 28 11:04:13 compute-0 naughty_shirley[398059]: {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     "0": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "devices": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "/dev/loop3"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             ],
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_name": "ceph_lv0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_size": "21470642176",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "name": "ceph_lv0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "tags": {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_name": "ceph",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.crush_device_class": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.encrypted": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.objectstore": "bluestore",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_id": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.vdo": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.with_tpm": "0"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             },
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "vg_name": "ceph_vg0"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         }
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     ],
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     "1": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "devices": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "/dev/loop4"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             ],
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_name": "ceph_lv1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_size": "21470642176",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "name": "ceph_lv1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "tags": {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_name": "ceph",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.crush_device_class": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.encrypted": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.objectstore": "bluestore",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_id": "1",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.vdo": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.with_tpm": "0"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             },
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "vg_name": "ceph_vg1"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         }
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     ],
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     "2": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "devices": [
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "/dev/loop5"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             ],
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_name": "ceph_lv2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_size": "21470642176",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "name": "ceph_lv2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "tags": {
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.cluster_name": "ceph",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.crush_device_class": "",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.encrypted": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.objectstore": "bluestore",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osd_id": "2",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.vdo": "0",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:                 "ceph.with_tpm": "0"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             },
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "type": "block",
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:             "vg_name": "ceph_vg2"
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:         }
Feb 28 11:04:13 compute-0 naughty_shirley[398059]:     ]
Feb 28 11:04:13 compute-0 naughty_shirley[398059]: }
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='client.23066 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='client.23064 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:13 compute-0 ceph-mon[76304]: pgmap v3018: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='client.23068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='client.23070 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:04:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2586124495' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 28 11:04:13 compute-0 systemd[1]: libpod-795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f.scope: Deactivated successfully.
Feb 28 11:04:13 compute-0 podman[398040]: 2026-02-28 11:04:13.213699225 +0000 UTC m=+0.420586713 container died 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe74307e06ab8ac9e8eb34e53ebebffe6fff76f3400071cb0b60094a1eb78982-merged.mount: Deactivated successfully.
Feb 28 11:04:13 compute-0 podman[398040]: 2026-02-28 11:04:13.254159801 +0000 UTC m=+0.461047289 container remove 795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:04:13 compute-0 systemd[1]: libpod-conmon-795069bb9bd7923aaab6dca92fd80916eff8c44e4eae380bbcfe0ba9d337087f.scope: Deactivated successfully.
Feb 28 11:04:13 compute-0 sudo[397899]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:13 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23080 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:13 compute-0 sudo[398109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:04:13 compute-0 sudo[398109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:13 compute-0 sudo[398109]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:13 compute-0 nova_compute[243452]: 2026-02-28 11:04:13.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:13 compute-0 sudo[398158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:04:13 compute-0 sudo[398158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.704803833 +0000 UTC m=+0.043300207 container create fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:04:13 compute-0 systemd[1]: Started libpod-conmon-fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf.scope.
Feb 28 11:04:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Feb 28 11:04:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257471178' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 28 11:04:13 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:13 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23084 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.684099177 +0000 UTC m=+0.022595561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.788573366 +0000 UTC m=+0.127069750 container init fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.798302841 +0000 UTC m=+0.136799195 container start fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.802454659 +0000 UTC m=+0.140951043 container attach fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:04:13 compute-0 mystifying_archimedes[398257]: 167 167
Feb 28 11:04:13 compute-0 systemd[1]: libpod-fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf.scope: Deactivated successfully.
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.8038892 +0000 UTC m=+0.142385574 container died fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 28 11:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a44191771e1375ff17829cd5d20cf95470e094c94a3b871ee607dc0894689a08-merged.mount: Deactivated successfully.
Feb 28 11:04:13 compute-0 podman[398231]: 2026-02-28 11:04:13.859212157 +0000 UTC m=+0.197708561 container remove fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 11:04:13 compute-0 systemd[1]: libpod-conmon-fabacd5eba5b3e5d1d1208015373af9ba67c8dc3d5cc846d15ad1eb5e8c3fbdf.scope: Deactivated successfully.
Feb 28 11:04:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:13 compute-0 podman[398319]: 2026-02-28 11:04:13.990123614 +0000 UTC m=+0.032825471 container create ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:04:14 compute-0 systemd[1]: Started libpod-conmon-ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae.scope.
Feb 28 11:04:14 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be40972a6e1d42a9c1d471424ece2d6c896f76244d58ea6234b21c9345c89e9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be40972a6e1d42a9c1d471424ece2d6c896f76244d58ea6234b21c9345c89e9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be40972a6e1d42a9c1d471424ece2d6c896f76244d58ea6234b21c9345c89e9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be40972a6e1d42a9c1d471424ece2d6c896f76244d58ea6234b21c9345c89e9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:04:14 compute-0 podman[398319]: 2026-02-28 11:04:14.064094289 +0000 UTC m=+0.106796146 container init ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:04:14 compute-0 podman[398319]: 2026-02-28 11:04:13.976156169 +0000 UTC m=+0.018858026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:04:14 compute-0 podman[398319]: 2026-02-28 11:04:14.074120443 +0000 UTC m=+0.116822300 container start ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 11:04:14 compute-0 podman[398319]: 2026-02-28 11:04:14.077797587 +0000 UTC m=+0.120499444 container attach ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:04:14 compute-0 ceph-mon[76304]: from='client.23072 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:14 compute-0 ceph-mon[76304]: from='client.23076 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3257471178' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 28 11:04:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2442455232' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.353 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.353 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.354 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:04:14 compute-0 lvm[398476]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:04:14 compute-0 lvm[398476]: VG ceph_vg1 finished
Feb 28 11:04:14 compute-0 lvm[398475]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:04:14 compute-0 lvm[398475]: VG ceph_vg0 finished
Feb 28 11:04:14 compute-0 lvm[398481]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:04:14 compute-0 lvm[398481]: VG ceph_vg2 finished
Feb 28 11:04:14 compute-0 lvm[398484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:04:14 compute-0 lvm[398484]: VG ceph_vg0 finished
Feb 28 11:04:14 compute-0 naughty_napier[398335]: {}
Feb 28 11:04:14 compute-0 podman[398319]: 2026-02-28 11:04:14.784376657 +0000 UTC m=+0.827078534 container died ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 28 11:04:14 compute-0 systemd[1]: Starting Hostname Service...
Feb 28 11:04:14 compute-0 systemd[1]: libpod-ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae.scope: Deactivated successfully.
Feb 28 11:04:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 28 11:04:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079457849' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 28 11:04:14 compute-0 systemd[1]: Started Hostname Service.
Feb 28 11:04:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:04:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322367706' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:04:14 compute-0 nova_compute[243452]: 2026-02-28 11:04:14.930 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:04:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 28 11:04:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.082 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.083 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3261MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.083 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.084 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 28 11:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-be40972a6e1d42a9c1d471424ece2d6c896f76244d58ea6234b21c9345c89e9d-merged.mount: Deactivated successfully.
Feb 28 11:04:15 compute-0 podman[398319]: 2026-02-28 11:04:15.188159583 +0000 UTC m=+1.230861440 container remove ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_napier, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 11:04:15 compute-0 systemd[1]: libpod-conmon-ca3f6ba71328809d3ec2d2cf0e571ae5fa099bb2f34066c479800c89ddbf98ae.scope: Deactivated successfully.
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='client.23080 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='client.23084 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: pgmap v3019: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2442455232' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3079457849' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1322367706' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 28 11:04:15 compute-0 sudo[398158]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.257 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.259 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.299 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:04:15 compute-0 sudo[398638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:04:15 compute-0 sudo[398638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:04:15 compute-0 sudo[398638]: pam_unix(sudo:session): session closed for user root
Feb 28 11:04:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187588875' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 28 11:04:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:04:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246517195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.828 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.832 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.860 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.862 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:04:15 compute-0 nova_compute[243452]: 2026-02-28 11:04:15.862 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:04:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:15 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23104 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:04:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1187588875' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 28 11:04:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1246517195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:04:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 28 11:04:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797268771' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 28 11:04:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 456 writes, 1331 keys, 456 commit groups, 1.0 writes per commit group, ingest: 0.61 MB, 0.00 MB/s
                                           Interval WAL: 456 writes, 212 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:16 compute-0 nova_compute[243452]: 2026-02-28 11:04:16.857 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:04:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Feb 28 11:04:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250578942' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 28 11:04:17 compute-0 ceph-mon[76304]: pgmap v3020: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:17 compute-0 ceph-mon[76304]: from='client.23104 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2797268771' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 28 11:04:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1250578942' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 28 11:04:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 28 11:04:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422988324' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 28 11:04:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 28 11:04:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701396052' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 28 11:04:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/422988324' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 28 11:04:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/701396052' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 28 11:04:18 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23114 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:18 compute-0 nova_compute[243452]: 2026-02-28 11:04:18.382 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 28 11:04:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16567938' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 28 11:04:19 compute-0 ceph-mon[76304]: pgmap v3021: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/16567938' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 28 11:04:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb 28 11:04:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735640254' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 28 11:04:19 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23120 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Feb 28 11:04:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089606488' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 28 11:04:20 compute-0 ceph-mon[76304]: from='client.23114 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2735640254' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 28 11:04:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4089606488' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 28 11:04:20 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23124 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:21 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23126 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:21 compute-0 ceph-mon[76304]: from='client.23120 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:21 compute-0 ceph-mon[76304]: pgmap v3022: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:21 compute-0 ceph-mon[76304]: from='client.23124 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:21 compute-0 ceph-mon[76304]: from='client.23126 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Feb 28 11:04:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853597052' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 28 11:04:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Feb 28 11:04:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999180106' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Feb 28 11:04:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/853597052' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 28 11:04:22 compute-0 ceph-mon[76304]: pgmap v3023: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2999180106' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23132 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23134 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:22 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:04:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.6 total, 600.0 interval
                                           Cumulative writes: 49K writes, 189K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.66 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 609 writes, 1700 keys, 609 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s
                                           Interval WAL: 609 writes, 282 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Feb 28 11:04:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97967321' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Feb 28 11:04:23 compute-0 ceph-mon[76304]: from='client.23132 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:23 compute-0 ceph-mon[76304]: from='client.23134 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/97967321' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Feb 28 11:04:23 compute-0 nova_compute[243452]: 2026-02-28 11:04:23.383 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Feb 28 11:04:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209240196' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Feb 28 11:04:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:24 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23140 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1209240196' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Feb 28 11:04:24 compute-0 ceph-mon[76304]: pgmap v3024: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:24 compute-0 ceph-mon[76304]: from='client.23140 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:24 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23142 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 11:04:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337305753' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:04:25 compute-0 podman[400147]: 2026-02-28 11:04:25.15488148 +0000 UTC m=+0.078501555 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:04:25 compute-0 podman[400142]: 2026-02-28 11:04:25.22835275 +0000 UTC m=+0.150193324 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 11:04:25 compute-0 ceph-mon[76304]: from='client.23142 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:04:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/337305753' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:04:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Feb 28 11:04:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1979052588' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Feb 28 11:04:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:26 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Feb 28 11:04:26 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402922608' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1979052588' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Feb 28 11:04:26 compute-0 ceph-mon[76304]: pgmap v3025: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3402922608' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:26 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23150 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:26 compute-0 ovs-appctl[400749]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 28 11:04:26 compute-0 ovs-appctl[400753]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 28 11:04:26 compute-0 ovs-appctl[400757]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 28 11:04:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Feb 28 11:04:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183133842' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:27 compute-0 ceph-mon[76304]: from='client.23150 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4183133842' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Feb 28 11:04:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649470922' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Feb 28 11:04:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Feb 28 11:04:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447053288' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:28 compute-0 nova_compute[243452]: 2026-02-28 11:04:28.385 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1649470922' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Feb 28 11:04:28 compute-0 ceph-mon[76304]: pgmap v3026: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:28 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/447053288' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Feb 28 11:04:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058094226' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:04:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 542 writes, 1588 keys, 542 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                           Interval WAL: 542 writes, 244 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:04:28 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23160 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:04:29
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'vms']
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:04:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3058094226' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:29 compute-0 ceph-mon[76304]: from='client.23160 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Feb 28 11:04:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051734758' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Feb 28 11:04:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987419560' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:04:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1051734758' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:30 compute-0 ceph-mon[76304]: pgmap v3027: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1987419560' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:30 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23166 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Feb 28 11:04:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457957666' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23170 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:31 compute-0 ceph-mon[76304]: from='client.23166 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3457957666' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Feb 28 11:04:31 compute-0 ceph-mon[76304]: from='client.23170 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23172 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Feb 28 11:04:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944515867' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:32 compute-0 ceph-mon[76304]: from='client.23172 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:32 compute-0 ceph-mon[76304]: pgmap v3028: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:32 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2944515867' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Feb 28 11:04:32 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Feb 28 11:04:32 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476021574' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23178 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:33 compute-0 nova_compute[243452]: 2026-02-28 11:04:33.387 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:33 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/476021574' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Feb 28 11:04:33 compute-0 ceph-mon[76304]: from='client.23178 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23180 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:04:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Feb 28 11:04:33 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3722679556' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Feb 28 11:04:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2034929054' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:34 compute-0 ceph-mon[76304]: from='client.23180 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:34 compute-0 ceph-mon[76304]: pgmap v3029: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3722679556' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:04:34 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2034929054' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Feb 28 11:04:34 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23186 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:35 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23188 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:35 compute-0 ceph-mon[76304]: from='client.23186 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:35 compute-0 ceph-mon[76304]: from='client.23188 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:04:35 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 28 11:04:35 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607915553' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 11:04:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:36 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Feb 28 11:04:36 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2200733812' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Feb 28 11:04:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2607915553' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 28 11:04:36 compute-0 ceph-mon[76304]: pgmap v3030: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:36 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2200733812' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Feb 28 11:04:36 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 28 11:04:37 compute-0 systemd[1]: Starting Time & Date Service...
Feb 28 11:04:37 compute-0 systemd[1]: Started Time & Date Service.
Feb 28 11:04:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:38 compute-0 nova_compute[243452]: 2026-02-28 11:04:38.390 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:38 compute-0 ceph-mon[76304]: pgmap v3031: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:40 compute-0 ceph-mon[76304]: pgmap v3032: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:04:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:42 compute-0 ceph-mon[76304]: pgmap v3033: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:43 compute-0 nova_compute[243452]: 2026-02-28 11:04:43.391 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:43 compute-0 nova_compute[243452]: 2026-02-28 11:04:43.394 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:45 compute-0 ceph-mon[76304]: pgmap v3034: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:04:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098691062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:04:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:04:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3098691062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:04:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3098691062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:04:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3098691062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:04:47 compute-0 ceph-mon[76304]: pgmap v3035: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.396 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5008 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.403 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:04:48 compute-0 nova_compute[243452]: 2026-02-28 11:04:48.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:49 compute-0 ceph-mon[76304]: pgmap v3036: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:51 compute-0 ceph-mon[76304]: pgmap v3037: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:52 compute-0 ceph-mon[76304]: pgmap v3038: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:53 compute-0 nova_compute[243452]: 2026-02-28 11:04:53.410 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:04:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:54 compute-0 ceph-mon[76304]: pgmap v3039: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:55 compute-0 podman[402395]: 2026-02-28 11:04:55.596020167 +0000 UTC m=+0.086347377 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 28 11:04:55 compute-0 podman[402394]: 2026-02-28 11:04:55.628948899 +0000 UTC m=+0.118453336 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 28 11:04:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:56 compute-0 ceph-mon[76304]: pgmap v3040: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:04:57.910 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:04:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:04:57.911 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:04:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:04:57.912 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:04:58 compute-0 nova_compute[243452]: 2026-02-28 11:04:58.411 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:58 compute-0 nova_compute[243452]: 2026-02-28 11:04:58.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:04:58 compute-0 ceph-mon[76304]: pgmap v3041: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:04:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:04:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:01 compute-0 ceph-mon[76304]: pgmap v3042: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:03 compute-0 ceph-mon[76304]: pgmap v3043: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:03 compute-0 nova_compute[243452]: 2026-02-28 11:05:03.413 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:03 compute-0 nova_compute[243452]: 2026-02-28 11:05:03.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:05 compute-0 ceph-mon[76304]: pgmap v3044: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:05 compute-0 nova_compute[243452]: 2026-02-28 11:05:05.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:05 compute-0 nova_compute[243452]: 2026-02-28 11:05:05.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:06 compute-0 nova_compute[243452]: 2026-02-28 11:05:06.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:07 compute-0 ceph-mon[76304]: pgmap v3045: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:07 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 28 11:05:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 28 11:05:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:08 compute-0 nova_compute[243452]: 2026-02-28 11:05:08.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:08 compute-0 nova_compute[243452]: 2026-02-28 11:05:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:08 compute-0 nova_compute[243452]: 2026-02-28 11:05:08.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:05:08 compute-0 nova_compute[243452]: 2026-02-28 11:05:08.414 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:08 compute-0 nova_compute[243452]: 2026-02-28 11:05:08.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:09 compute-0 ceph-mon[76304]: pgmap v3046: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:09 compute-0 nova_compute[243452]: 2026-02-28 11:05:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:11 compute-0 ceph-mon[76304]: pgmap v3047: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:12 compute-0 nova_compute[243452]: 2026-02-28 11:05:12.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:12 compute-0 nova_compute[243452]: 2026-02-28 11:05:12.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:05:12 compute-0 nova_compute[243452]: 2026-02-28 11:05:12.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:05:12 compute-0 nova_compute[243452]: 2026-02-28 11:05:12.337 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:05:13 compute-0 ceph-mon[76304]: pgmap v3048: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:13 compute-0 nova_compute[243452]: 2026-02-28 11:05:13.415 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:13 compute-0 nova_compute[243452]: 2026-02-28 11:05:13.417 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:15 compute-0 ceph-mon[76304]: pgmap v3049: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.362 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.363 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.363 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.363 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.364 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:05:15 compute-0 sudo[402443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:05:15 compute-0 sudo[402443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:15 compute-0 sudo[402443]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:15 compute-0 sudo[402469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:05:15 compute-0 sudo[402469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 11:05:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:05:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292359532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:05:15 compute-0 nova_compute[243452]: 2026-02-28 11:05:15.946 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:05:16 compute-0 sudo[402469]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:05:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.186 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.188 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.188 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.188 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2292359532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:05:16 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:05:16 compute-0 sudo[402547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:05:16 compute-0 sudo[402547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:16 compute-0 sudo[402547]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:16 compute-0 sudo[402572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:05:16 compute-0 sudo[402572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.418 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.418 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.529 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.619616753 +0000 UTC m=+0.061104302 container create 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.653 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.653 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 11:05:16 compute-0 systemd[1]: Started libpod-conmon-91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b.scope.
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.672 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.592782243 +0000 UTC m=+0.034269792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.691 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 11:05:16 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:16 compute-0 nova_compute[243452]: 2026-02-28 11:05:16.711 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.71838366 +0000 UTC m=+0.159871249 container init 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.732378866 +0000 UTC m=+0.173866405 container start 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.736487733 +0000 UTC m=+0.177975332 container attach 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 11:05:16 compute-0 stoic_feistel[402625]: 167 167
Feb 28 11:05:16 compute-0 systemd[1]: libpod-91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b.scope: Deactivated successfully.
Feb 28 11:05:16 compute-0 conmon[402625]: conmon 91cb1d468e98daa9309f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b.scope/container/memory.events
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.741285439 +0000 UTC m=+0.182772998 container died 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 28 11:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-558f630a23024082fe91975f9be8bbe38e650b7cf5525383cc30fa6a9e7fdd58-merged.mount: Deactivated successfully.
Feb 28 11:05:16 compute-0 podman[402609]: 2026-02-28 11:05:16.794147656 +0000 UTC m=+0.235635205 container remove 91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:05:16 compute-0 systemd[1]: libpod-conmon-91cb1d468e98daa9309f7fa09bedef9d6737a4db832ad249f52b196316bc8e0b.scope: Deactivated successfully.
Feb 28 11:05:16 compute-0 podman[402669]: 2026-02-28 11:05:16.950224996 +0000 UTC m=+0.041773534 container create 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 11:05:16 compute-0 systemd[1]: Started libpod-conmon-7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44.scope.
Feb 28 11:05:17 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:16.930839417 +0000 UTC m=+0.022387995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:17.030296544 +0000 UTC m=+0.121845092 container init 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:17.037131467 +0000 UTC m=+0.128680045 container start 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:17.067215639 +0000 UTC m=+0.158764177 container attach 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 11:05:17 compute-0 ceph-mon[76304]: pgmap v3050: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 28 11:05:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:05:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592787947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:05:17 compute-0 nova_compute[243452]: 2026-02-28 11:05:17.300 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:05:17 compute-0 nova_compute[243452]: 2026-02-28 11:05:17.307 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:05:17 compute-0 nova_compute[243452]: 2026-02-28 11:05:17.323 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:05:17 compute-0 nova_compute[243452]: 2026-02-28 11:05:17.325 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:05:17 compute-0 nova_compute[243452]: 2026-02-28 11:05:17.326 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:05:17 compute-0 tender_gates[402687]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:05:17 compute-0 tender_gates[402687]: --> All data devices are unavailable
Feb 28 11:05:17 compute-0 systemd[1]: libpod-7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44.scope: Deactivated successfully.
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:17.42810696 +0000 UTC m=+0.519655538 container died 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 11:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca87c547e7747cc21fb272f733b5b9a5294c9ff4405797880df9128f1964ada1-merged.mount: Deactivated successfully.
Feb 28 11:05:17 compute-0 podman[402669]: 2026-02-28 11:05:17.598109635 +0000 UTC m=+0.689658203 container remove 7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:05:17 compute-0 systemd[1]: libpod-conmon-7906feeac532268659ccd4f4690156854e7fe225b5522be14e802cad06ff5a44.scope: Deactivated successfully.
Feb 28 11:05:17 compute-0 sudo[402572]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:17 compute-0 sudo[402723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:05:17 compute-0 sudo[402723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:17 compute-0 sudo[402723]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:17 compute-0 sudo[402748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:05:17 compute-0 sudo[402748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.092013873 +0000 UTC m=+0.066111894 container create eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:05:18 compute-0 systemd[1]: Started libpod-conmon-eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0.scope.
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.048528381 +0000 UTC m=+0.022626392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:18 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.198456207 +0000 UTC m=+0.172554268 container init eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.210614562 +0000 UTC m=+0.184712613 container start eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:05:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3592787947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:05:18 compute-0 silly_neumann[402799]: 167 167
Feb 28 11:05:18 compute-0 systemd[1]: libpod-eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0.scope: Deactivated successfully.
Feb 28 11:05:18 compute-0 conmon[402799]: conmon eafe70bef1ad2142c629 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0.scope/container/memory.events
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.222247281 +0000 UTC m=+0.196345332 container attach eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.222961071 +0000 UTC m=+0.197059092 container died eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 28 11:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-18cec72276e49517665c1d0c694ede10c98794d62e0856e8b302bab0df1a6858-merged.mount: Deactivated successfully.
Feb 28 11:05:18 compute-0 podman[402783]: 2026-02-28 11:05:18.27059863 +0000 UTC m=+0.244696651 container remove eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 11:05:18 compute-0 systemd[1]: libpod-conmon-eafe70bef1ad2142c629d88793264df171d2d753ac784880d8b3bb3f47e3cce0.scope: Deactivated successfully.
Feb 28 11:05:18 compute-0 nova_compute[243452]: 2026-02-28 11:05:18.326 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:05:18 compute-0 nova_compute[243452]: 2026-02-28 11:05:18.419 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:05:18 compute-0 nova_compute[243452]: 2026-02-28 11:05:18.420 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.488216764 +0000 UTC m=+0.094053325 container create 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.415186315 +0000 UTC m=+0.021022906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:18 compute-0 systemd[1]: Started libpod-conmon-0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6.scope.
Feb 28 11:05:18 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643259221b5792dceebb595db3106e4ce5d4b7e7cd8ba0900a1bb9f7062fe03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643259221b5792dceebb595db3106e4ce5d4b7e7cd8ba0900a1bb9f7062fe03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643259221b5792dceebb595db3106e4ce5d4b7e7cd8ba0900a1bb9f7062fe03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8643259221b5792dceebb595db3106e4ce5d4b7e7cd8ba0900a1bb9f7062fe03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.634196558 +0000 UTC m=+0.240033199 container init 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.641877305 +0000 UTC m=+0.247713866 container start 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.645818507 +0000 UTC m=+0.251655138 container attach 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:05:18 compute-0 sudo[394945]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:18 compute-0 sshd-session[394944]: Received disconnect from 192.168.122.10 port 51960:11: disconnected by user
Feb 28 11:05:18 compute-0 sshd-session[394944]: Disconnected from user zuul 192.168.122.10 port 51960
Feb 28 11:05:18 compute-0 sshd-session[394941]: pam_unix(sshd:session): session closed for user zuul
Feb 28 11:05:18 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Feb 28 11:05:18 compute-0 systemd[1]: session-55.scope: Consumed 2min 59.287s CPU time, 1.0G memory peak, read 610.0M from disk, written 393.1M to disk.
Feb 28 11:05:18 compute-0 systemd-logind[815]: Session 55 logged out. Waiting for processes to exit.
Feb 28 11:05:18 compute-0 systemd-logind[815]: Removed session 55.
Feb 28 11:05:18 compute-0 sshd-session[402846]: Accepted publickey for zuul from 192.168.122.10 port 41636 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 11:05:18 compute-0 systemd-logind[815]: New session 56 of user zuul.
Feb 28 11:05:18 compute-0 systemd[1]: Started Session 56 of User zuul.
Feb 28 11:05:18 compute-0 sshd-session[402846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 11:05:18 compute-0 sudo[402850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-02-28-acwmgdi.tar.xz
Feb 28 11:05:18 compute-0 sudo[402850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 11:05:18 compute-0 stoic_ride[402841]: {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     "0": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "devices": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "/dev/loop3"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             ],
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_name": "ceph_lv0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_size": "21470642176",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "name": "ceph_lv0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "tags": {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_name": "ceph",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.crush_device_class": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.encrypted": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.objectstore": "bluestore",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_id": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.vdo": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.with_tpm": "0"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             },
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "vg_name": "ceph_vg0"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         }
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     ],
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     "1": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "devices": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "/dev/loop4"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             ],
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_name": "ceph_lv1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_size": "21470642176",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "name": "ceph_lv1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "tags": {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_name": "ceph",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.crush_device_class": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.encrypted": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.objectstore": "bluestore",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_id": "1",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.vdo": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.with_tpm": "0"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             },
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "vg_name": "ceph_vg1"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         }
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     ],
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     "2": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "devices": [
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "/dev/loop5"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             ],
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_name": "ceph_lv2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_size": "21470642176",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "name": "ceph_lv2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "tags": {
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.cluster_name": "ceph",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.crush_device_class": "",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.encrypted": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.objectstore": "bluestore",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osd_id": "2",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.vdo": "0",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:                 "ceph.with_tpm": "0"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             },
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "type": "block",
Feb 28 11:05:18 compute-0 stoic_ride[402841]:             "vg_name": "ceph_vg2"
Feb 28 11:05:18 compute-0 stoic_ride[402841]:         }
Feb 28 11:05:18 compute-0 stoic_ride[402841]:     ]
Feb 28 11:05:18 compute-0 stoic_ride[402841]: }
Feb 28 11:05:18 compute-0 systemd[1]: libpod-0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6.scope: Deactivated successfully.
Feb 28 11:05:18 compute-0 podman[402824]: 2026-02-28 11:05:18.990814968 +0000 UTC m=+0.596651619 container died 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 11:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8643259221b5792dceebb595db3106e4ce5d4b7e7cd8ba0900a1bb9f7062fe03-merged.mount: Deactivated successfully.
Feb 28 11:05:19 compute-0 podman[402824]: 2026-02-28 11:05:19.050005473 +0000 UTC m=+0.655842044 container remove 0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 11:05:19 compute-0 systemd[1]: libpod-conmon-0ced5e58d80694139bf45609c5ad83146de477786dde4a779edc956fa3c027d6.scope: Deactivated successfully.
Feb 28 11:05:19 compute-0 sudo[402850]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:19 compute-0 sshd-session[402849]: Received disconnect from 192.168.122.10 port 41636:11: disconnected by user
Feb 28 11:05:19 compute-0 sshd-session[402849]: Disconnected from user zuul 192.168.122.10 port 41636
Feb 28 11:05:19 compute-0 sshd-session[402846]: pam_unix(sshd:session): session closed for user zuul
Feb 28 11:05:19 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Feb 28 11:05:19 compute-0 sudo[402748]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:19 compute-0 systemd-logind[815]: Session 56 logged out. Waiting for processes to exit.
Feb 28 11:05:19 compute-0 systemd-logind[815]: Removed session 56.
Feb 28 11:05:19 compute-0 sudo[402891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:05:19 compute-0 sudo[402891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:19 compute-0 sudo[402891]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:19 compute-0 sshd-session[402890]: Accepted publickey for zuul from 192.168.122.10 port 41652 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 11:05:19 compute-0 systemd-logind[815]: New session 57 of user zuul.
Feb 28 11:05:19 compute-0 systemd[1]: Started Session 57 of User zuul.
Feb 28 11:05:19 compute-0 sudo[402917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:05:19 compute-0 ceph-mon[76304]: pgmap v3051: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 22 op/s
Feb 28 11:05:19 compute-0 sudo[402917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:19 compute-0 sshd-session[402890]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 11:05:19 compute-0 sudo[402944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Feb 28 11:05:19 compute-0 sudo[402944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 11:05:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:19 compute-0 sudo[402944]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:19 compute-0 sshd-session[402943]: Received disconnect from 192.168.122.10 port 41652:11: disconnected by user
Feb 28 11:05:19 compute-0 sshd-session[402943]: Disconnected from user zuul 192.168.122.10 port 41652
Feb 28 11:05:19 compute-0 sshd-session[402890]: pam_unix(sshd:session): session closed for user zuul
Feb 28 11:05:19 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Feb 28 11:05:19 compute-0 systemd-logind[815]: Session 57 logged out. Waiting for processes to exit.
Feb 28 11:05:19 compute-0 systemd-logind[815]: Removed session 57.
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.536949354 +0000 UTC m=+0.051302034 container create 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:05:19 compute-0 systemd[1]: Started libpod-conmon-99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2.scope.
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.513043257 +0000 UTC m=+0.027395947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.62966486 +0000 UTC m=+0.144017590 container init 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.637671676 +0000 UTC m=+0.152024356 container start 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.641783103 +0000 UTC m=+0.156135833 container attach 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 28 11:05:19 compute-0 sweet_yalow[403000]: 167 167
Feb 28 11:05:19 compute-0 systemd[1]: libpod-99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2.scope: Deactivated successfully.
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.645489168 +0000 UTC m=+0.159841858 container died 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 28 11:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ae94d91534428aedd4d44bbb23d210be64e1c4147e3774710e2ee139e2030a1-merged.mount: Deactivated successfully.
Feb 28 11:05:19 compute-0 podman[402983]: 2026-02-28 11:05:19.69147436 +0000 UTC m=+0.205827020 container remove 99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:05:19 compute-0 systemd[1]: libpod-conmon-99e830c8397bbe9d1cf716d7bc29d15a40b24558bdcfd70e10aa5c32aef76bc2.scope: Deactivated successfully.
Feb 28 11:05:19 compute-0 podman[403026]: 2026-02-28 11:05:19.895365085 +0000 UTC m=+0.073974226 container create 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:05:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 24 op/s
Feb 28 11:05:19 compute-0 systemd[1]: Started libpod-conmon-89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1.scope.
Feb 28 11:05:19 compute-0 podman[403026]: 2026-02-28 11:05:19.86553509 +0000 UTC m=+0.044144261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:05:19 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc11f4efb7bc03c856be0141a32b4f556cf917a34a2f9c5901687416dbe8b5da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc11f4efb7bc03c856be0141a32b4f556cf917a34a2f9c5901687416dbe8b5da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc11f4efb7bc03c856be0141a32b4f556cf917a34a2f9c5901687416dbe8b5da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc11f4efb7bc03c856be0141a32b4f556cf917a34a2f9c5901687416dbe8b5da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:05:20 compute-0 podman[403026]: 2026-02-28 11:05:20.000267936 +0000 UTC m=+0.178877087 container init 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 11:05:20 compute-0 podman[403026]: 2026-02-28 11:05:20.011346219 +0000 UTC m=+0.189955330 container start 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:05:20 compute-0 podman[403026]: 2026-02-28 11:05:20.015218499 +0000 UTC m=+0.193827690 container attach 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 28 11:05:20 compute-0 lvm[403120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:05:20 compute-0 lvm[403122]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:05:20 compute-0 lvm[403122]: VG ceph_vg1 finished
Feb 28 11:05:20 compute-0 lvm[403120]: VG ceph_vg0 finished
Feb 28 11:05:20 compute-0 lvm[403124]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:05:20 compute-0 lvm[403124]: VG ceph_vg2 finished
Feb 28 11:05:20 compute-0 peaceful_khayyam[403043]: {}
Feb 28 11:05:20 compute-0 systemd[1]: libpod-89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1.scope: Deactivated successfully.
Feb 28 11:05:20 compute-0 systemd[1]: libpod-89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1.scope: Consumed 1.162s CPU time.
Feb 28 11:05:20 compute-0 podman[403026]: 2026-02-28 11:05:20.838416513 +0000 UTC m=+1.017025614 container died 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 11:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc11f4efb7bc03c856be0141a32b4f556cf917a34a2f9c5901687416dbe8b5da-merged.mount: Deactivated successfully.
Feb 28 11:05:20 compute-0 podman[403026]: 2026-02-28 11:05:20.880734391 +0000 UTC m=+1.059343492 container remove 89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 28 11:05:20 compute-0 systemd[1]: libpod-conmon-89cb7038617cafb230ca8e0233927b570aa81af98761603dbfd2d7c4049633c1.scope: Deactivated successfully.
Feb 28 11:05:20 compute-0 sudo[402917]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:05:20 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:05:20 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:20 compute-0 sudo[403138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:05:20 compute-0 sudo[403138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:05:21 compute-0 sudo[403138]: pam_unix(sudo:session): session closed for user root
Feb 28 11:05:21 compute-0 ceph-mon[76304]: pgmap v3052: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 24 op/s
Feb 28 11:05:21 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:21 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:05:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 41 op/s
Feb 28 11:05:23 compute-0 ceph-mon[76304]: pgmap v3053: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 41 op/s
Feb 28 11:05:23 compute-0 nova_compute[243452]: 2026-02-28 11:05:23.421 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:05:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 60 op/s
Feb 28 11:05:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:25 compute-0 ceph-mon[76304]: pgmap v3054: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 60 op/s
Feb 28 11:05:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 28 11:05:26 compute-0 podman[403164]: 2026-02-28 11:05:26.148809446 +0000 UTC m=+0.081352185 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Feb 28 11:05:26 compute-0 podman[403163]: 2026-02-28 11:05:26.185513546 +0000 UTC m=+0.118616961 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 28 11:05:27 compute-0 ceph-mon[76304]: pgmap v3055: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 28 11:05:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Feb 28 11:05:28 compute-0 nova_compute[243452]: 2026-02-28 11:05:28.424 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:05:29
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'images', 'default.rgw.control']
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:05:29 compute-0 ceph-mon[76304]: pgmap v3056: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Feb 28 11:05:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:05:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:05:31 compute-0 ceph-mon[76304]: pgmap v3057: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 28 11:05:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Feb 28 11:05:33 compute-0 ceph-mon[76304]: pgmap v3058: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Feb 28 11:05:33 compute-0 nova_compute[243452]: 2026-02-28 11:05:33.426 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Feb 28 11:05:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:35 compute-0 sshd-session[403210]: Invalid user sol from 45.148.10.240 port 54716
Feb 28 11:05:35 compute-0 ceph-mon[76304]: pgmap v3059: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Feb 28 11:05:35 compute-0 sshd-session[403210]: Connection closed by invalid user sol 45.148.10.240 port 54716 [preauth]
Feb 28 11:05:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Feb 28 11:05:37 compute-0 ceph-mon[76304]: pgmap v3060: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Feb 28 11:05:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:38 compute-0 nova_compute[243452]: 2026-02-28 11:05:38.428 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:39 compute-0 ceph-mon[76304]: pgmap v3061: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:41 compute-0 ceph-mon[76304]: pgmap v3062: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:05:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:43 compute-0 ceph-mon[76304]: pgmap v3063: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:43 compute-0 nova_compute[243452]: 2026-02-28 11:05:43.431 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.351561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744351617, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1413, "num_deletes": 250, "total_data_size": 1970890, "memory_usage": 2002480, "flush_reason": "Manual Compaction"}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744359404, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1216987, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63388, "largest_seqno": 64800, "table_properties": {"data_size": 1211653, "index_size": 2473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15394, "raw_average_key_size": 21, "raw_value_size": 1199507, "raw_average_value_size": 1684, "num_data_blocks": 111, "num_entries": 712, "num_filter_entries": 712, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276625, "oldest_key_time": 1772276625, "file_creation_time": 1772276744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7900 microseconds, and 3824 cpu microseconds.
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.359463) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1216987 bytes OK
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.359488) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.360991) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.361008) EVENT_LOG_v1 {"time_micros": 1772276744361002, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.361032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1964301, prev total WAL file size 1964301, number of live WAL files 2.
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.361644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353032' seq:72057594037927935, type:22 .. '6D6772737461740032373533' seq:0, type:0; will stop at (end)
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1188KB)], [149(10MB)]
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744361712, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12497904, "oldest_snapshot_seqno": -1}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8528 keys, 9875060 bytes, temperature: kUnknown
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744416708, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9875060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9822057, "index_size": 30554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 222139, "raw_average_key_size": 26, "raw_value_size": 9673938, "raw_average_value_size": 1134, "num_data_blocks": 1186, "num_entries": 8528, "num_filter_entries": 8528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.417022) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9875060 bytes
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.418148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.9 rd, 179.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(18.4) write-amplify(8.1) OK, records in: 8986, records dropped: 458 output_compression: NoCompression
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.418178) EVENT_LOG_v1 {"time_micros": 1772276744418163, "job": 92, "event": "compaction_finished", "compaction_time_micros": 55088, "compaction_time_cpu_micros": 34341, "output_level": 6, "num_output_files": 1, "total_output_size": 9875060, "num_input_records": 8986, "num_output_records": 8528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744418552, "job": 92, "event": "table_file_deletion", "file_number": 151}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276744420916, "job": 92, "event": "table_file_deletion", "file_number": 149}
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.361524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.421032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.421039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.421041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.421043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:44 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:05:44.421045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:05:45 compute-0 ceph-mon[76304]: pgmap v3064: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:05:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3901498200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:05:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:05:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3901498200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:05:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3901498200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:05:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3901498200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:05:47 compute-0 ceph-mon[76304]: pgmap v3065: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:48 compute-0 nova_compute[243452]: 2026-02-28 11:05:48.432 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:49 compute-0 ceph-mon[76304]: pgmap v3066: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:51 compute-0 ceph-mon[76304]: pgmap v3067: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:53 compute-0 ceph-mon[76304]: pgmap v3068: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:53 compute-0 nova_compute[243452]: 2026-02-28 11:05:53.434 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:55 compute-0 ceph-mon[76304]: pgmap v3069: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:57 compute-0 podman[403213]: 2026-02-28 11:05:57.14292801 +0000 UTC m=+0.077230618 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:05:57 compute-0 podman[403212]: 2026-02-28 11:05:57.17858472 +0000 UTC m=+0.114431832 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 28 11:05:57 compute-0 ceph-mon[76304]: pgmap v3070: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:05:57.912 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:05:57.913 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:05:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:05:57.913 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:05:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:58 compute-0 nova_compute[243452]: 2026-02-28 11:05:58.436 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:05:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:05:59 compute-0 ceph-mon[76304]: pgmap v3071: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:05:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:01 compute-0 ceph-mon[76304]: pgmap v3072: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:03 compute-0 ceph-mon[76304]: pgmap v3073: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.438 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:03 compute-0 nova_compute[243452]: 2026-02-28 11:06:03.466 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:05 compute-0 ceph-mon[76304]: pgmap v3074: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:06 compute-0 nova_compute[243452]: 2026-02-28 11:06:06.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:06 compute-0 nova_compute[243452]: 2026-02-28 11:06:06.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:07 compute-0 nova_compute[243452]: 2026-02-28 11:06:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:07 compute-0 ceph-mon[76304]: pgmap v3075: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.467 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.469 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.470 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.470 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.509 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:08 compute-0 nova_compute[243452]: 2026-02-28 11:06:08.510 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:09 compute-0 nova_compute[243452]: 2026-02-28 11:06:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:09 compute-0 nova_compute[243452]: 2026-02-28 11:06:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:09 compute-0 nova_compute[243452]: 2026-02-28 11:06:09.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:06:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:09 compute-0 ceph-mon[76304]: pgmap v3076: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:11 compute-0 ceph-mon[76304]: pgmap v3077: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:12 compute-0 nova_compute[243452]: 2026-02-28 11:06:12.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:12 compute-0 nova_compute[243452]: 2026-02-28 11:06:12.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:06:12 compute-0 nova_compute[243452]: 2026-02-28 11:06:12.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:06:12 compute-0 nova_compute[243452]: 2026-02-28 11:06:12.342 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:06:13 compute-0 ceph-mon[76304]: pgmap v3078: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.511 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.512 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.512 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.513 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.513 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:13 compute-0 nova_compute[243452]: 2026-02-28 11:06:13.514 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:15 compute-0 nova_compute[243452]: 2026-02-28 11:06:15.336 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:15 compute-0 ceph-mon[76304]: pgmap v3079: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.358 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.358 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.359 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.359 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:06:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:06:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2252518195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:06:16 compute-0 nova_compute[243452]: 2026-02-28 11:06:16.887 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.118 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.120 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3560MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.121 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.121 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.227 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.228 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.257 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:06:17 compute-0 ceph-mon[76304]: pgmap v3080: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:17 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2252518195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:06:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:06:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861315788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.857 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.864 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.886 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.889 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:06:17 compute-0 nova_compute[243452]: 2026-02-28 11:06:17.889 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:06:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.515 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.517 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.518 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3861315788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.543 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:18 compute-0 nova_compute[243452]: 2026-02-28 11:06:18.544 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:19 compute-0 ceph-mon[76304]: pgmap v3081: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:19 compute-0 nova_compute[243452]: 2026-02-28 11:06:19.891 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:06:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:21 compute-0 sudo[403300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:21 compute-0 sudo[403300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:21 compute-0 sudo[403300]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:21 compute-0 sudo[403325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:06:21 compute-0 sudo[403325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:21 compute-0 ceph-mon[76304]: pgmap v3082: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:21 compute-0 sudo[403325]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:21 compute-0 sudo[403381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:21 compute-0 sudo[403381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:21 compute-0 sudo[403381]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:21 compute-0 sudo[403406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Feb 28 11:06:21 compute-0 sudo[403406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:22 compute-0 sudo[403406]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:06:22 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:06:22 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:22 compute-0 sudo[403450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:22 compute-0 sudo[403450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:22 compute-0 sudo[403450]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:22 compute-0 sudo[403475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- inventory --format=json-pretty --filter-for-batch
Feb 28 11:06:22 compute-0 sudo[403475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.539569686 +0000 UTC m=+0.043464852 container create f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:06:22 compute-0 systemd[1]: Started libpod-conmon-f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96.scope.
Feb 28 11:06:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.608275622 +0000 UTC m=+0.112170798 container init f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.616815944 +0000 UTC m=+0.120711100 container start f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.619659964 +0000 UTC m=+0.123555140 container attach f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.523702807 +0000 UTC m=+0.027597973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:22 compute-0 pedantic_cannon[403529]: 167 167
Feb 28 11:06:22 compute-0 systemd[1]: libpod-f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96.scope: Deactivated successfully.
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.623303248 +0000 UTC m=+0.127198414 container died f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 11:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb62f6f17734ab9ba6a87522fccec612fc38a57be5055509b93d728e1a9a721-merged.mount: Deactivated successfully.
Feb 28 11:06:22 compute-0 podman[403513]: 2026-02-28 11:06:22.662801996 +0000 UTC m=+0.166697202 container remove f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 11:06:22 compute-0 systemd[1]: libpod-conmon-f9815d40382320efa6a2a7d5835f5ba5a065533af42e180b3a9457a1c2d25d96.scope: Deactivated successfully.
Feb 28 11:06:22 compute-0 podman[403556]: 2026-02-28 11:06:22.854281499 +0000 UTC m=+0.061520303 container create a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:06:22 compute-0 systemd[1]: Started libpod-conmon-a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4.scope.
Feb 28 11:06:22 compute-0 podman[403556]: 2026-02-28 11:06:22.828220381 +0000 UTC m=+0.035459235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:22 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e35dfaef6e32c20d25f837fb473b44c40a9ca1909dc326633080ce1e8d55401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e35dfaef6e32c20d25f837fb473b44c40a9ca1909dc326633080ce1e8d55401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e35dfaef6e32c20d25f837fb473b44c40a9ca1909dc326633080ce1e8d55401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e35dfaef6e32c20d25f837fb473b44c40a9ca1909dc326633080ce1e8d55401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:22 compute-0 podman[403556]: 2026-02-28 11:06:22.956544666 +0000 UTC m=+0.163783530 container init a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 11:06:22 compute-0 podman[403556]: 2026-02-28 11:06:22.969494612 +0000 UTC m=+0.176733416 container start a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 11:06:22 compute-0 podman[403556]: 2026-02-28 11:06:22.973380932 +0000 UTC m=+0.180619726 container attach a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: pgmap v3083: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:23 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:23 compute-0 hungry_diffie[403573]: [
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:     {
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "available": false,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "being_replaced": false,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "ceph_device_lvm": false,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "lsm_data": {},
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "lvs": [],
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "path": "/dev/sr0",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "rejected_reasons": [
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "Has a FileSystem",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "Insufficient space (<5GB)"
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         ],
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         "sys_api": {
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "actuators": null,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "device_nodes": [
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:                 "sr0"
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             ],
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "devname": "sr0",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "human_readable_size": "482.00 KB",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "id_bus": "ata",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "model": "QEMU DVD-ROM",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "nr_requests": "2",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "parent": "/dev/sr0",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "partitions": {},
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "path": "/dev/sr0",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "removable": "1",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "rev": "2.5+",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "ro": "0",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "rotational": "1",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "sas_address": "",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "sas_device_handle": "",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "scheduler_mode": "mq-deadline",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "sectors": 0,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "sectorsize": "2048",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "size": 493568.0,
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "support_discard": "2048",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "type": "disk",
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:             "vendor": "QEMU"
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:         }
Feb 28 11:06:23 compute-0 hungry_diffie[403573]:     }
Feb 28 11:06:23 compute-0 hungry_diffie[403573]: ]
Feb 28 11:06:23 compute-0 systemd[1]: libpod-a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4.scope: Deactivated successfully.
Feb 28 11:06:23 compute-0 podman[403556]: 2026-02-28 11:06:23.585861977 +0000 UTC m=+0.793100771 container died a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:06:23 compute-0 nova_compute[243452]: 2026-02-28 11:06:23.584 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e35dfaef6e32c20d25f837fb473b44c40a9ca1909dc326633080ce1e8d55401-merged.mount: Deactivated successfully.
Feb 28 11:06:23 compute-0 podman[403556]: 2026-02-28 11:06:23.631646274 +0000 UTC m=+0.838885078 container remove a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:06:23 compute-0 systemd[1]: libpod-conmon-a21a50761c25ba6c3dab07b4853cf88914794945b156e6d4abcd79241c64ddf4.scope: Deactivated successfully.
Feb 28 11:06:23 compute-0 sudo[403475]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:06:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:06:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:06:23 compute-0 sudo[404464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:23 compute-0 sudo[404464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:23 compute-0 sudo[404464]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:23 compute-0 sudo[404489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:06:23 compute-0 sudo[404489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.118123412 +0000 UTC m=+0.045487130 container create d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:06:24 compute-0 systemd[1]: Started libpod-conmon-d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c.scope.
Feb 28 11:06:24 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.188472494 +0000 UTC m=+0.115836172 container init d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.093831224 +0000 UTC m=+0.021194992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.195817542 +0000 UTC m=+0.123181250 container start d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.200118064 +0000 UTC m=+0.127481742 container attach d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:06:24 compute-0 suspicious_tu[404543]: 167 167
Feb 28 11:06:24 compute-0 systemd[1]: libpod-d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c.scope: Deactivated successfully.
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.201694818 +0000 UTC m=+0.129058496 container died d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 28 11:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-28dc377768ab4d2d44c7aa9ee01bf402673e4a157e21ccf9ac82cbaee6071c93-merged.mount: Deactivated successfully.
Feb 28 11:06:24 compute-0 podman[404527]: 2026-02-28 11:06:24.247133795 +0000 UTC m=+0.174497523 container remove d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_tu, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:06:24 compute-0 systemd[1]: libpod-conmon-d7823c32bc7f6228c31cdce493a8fd97ebc7640845b1d420f6f36725a5d1636c.scope: Deactivated successfully.
Feb 28 11:06:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:24 compute-0 podman[404567]: 2026-02-28 11:06:24.441750977 +0000 UTC m=+0.057548651 container create c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:06:24 compute-0 systemd[1]: Started libpod-conmon-c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc.scope.
Feb 28 11:06:24 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:24 compute-0 podman[404567]: 2026-02-28 11:06:24.418783027 +0000 UTC m=+0.034580741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:24 compute-0 podman[404567]: 2026-02-28 11:06:24.536203112 +0000 UTC m=+0.152000786 container init c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:06:24 compute-0 podman[404567]: 2026-02-28 11:06:24.544659912 +0000 UTC m=+0.160457586 container start c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:06:24 compute-0 podman[404567]: 2026-02-28 11:06:24.548634214 +0000 UTC m=+0.164431928 container attach c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:06:24 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:06:24 compute-0 inspiring_hopper[404583]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:06:24 compute-0 inspiring_hopper[404583]: --> All data devices are unavailable
Feb 28 11:06:25 compute-0 systemd[1]: libpod-c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc.scope: Deactivated successfully.
Feb 28 11:06:25 compute-0 podman[404603]: 2026-02-28 11:06:25.085957412 +0000 UTC m=+0.031309978 container died c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 11:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f08b68f10172454d84c77cfe1cc479db187592e09345a8ea031ea7ab63f6fac5-merged.mount: Deactivated successfully.
Feb 28 11:06:25 compute-0 podman[404603]: 2026-02-28 11:06:25.127858818 +0000 UTC m=+0.073211354 container remove c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hopper, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:06:25 compute-0 systemd[1]: libpod-conmon-c271f63a1ab29ad49d4d10e28dcbba2ce9c20190d35c5d9fcab90041a021e9cc.scope: Deactivated successfully.
Feb 28 11:06:25 compute-0 sudo[404489]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:25 compute-0 sudo[404619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:25 compute-0 sudo[404619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:25 compute-0 sudo[404619]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:25 compute-0 sudo[404644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:06:25 compute-0 sudo[404644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.629949128 +0000 UTC m=+0.047416394 container create 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:06:25 compute-0 systemd[1]: Started libpod-conmon-38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890.scope.
Feb 28 11:06:25 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.605718302 +0000 UTC m=+0.023185578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.708631237 +0000 UTC m=+0.126098493 container init 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 28 11:06:25 compute-0 ceph-mon[76304]: pgmap v3084: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.71650553 +0000 UTC m=+0.133972766 container start 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:06:25 compute-0 interesting_hellman[404699]: 167 167
Feb 28 11:06:25 compute-0 systemd[1]: libpod-38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890.scope: Deactivated successfully.
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.722843049 +0000 UTC m=+0.140310325 container attach 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.723404005 +0000 UTC m=+0.140871261 container died 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5308be6184da19d97c936a150e7c84ea588f2b3da8b97dba09a6d88e61f55388-merged.mount: Deactivated successfully.
Feb 28 11:06:25 compute-0 podman[404682]: 2026-02-28 11:06:25.760178157 +0000 UTC m=+0.177645423 container remove 38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:06:25 compute-0 systemd[1]: libpod-conmon-38f3867ed6cb6334afa798aeda5de0cfc650f156a16541001d40ff57ea5b9890.scope: Deactivated successfully.
Feb 28 11:06:25 compute-0 podman[404722]: 2026-02-28 11:06:25.952964217 +0000 UTC m=+0.056518522 container create 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:06:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:25 compute-0 systemd[1]: Started libpod-conmon-10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd.scope.
Feb 28 11:06:26 compute-0 auditd[718]: Audit daemon rotating log files
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:25.926892978 +0000 UTC m=+0.030447343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9402a93da8d2b04e951d39376e3ba7ca0b718a7cf56ee7ee8364530936c07bea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9402a93da8d2b04e951d39376e3ba7ca0b718a7cf56ee7ee8364530936c07bea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9402a93da8d2b04e951d39376e3ba7ca0b718a7cf56ee7ee8364530936c07bea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9402a93da8d2b04e951d39376e3ba7ca0b718a7cf56ee7ee8364530936c07bea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:26.049010707 +0000 UTC m=+0.152565022 container init 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:26.063673992 +0000 UTC m=+0.167228307 container start 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:26.067715336 +0000 UTC m=+0.171269651 container attach 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 28 11:06:26 compute-0 peaceful_gould[404738]: {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     "0": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "devices": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "/dev/loop3"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             ],
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_name": "ceph_lv0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_size": "21470642176",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "name": "ceph_lv0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "tags": {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_name": "ceph",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.crush_device_class": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.encrypted": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.objectstore": "bluestore",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_id": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.vdo": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.with_tpm": "0"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             },
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "vg_name": "ceph_vg0"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         }
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     ],
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     "1": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "devices": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "/dev/loop4"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             ],
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_name": "ceph_lv1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_size": "21470642176",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "name": "ceph_lv1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "tags": {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_name": "ceph",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.crush_device_class": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.encrypted": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.objectstore": "bluestore",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_id": "1",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.vdo": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.with_tpm": "0"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             },
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "vg_name": "ceph_vg1"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         }
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     ],
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     "2": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "devices": [
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "/dev/loop5"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             ],
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_name": "ceph_lv2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_size": "21470642176",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "name": "ceph_lv2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "tags": {
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.cluster_name": "ceph",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.crush_device_class": "",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.encrypted": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.objectstore": "bluestore",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osd_id": "2",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.vdo": "0",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:                 "ceph.with_tpm": "0"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             },
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "type": "block",
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:             "vg_name": "ceph_vg2"
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:         }
Feb 28 11:06:26 compute-0 peaceful_gould[404738]:     ]
Feb 28 11:06:26 compute-0 peaceful_gould[404738]: }
Feb 28 11:06:26 compute-0 systemd[1]: libpod-10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd.scope: Deactivated successfully.
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:26.336484848 +0000 UTC m=+0.440039133 container died 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9402a93da8d2b04e951d39376e3ba7ca0b718a7cf56ee7ee8364530936c07bea-merged.mount: Deactivated successfully.
Feb 28 11:06:26 compute-0 podman[404722]: 2026-02-28 11:06:26.384587771 +0000 UTC m=+0.488142056 container remove 10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 11:06:26 compute-0 systemd[1]: libpod-conmon-10c85237d58307fbf0437219b5941106ea381e08dee67a8f38f7de6a26ad92cd.scope: Deactivated successfully.
Feb 28 11:06:26 compute-0 sudo[404644]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:26 compute-0 sudo[404760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:06:26 compute-0 sudo[404760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:26 compute-0 sudo[404760]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:26 compute-0 sudo[404785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:06:26 compute-0 sudo[404785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.78145325 +0000 UTC m=+0.046237790 container create ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 11:06:26 compute-0 systemd[1]: Started libpod-conmon-ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392.scope.
Feb 28 11:06:26 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.767606368 +0000 UTC m=+0.032390938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.86654525 +0000 UTC m=+0.131329830 container init ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.872968172 +0000 UTC m=+0.137752752 container start ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.876291056 +0000 UTC m=+0.141075636 container attach ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 11:06:26 compute-0 quizzical_euler[404840]: 167 167
Feb 28 11:06:26 compute-0 systemd[1]: libpod-ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392.scope: Deactivated successfully.
Feb 28 11:06:26 compute-0 conmon[404840]: conmon ae5bbfedeee14699fd3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392.scope/container/memory.events
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.879686122 +0000 UTC m=+0.144470722 container died ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1367e3907ac2ee4a6e7619922cb6b791df6ee88900ee9edf47aab4f15f005ec7-merged.mount: Deactivated successfully.
Feb 28 11:06:26 compute-0 podman[404824]: 2026-02-28 11:06:26.922916197 +0000 UTC m=+0.187700757 container remove ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:06:26 compute-0 systemd[1]: libpod-conmon-ae5bbfedeee14699fd3df664766e2f55c454b847cc7cabbbe97377f058fd9392.scope: Deactivated successfully.
Feb 28 11:06:27 compute-0 podman[404864]: 2026-02-28 11:06:27.108237704 +0000 UTC m=+0.052407025 container create 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:06:27 compute-0 systemd[1]: Started libpod-conmon-547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7.scope.
Feb 28 11:06:27 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81cf8c7a24a31e02010c67e46ee64a5aecf2417a436c1e8fe35fba8b89bb571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81cf8c7a24a31e02010c67e46ee64a5aecf2417a436c1e8fe35fba8b89bb571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81cf8c7a24a31e02010c67e46ee64a5aecf2417a436c1e8fe35fba8b89bb571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:27 compute-0 podman[404864]: 2026-02-28 11:06:27.086469218 +0000 UTC m=+0.030638609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:06:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81cf8c7a24a31e02010c67e46ee64a5aecf2417a436c1e8fe35fba8b89bb571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:06:27 compute-0 podman[404864]: 2026-02-28 11:06:27.201106965 +0000 UTC m=+0.145276286 container init 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:06:27 compute-0 podman[404864]: 2026-02-28 11:06:27.221135142 +0000 UTC m=+0.165304453 container start 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:06:27 compute-0 podman[404864]: 2026-02-28 11:06:27.225038062 +0000 UTC m=+0.169207393 container attach 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:06:27 compute-0 podman[404885]: 2026-02-28 11:06:27.296421694 +0000 UTC m=+0.094599610 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 11:06:27 compute-0 podman[404884]: 2026-02-28 11:06:27.296756333 +0000 UTC m=+0.093091497 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 28 11:06:27 compute-0 ceph-mon[76304]: pgmap v3085: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:27 compute-0 lvm[405003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:06:27 compute-0 lvm[405003]: VG ceph_vg0 finished
Feb 28 11:06:27 compute-0 lvm[405004]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:06:27 compute-0 lvm[405004]: VG ceph_vg1 finished
Feb 28 11:06:28 compute-0 lvm[405006]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:06:28 compute-0 lvm[405006]: VG ceph_vg2 finished
Feb 28 11:06:28 compute-0 amazing_ride[404881]: {}
Feb 28 11:06:28 compute-0 systemd[1]: libpod-547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7.scope: Deactivated successfully.
Feb 28 11:06:28 compute-0 systemd[1]: libpod-547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7.scope: Consumed 1.414s CPU time.
Feb 28 11:06:28 compute-0 podman[404864]: 2026-02-28 11:06:28.109319466 +0000 UTC m=+1.053488837 container died 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81cf8c7a24a31e02010c67e46ee64a5aecf2417a436c1e8fe35fba8b89bb571-merged.mount: Deactivated successfully.
Feb 28 11:06:28 compute-0 podman[404864]: 2026-02-28 11:06:28.40427444 +0000 UTC m=+1.348443761 container remove 547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ride, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 11:06:28 compute-0 systemd[1]: libpod-conmon-547092c90c00f52eab571cba5294c53b391f8dc1ab90377a6e22788cc96093b7.scope: Deactivated successfully.
Feb 28 11:06:28 compute-0 sudo[404785]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:06:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:06:28 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:28 compute-0 sudo[405023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:06:28 compute-0 sudo[405023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:06:28 compute-0 sudo[405023]: pam_unix(sudo:session): session closed for user root
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.588 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.590 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.591 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.591 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.629 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:28 compute-0 nova_compute[243452]: 2026-02-28 11:06:28.629 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:06:29
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'images', 'default.rgw.meta']
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:06:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:29 compute-0 ceph-mon[76304]: pgmap v3086: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:06:29 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:06:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:06:31 compute-0 ceph-mon[76304]: pgmap v3087: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:31 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:33 compute-0 ceph-mon[76304]: pgmap v3088: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.630 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.632 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.633 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:33 compute-0 nova_compute[243452]: 2026-02-28 11:06:33.634 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:33 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:35 compute-0 ceph-mon[76304]: pgmap v3089: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:35 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:37 compute-0 ceph-mon[76304]: pgmap v3090: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:37 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.636 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.637 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.666 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:38 compute-0 nova_compute[243452]: 2026-02-28 11:06:38.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:39 compute-0 ceph-mon[76304]: pgmap v3091: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:39 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:41 compute-0 ceph-mon[76304]: pgmap v3092: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:06:41 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:43 compute-0 ceph-mon[76304]: pgmap v3093: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:43 compute-0 nova_compute[243452]: 2026-02-28 11:06:43.667 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:43 compute-0 nova_compute[243452]: 2026-02-28 11:06:43.710 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:43 compute-0 nova_compute[243452]: 2026-02-28 11:06:43.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5043 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:43 compute-0 nova_compute[243452]: 2026-02-28 11:06:43.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:43 compute-0 nova_compute[243452]: 2026-02-28 11:06:43.711 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:43 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:45 compute-0 ceph-mon[76304]: pgmap v3094: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:06:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2923982927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:06:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:06:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2923982927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:06:45 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2923982927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:06:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2923982927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:06:47 compute-0 ceph-mon[76304]: pgmap v3095: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:47 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.713 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.715 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.716 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:48 compute-0 nova_compute[243452]: 2026-02-28 11:06:48.717 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:49 compute-0 ceph-mon[76304]: pgmap v3096: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:49 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:51 compute-0 ceph-mon[76304]: pgmap v3097: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:51 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:52 compute-0 ceph-mon[76304]: pgmap v3098: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.718 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.720 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.721 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:53 compute-0 nova_compute[243452]: 2026-02-28 11:06:53.740 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:53 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:55 compute-0 ceph-mon[76304]: pgmap v3099: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:55 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:57 compute-0 ceph-mon[76304]: pgmap v3100: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:06:57.913 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:06:57.913 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:06:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:06:57.913 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:06:57 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:58 compute-0 podman[405049]: 2026-02-28 11:06:58.175766711 +0000 UTC m=+0.101244629 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 11:06:58 compute-0 podman[405048]: 2026-02-28 11:06:58.180258898 +0000 UTC m=+0.108748981 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.741 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.742 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:06:58 compute-0 nova_compute[243452]: 2026-02-28 11:06:58.783 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:06:59 compute-0 ceph-mon[76304]: pgmap v3101: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:06:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:06:59 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:01 compute-0 ceph-mon[76304]: pgmap v3102: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:01 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:03 compute-0 ceph-mon[76304]: pgmap v3103: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:03 compute-0 nova_compute[243452]: 2026-02-28 11:07:03.785 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:03 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.376269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824376325, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 912, "num_deletes": 254, "total_data_size": 1297656, "memory_usage": 1321008, "flush_reason": "Manual Compaction"}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824385440, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1263569, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64801, "largest_seqno": 65712, "table_properties": {"data_size": 1259037, "index_size": 2183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9698, "raw_average_key_size": 19, "raw_value_size": 1249932, "raw_average_value_size": 2460, "num_data_blocks": 98, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276745, "oldest_key_time": 1772276745, "file_creation_time": 1772276824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 9244 microseconds, and 4968 cpu microseconds.
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.385507) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1263569 bytes OK
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.385538) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.387691) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.387718) EVENT_LOG_v1 {"time_micros": 1772276824387709, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.387747) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 1293226, prev total WAL file size 1293226, number of live WAL files 2.
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.388452) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373538' seq:72057594037927935, type:22 .. '6C6F676D0033303039' seq:0, type:0; will stop at (end)
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1233KB)], [152(9643KB)]
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824388508, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11138629, "oldest_snapshot_seqno": -1}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8516 keys, 11027408 bytes, temperature: kUnknown
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824444536, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 11027408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10972746, "index_size": 32277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21317, "raw_key_size": 222771, "raw_average_key_size": 26, "raw_value_size": 10823043, "raw_average_value_size": 1270, "num_data_blocks": 1258, "num_entries": 8516, "num_filter_entries": 8516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.444917) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 11027408 bytes
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.446498) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.4 rd, 196.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.4 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(17.5) write-amplify(8.7) OK, records in: 9036, records dropped: 520 output_compression: NoCompression
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.446531) EVENT_LOG_v1 {"time_micros": 1772276824446514, "job": 94, "event": "compaction_finished", "compaction_time_micros": 56138, "compaction_time_cpu_micros": 35828, "output_level": 6, "num_output_files": 1, "total_output_size": 11027408, "num_input_records": 9036, "num_output_records": 8516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824446967, "job": 94, "event": "table_file_deletion", "file_number": 154}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276824449853, "job": 94, "event": "table_file_deletion", "file_number": 152}
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.388307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.449955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.449965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.449968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.449971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:04.449974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:05 compute-0 ceph-mon[76304]: pgmap v3104: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:05 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:07 compute-0 nova_compute[243452]: 2026-02-28 11:07:07.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:07 compute-0 ceph-mon[76304]: pgmap v3105: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:07 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.788 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.790 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.837 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:08 compute-0 nova_compute[243452]: 2026-02-28 11:07:08.838 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:09 compute-0 nova_compute[243452]: 2026-02-28 11:07:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:09 compute-0 nova_compute[243452]: 2026-02-28 11:07:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:09 compute-0 nova_compute[243452]: 2026-02-28 11:07:09.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:07:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:09 compute-0 ceph-mon[76304]: pgmap v3106: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:09 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:10 compute-0 nova_compute[243452]: 2026-02-28 11:07:10.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:10 compute-0 nova_compute[243452]: 2026-02-28 11:07:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:11 compute-0 ceph-mon[76304]: pgmap v3107: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:11 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:12 compute-0 nova_compute[243452]: 2026-02-28 11:07:12.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:12 compute-0 nova_compute[243452]: 2026-02-28 11:07:12.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:07:12 compute-0 nova_compute[243452]: 2026-02-28 11:07:12.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:07:12 compute-0 nova_compute[243452]: 2026-02-28 11:07:12.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:07:13 compute-0 ceph-mon[76304]: pgmap v3108: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.840 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.842 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:13 compute-0 nova_compute[243452]: 2026-02-28 11:07:13.882 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:13 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.424729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834424770, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 326, "num_deletes": 251, "total_data_size": 153475, "memory_usage": 159800, "flush_reason": "Manual Compaction"}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834428396, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 152339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65713, "largest_seqno": 66038, "table_properties": {"data_size": 150217, "index_size": 285, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5229, "raw_average_key_size": 18, "raw_value_size": 146158, "raw_average_value_size": 514, "num_data_blocks": 13, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276825, "oldest_key_time": 1772276825, "file_creation_time": 1772276834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 3722 microseconds, and 1483 cpu microseconds.
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.428451) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 152339 bytes OK
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.428475) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.430237) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.430253) EVENT_LOG_v1 {"time_micros": 1772276834430248, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.430275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 151206, prev total WAL file size 151206, number of live WAL files 2.
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.430627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(148KB)], [155(10MB)]
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834430818, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11179747, "oldest_snapshot_seqno": -1}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8291 keys, 9439169 bytes, temperature: kUnknown
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834481390, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9439169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9387525, "index_size": 29835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 218781, "raw_average_key_size": 26, "raw_value_size": 9243212, "raw_average_value_size": 1114, "num_data_blocks": 1146, "num_entries": 8291, "num_filter_entries": 8291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772276834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.481905) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9439169 bytes
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.483546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.4 rd, 186.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(135.3) write-amplify(62.0) OK, records in: 8800, records dropped: 509 output_compression: NoCompression
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.483588) EVENT_LOG_v1 {"time_micros": 1772276834483569, "job": 96, "event": "compaction_finished", "compaction_time_micros": 50735, "compaction_time_cpu_micros": 32411, "output_level": 6, "num_output_files": 1, "total_output_size": 9439169, "num_input_records": 8800, "num_output_records": 8291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834484141, "job": 96, "event": "table_file_deletion", "file_number": 157}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772276834486796, "job": 96, "event": "table_file_deletion", "file_number": 155}
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.430523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.486993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.487003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.487006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.487009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:14 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:07:14.487012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:07:15 compute-0 ceph-mon[76304]: pgmap v3109: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:15 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.345 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.346 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.346 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.347 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:07:17 compute-0 ceph-mon[76304]: pgmap v3110: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:07:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3505766472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.852 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.988 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.989 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.989 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:07:17 compute-0 nova_compute[243452]: 2026-02-28 11:07:17.989 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:07:17 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.037 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.038 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.054 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:07:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3505766472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:07:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:07:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2113439994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.603 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.611 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.650 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.654 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.654 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:07:18 compute-0 nova_compute[243452]: 2026-02-28 11:07:18.880 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:19 compute-0 ceph-mon[76304]: pgmap v3111: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2113439994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:07:19 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:21 compute-0 ceph-mon[76304]: pgmap v3112: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:21 compute-0 nova_compute[243452]: 2026-02-28 11:07:21.655 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:21 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:23 compute-0 ceph-mon[76304]: pgmap v3113: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.883 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.885 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:23 compute-0 nova_compute[243452]: 2026-02-28 11:07:23.923 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:23 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:25 compute-0 ceph-mon[76304]: pgmap v3114: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:25 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:27 compute-0 ceph-mon[76304]: pgmap v3115: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:27 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:28 compute-0 sudo[405137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:07:28 compute-0 sudo[405137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:28 compute-0 sudo[405137]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:28 compute-0 sudo[405174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:07:28 compute-0 sudo[405174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:28 compute-0 podman[405162]: 2026-02-28 11:07:28.75286876 +0000 UTC m=+0.071567928 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 28 11:07:28 compute-0 podman[405161]: 2026-02-28 11:07:28.844309099 +0000 UTC m=+0.165644162 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:07:28 compute-0 nova_compute[243452]: 2026-02-28 11:07:28.922 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:28 compute-0 nova_compute[243452]: 2026-02-28 11:07:28.924 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:29 compute-0 sudo[405174]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:07:29
Feb 28 11:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Feb 28 11:07:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:07:29 compute-0 sudo[405262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:07:29 compute-0 sudo[405262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:29 compute-0 sudo[405262]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:29 compute-0 sudo[405287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:07:29 compute-0 sudo[405287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:29 compute-0 ceph-mon[76304]: pgmap v3116: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:07:29 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.697656047 +0000 UTC m=+0.048895886 container create 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:07:29 compute-0 systemd[1]: Started libpod-conmon-245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9.scope.
Feb 28 11:07:29 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.756738711 +0000 UTC m=+0.107978570 container init 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.767033592 +0000 UTC m=+0.118273441 container start 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.771807247 +0000 UTC m=+0.123047086 container attach 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 11:07:29 compute-0 epic_sutherland[405341]: 167 167
Feb 28 11:07:29 compute-0 systemd[1]: libpod-245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9.scope: Deactivated successfully.
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.678194386 +0000 UTC m=+0.029434305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:29 compute-0 conmon[405341]: conmon 245f85705fc461039426 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9.scope/container/memory.events
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.774470143 +0000 UTC m=+0.125709992 container died 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2db24e38017585eeb8230d6d21e514d35ab5572cb7be4e526e26e06bdd4f2b-merged.mount: Deactivated successfully.
Feb 28 11:07:29 compute-0 podman[405324]: 2026-02-28 11:07:29.812674865 +0000 UTC m=+0.163914704 container remove 245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 11:07:29 compute-0 systemd[1]: libpod-conmon-245f85705fc461039426c873c69bab16cfe070c0ee06cdfc5392a91b43cc9ed9.scope: Deactivated successfully.
Feb 28 11:07:29 compute-0 podman[405365]: 2026-02-28 11:07:29.956520619 +0000 UTC m=+0.037151154 container create 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 11:07:29 compute-0 systemd[1]: Started libpod-conmon-51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26.scope.
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:30 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:29.941821752 +0000 UTC m=+0.022452287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:30.042778741 +0000 UTC m=+0.123409306 container init 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:30.049662996 +0000 UTC m=+0.130293551 container start 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:30.05260431 +0000 UTC m=+0.133234925 container attach 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:07:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:07:30 compute-0 thirsty_chandrasekhar[405381]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:07:30 compute-0 thirsty_chandrasekhar[405381]: --> All data devices are unavailable
Feb 28 11:07:30 compute-0 systemd[1]: libpod-51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26.scope: Deactivated successfully.
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:30.475365113 +0000 UTC m=+0.555995688 container died 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 11:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-40e75bad767ce25fa280bab581ebca4b3cc50b77ef2374c0a80a848be2e2eedf-merged.mount: Deactivated successfully.
Feb 28 11:07:30 compute-0 podman[405365]: 2026-02-28 11:07:30.531361119 +0000 UTC m=+0.611991684 container remove 51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_chandrasekhar, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:07:30 compute-0 systemd[1]: libpod-conmon-51ee3d8a0d5360c56b6ab834750779730edbfe6b1a2b0f8b32a68507d413ac26.scope: Deactivated successfully.
Feb 28 11:07:30 compute-0 sudo[405287]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:30 compute-0 sudo[405413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:07:30 compute-0 sudo[405413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:30 compute-0 sudo[405413]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:30 compute-0 sudo[405438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:07:30 compute-0 sudo[405438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.059743093 +0000 UTC m=+0.066979498 container create 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:07:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:07:31 compute-0 systemd[1]: Started libpod-conmon-45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc.scope.
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.032317196 +0000 UTC m=+0.039553661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.162860934 +0000 UTC m=+0.170097339 container init 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.16944537 +0000 UTC m=+0.176681785 container start 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.173836354 +0000 UTC m=+0.181072819 container attach 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 11:07:31 compute-0 stupefied_booth[405492]: 167 167
Feb 28 11:07:31 compute-0 systemd[1]: libpod-45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc.scope: Deactivated successfully.
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.175176852 +0000 UTC m=+0.182413237 container died 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 11:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4539165e26f201992cab8653af4e8f7d91ac6a0ec9e8b2aa8e5462ee23dd53cd-merged.mount: Deactivated successfully.
Feb 28 11:07:31 compute-0 podman[405476]: 2026-02-28 11:07:31.215276348 +0000 UTC m=+0.222512753 container remove 45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 11:07:31 compute-0 systemd[1]: libpod-conmon-45c5241bef855e9ebf0e98fc49be9714a680bb05f89d70da6b7349f653f69bdc.scope: Deactivated successfully.
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.413205643 +0000 UTC m=+0.065085164 container create 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:07:31 compute-0 systemd[1]: Started libpod-conmon-39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95.scope.
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.386038194 +0000 UTC m=+0.037917765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:31 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/690948afbb15f362974e355879fd44ae5995bab34725a894c5e751b248dda113/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/690948afbb15f362974e355879fd44ae5995bab34725a894c5e751b248dda113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/690948afbb15f362974e355879fd44ae5995bab34725a894c5e751b248dda113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/690948afbb15f362974e355879fd44ae5995bab34725a894c5e751b248dda113/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:31 compute-0 ceph-mon[76304]: pgmap v3117: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.528527159 +0000 UTC m=+0.180406750 container init 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.541183317 +0000 UTC m=+0.193062838 container start 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.545215101 +0000 UTC m=+0.197094642 container attach 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]: {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     "0": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "devices": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "/dev/loop3"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             ],
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_name": "ceph_lv0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_size": "21470642176",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "name": "ceph_lv0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "tags": {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_name": "ceph",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.crush_device_class": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.encrypted": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.objectstore": "bluestore",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_id": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.vdo": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.with_tpm": "0"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             },
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "vg_name": "ceph_vg0"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         }
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     ],
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     "1": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "devices": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "/dev/loop4"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             ],
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_name": "ceph_lv1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_size": "21470642176",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "name": "ceph_lv1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "tags": {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_name": "ceph",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.crush_device_class": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.encrypted": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.objectstore": "bluestore",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_id": "1",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.vdo": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.with_tpm": "0"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             },
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "vg_name": "ceph_vg1"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         }
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     ],
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     "2": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "devices": [
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "/dev/loop5"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             ],
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_name": "ceph_lv2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_size": "21470642176",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "name": "ceph_lv2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "tags": {
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.cluster_name": "ceph",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.crush_device_class": "",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.encrypted": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.objectstore": "bluestore",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osd_id": "2",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.vdo": "0",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:                 "ceph.with_tpm": "0"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             },
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "type": "block",
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:             "vg_name": "ceph_vg2"
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:         }
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]:     ]
Feb 28 11:07:31 compute-0 objective_elbakyan[405533]: }
Feb 28 11:07:31 compute-0 systemd[1]: libpod-39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95.scope: Deactivated successfully.
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.842843201 +0000 UTC m=+0.494722692 container died 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-690948afbb15f362974e355879fd44ae5995bab34725a894c5e751b248dda113-merged.mount: Deactivated successfully.
Feb 28 11:07:31 compute-0 podman[405517]: 2026-02-28 11:07:31.891797317 +0000 UTC m=+0.543676828 container remove 39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_elbakyan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 11:07:31 compute-0 systemd[1]: libpod-conmon-39122904ed0902ad746f223ff441eb31fced36193016a0c7b6ebdfdcdfbb1b95.scope: Deactivated successfully.
Feb 28 11:07:31 compute-0 sudo[405438]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:32 compute-0 sudo[405553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:07:32 compute-0 sudo[405553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:32 compute-0 sudo[405553]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:32 compute-0 sudo[405578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:07:32 compute-0 sudo[405578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.385957392 +0000 UTC m=+0.062124391 container create b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:07:32 compute-0 systemd[1]: Started libpod-conmon-b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f.scope.
Feb 28 11:07:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.357965589 +0000 UTC m=+0.034132638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.461619345 +0000 UTC m=+0.137786324 container init b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.470844486 +0000 UTC m=+0.147011485 container start b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.474616613 +0000 UTC m=+0.150783592 container attach b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:07:32 compute-0 practical_keller[405633]: 167 167
Feb 28 11:07:32 compute-0 systemd[1]: libpod-b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f.scope: Deactivated successfully.
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.479888422 +0000 UTC m=+0.156055421 container died b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a67f1dbb2fab5c2838e6b079a89bed3c63a57ab8dc3c9599eb903d39204a476-merged.mount: Deactivated successfully.
Feb 28 11:07:32 compute-0 podman[405617]: 2026-02-28 11:07:32.526609005 +0000 UTC m=+0.202775974 container remove b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 11:07:32 compute-0 systemd[1]: libpod-conmon-b1e1308120d9b4b9cf11098013355b5a5cea41564daf03770026ba5381cb107f.scope: Deactivated successfully.
Feb 28 11:07:32 compute-0 podman[405658]: 2026-02-28 11:07:32.696896448 +0000 UTC m=+0.057407357 container create 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:07:32 compute-0 systemd[1]: Started libpod-conmon-8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84.scope.
Feb 28 11:07:32 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b8495db4f323065f19b4a6f11edc57c852590af513c354a582e97163da9eaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b8495db4f323065f19b4a6f11edc57c852590af513c354a582e97163da9eaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b8495db4f323065f19b4a6f11edc57c852590af513c354a582e97163da9eaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b8495db4f323065f19b4a6f11edc57c852590af513c354a582e97163da9eaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:07:32 compute-0 podman[405658]: 2026-02-28 11:07:32.679480475 +0000 UTC m=+0.039991424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:07:32 compute-0 podman[405658]: 2026-02-28 11:07:32.790589992 +0000 UTC m=+0.151100941 container init 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 11:07:32 compute-0 podman[405658]: 2026-02-28 11:07:32.799918826 +0000 UTC m=+0.160429765 container start 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 11:07:32 compute-0 podman[405658]: 2026-02-28 11:07:32.803492477 +0000 UTC m=+0.164003416 container attach 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 28 11:07:33 compute-0 lvm[405752]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:07:33 compute-0 lvm[405752]: VG ceph_vg1 finished
Feb 28 11:07:33 compute-0 lvm[405753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:07:33 compute-0 lvm[405753]: VG ceph_vg0 finished
Feb 28 11:07:33 compute-0 lvm[405755]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:07:33 compute-0 lvm[405755]: VG ceph_vg2 finished
Feb 28 11:07:33 compute-0 ceph-mon[76304]: pgmap v3118: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:33 compute-0 clever_golick[405674]: {}
Feb 28 11:07:33 compute-0 systemd[1]: libpod-8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84.scope: Deactivated successfully.
Feb 28 11:07:33 compute-0 podman[405658]: 2026-02-28 11:07:33.610387199 +0000 UTC m=+0.970898098 container died 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 11:07:33 compute-0 systemd[1]: libpod-8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84.scope: Consumed 1.095s CPU time.
Feb 28 11:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6b8495db4f323065f19b4a6f11edc57c852590af513c354a582e97163da9eaf-merged.mount: Deactivated successfully.
Feb 28 11:07:33 compute-0 podman[405658]: 2026-02-28 11:07:33.649934579 +0000 UTC m=+1.010445478 container remove 8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 28 11:07:33 compute-0 systemd[1]: libpod-conmon-8e7d6becf181bf4a2f258e6f94b9711ec90f81d9cb007d9698ae1ffb7b99ae84.scope: Deactivated successfully.
Feb 28 11:07:33 compute-0 sudo[405578]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:07:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:33 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:07:33 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:33 compute-0 sudo[405772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:07:33 compute-0 sudo[405772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:07:33 compute-0 sudo[405772]: pam_unix(sudo:session): session closed for user root
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.925 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.928 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.929 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.956 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:33 compute-0 nova_compute[243452]: 2026-02-28 11:07:33.957 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:07:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:34 compute-0 nova_compute[243452]: 2026-02-28 11:07:34.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:34 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:07:35 compute-0 ceph-mon[76304]: pgmap v3119: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:37 compute-0 ceph-mon[76304]: pgmap v3120: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:38 compute-0 nova_compute[243452]: 2026-02-28 11:07:38.958 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:39 compute-0 ceph-mon[76304]: pgmap v3121: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:41 compute-0 ceph-mon[76304]: pgmap v3122: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:07:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:07:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:43 compute-0 ceph-mon[76304]: pgmap v3123: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:43 compute-0 nova_compute[243452]: 2026-02-28 11:07:43.959 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:07:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4063352786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:07:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:07:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4063352786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:07:45 compute-0 ceph-mon[76304]: pgmap v3124: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4063352786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:07:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/4063352786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:07:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:46 compute-0 nova_compute[243452]: 2026-02-28 11:07:46.327 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:46 compute-0 nova_compute[243452]: 2026-02-28 11:07:46.328 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 11:07:47 compute-0 ceph-mon[76304]: pgmap v3125: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:48 compute-0 nova_compute[243452]: 2026-02-28 11:07:48.960 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:48 compute-0 nova_compute[243452]: 2026-02-28 11:07:48.961 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:49 compute-0 ceph-mon[76304]: pgmap v3126: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:50 compute-0 nova_compute[243452]: 2026-02-28 11:07:50.463 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:07:50 compute-0 nova_compute[243452]: 2026-02-28 11:07:50.464 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 11:07:50 compute-0 nova_compute[243452]: 2026-02-28 11:07:50.660 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 11:07:51 compute-0 ceph-mon[76304]: pgmap v3127: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:53 compute-0 ceph-mon[76304]: pgmap v3128: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:53 compute-0 nova_compute[243452]: 2026-02-28 11:07:53.962 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:55 compute-0 ceph-mon[76304]: pgmap v3129: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:57 compute-0 ceph-mon[76304]: pgmap v3130: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:07:57.914 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:07:57.916 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:07:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:07:57.916 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:07:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:07:58 compute-0 nova_compute[243452]: 2026-02-28 11:07:58.964 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:07:59 compute-0 podman[405798]: 2026-02-28 11:07:59.118434709 +0000 UTC m=+0.054145054 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 28 11:07:59 compute-0 podman[405797]: 2026-02-28 11:07:59.166096509 +0000 UTC m=+0.100528268 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:07:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:07:59 compute-0 ceph-mon[76304]: pgmap v3131: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:01 compute-0 ceph-mon[76304]: pgmap v3132: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:02 compute-0 ceph-mon[76304]: pgmap v3133: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.966 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.968 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.986 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:03 compute-0 nova_compute[243452]: 2026-02-28 11:08:03.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:05 compute-0 ceph-mon[76304]: pgmap v3134: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:07 compute-0 ceph-mon[76304]: pgmap v3135: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:08 compute-0 nova_compute[243452]: 2026-02-28 11:08:08.512 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:08 compute-0 nova_compute[243452]: 2026-02-28 11:08:08.513 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:08 compute-0 nova_compute[243452]: 2026-02-28 11:08:08.987 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:09 compute-0 ceph-mon[76304]: pgmap v3136: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:09 compute-0 nova_compute[243452]: 2026-02-28 11:08:09.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:09 compute-0 nova_compute[243452]: 2026-02-28 11:08:09.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:08:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:11 compute-0 ceph-mon[76304]: pgmap v3137: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:11 compute-0 nova_compute[243452]: 2026-02-28 11:08:11.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:11 compute-0 nova_compute[243452]: 2026-02-28 11:08:11.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:12 compute-0 nova_compute[243452]: 2026-02-28 11:08:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:12 compute-0 sshd-session[405842]: Invalid user sol from 45.148.10.240 port 57084
Feb 28 11:08:13 compute-0 ceph-mon[76304]: pgmap v3138: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:13 compute-0 sshd-session[405842]: Connection closed by invalid user sol 45.148.10.240 port 57084 [preauth]
Feb 28 11:08:13 compute-0 nova_compute[243452]: 2026-02-28 11:08:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:13 compute-0 nova_compute[243452]: 2026-02-28 11:08:13.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:08:13 compute-0 nova_compute[243452]: 2026-02-28 11:08:13.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:08:13 compute-0 nova_compute[243452]: 2026-02-28 11:08:13.385 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:08:13 compute-0 nova_compute[243452]: 2026-02-28 11:08:13.991 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:15 compute-0 ceph-mon[76304]: pgmap v3139: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:17 compute-0 ceph-mon[76304]: pgmap v3140: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.501 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.501 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.502 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.502 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:08:17 compute-0 nova_compute[243452]: 2026-02-28 11:08:17.503 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:08:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:08:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326580606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.068 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:08:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3326580606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.253 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.255 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.255 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.256 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.476 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.476 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.493 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:08:18 compute-0 nova_compute[243452]: 2026-02-28 11:08:18.993 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:08:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578744710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:08:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:08:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:08:19 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:08:19 compute-0 nova_compute[243452]: 2026-02-28 11:08:19.037 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:08:19 compute-0 nova_compute[243452]: 2026-02-28 11:08:19.043 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:08:19 compute-0 ceph-mon[76304]: pgmap v3141: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1578744710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:08:19 compute-0 nova_compute[243452]: 2026-02-28 11:08:19.199 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:08:19 compute-0 nova_compute[243452]: 2026-02-28 11:08:19.200 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:08:19 compute-0 nova_compute[243452]: 2026-02-28 11:08:19.200 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:08:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:21 compute-0 ceph-mon[76304]: pgmap v3142: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:21 compute-0 nova_compute[243452]: 2026-02-28 11:08:21.197 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:22 compute-0 nova_compute[243452]: 2026-02-28 11:08:22.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:08:23 compute-0 ceph-mon[76304]: pgmap v3143: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:23 compute-0 nova_compute[243452]: 2026-02-28 11:08:23.995 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:25 compute-0 ceph-mon[76304]: pgmap v3144: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:26 compute-0 nova_compute[243452]: 2026-02-28 11:08:26.952 243456 DEBUG oslo_concurrency.processutils [None req-470c4b4d-3d3a-4787-8bae-7836b82ef63f 3c965705ef9f4f29b279dd463fa4cd3e eb0c6377c53644999dac7645ece0c360 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:08:26 compute-0 nova_compute[243452]: 2026-02-28 11:08:26.993 243456 DEBUG oslo_concurrency.processutils [None req-470c4b4d-3d3a-4787-8bae-7836b82ef63f 3c965705ef9f4f29b279dd463fa4cd3e eb0c6377c53644999dac7645ece0c360 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:08:27 compute-0 ceph-mon[76304]: pgmap v3145: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:28 compute-0 nova_compute[243452]: 2026-02-28 11:08:28.998 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:28 compute-0 nova_compute[243452]: 2026-02-28 11:08:28.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:29 compute-0 nova_compute[243452]: 2026-02-28 11:08:28.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:08:29 compute-0 nova_compute[243452]: 2026-02-28 11:08:28.999 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:29 compute-0 nova_compute[243452]: 2026-02-28 11:08:29.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:29 compute-0 nova_compute[243452]: 2026-02-28 11:08:29.002 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:29 compute-0 ceph-mon[76304]: pgmap v3146: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:08:29
Feb 28 11:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'images']
Feb 28 11:08:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:08:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:30 compute-0 podman[405891]: 2026-02-28 11:08:30.170779587 +0000 UTC m=+0.102169284 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 28 11:08:30 compute-0 podman[405890]: 2026-02-28 11:08:30.184723852 +0000 UTC m=+0.118903208 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, 
org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:08:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:08:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:08:31 compute-0 ceph-mon[76304]: pgmap v3147: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:32.500 156681 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '06:61:90', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '72:6c:78:58:a9:d7'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 28 11:08:32 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:32.501 156681 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 28 11:08:32 compute-0 nova_compute[243452]: 2026-02-28 11:08:32.536 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:33 compute-0 ceph-mon[76304]: pgmap v3148: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:33 compute-0 sudo[405935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:08:33 compute-0 sudo[405935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:33 compute-0 sudo[405935]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:33 compute-0 sudo[405960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:08:33 compute-0 sudo[405960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:34 compute-0 nova_compute[243452]: 2026-02-28 11:08:34.000 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:34 compute-0 sudo[405960]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:08:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:08:34 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:08:34 compute-0 sudo[406015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:08:34 compute-0 sudo[406015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:34 compute-0 sudo[406015]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:34 compute-0 sudo[406040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:08:34 compute-0 sudo[406040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:34 compute-0 podman[406078]: 2026-02-28 11:08:34.947930332 +0000 UTC m=+0.036599808 container create 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:08:34 compute-0 systemd[1]: Started libpod-conmon-0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b.scope.
Feb 28 11:08:34 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:35.002821397 +0000 UTC m=+0.091490853 container init 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:35.007200431 +0000 UTC m=+0.095869887 container start 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:35.010160805 +0000 UTC m=+0.098830281 container attach 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:08:35 compute-0 wonderful_hertz[406094]: 167 167
Feb 28 11:08:35 compute-0 systemd[1]: libpod-0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b.scope: Deactivated successfully.
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:35.011797141 +0000 UTC m=+0.100466607 container died 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:34.93196 +0000 UTC m=+0.020629446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1d355dd507a550ac096e4a44184c55bc0f9590cc0d21ac7034823a454127ca-merged.mount: Deactivated successfully.
Feb 28 11:08:35 compute-0 podman[406078]: 2026-02-28 11:08:35.04708462 +0000 UTC m=+0.135754076 container remove 0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hertz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 11:08:35 compute-0 systemd[1]: libpod-conmon-0a4058229b83e689ae882dad11ae3de9e6e32c7420d9428e080c3eb755dc823b.scope: Deactivated successfully.
Feb 28 11:08:35 compute-0 podman[406118]: 2026-02-28 11:08:35.163740924 +0000 UTC m=+0.035722173 container create 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:08:35 compute-0 systemd[1]: Started libpod-conmon-2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91.scope.
Feb 28 11:08:35 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:35 compute-0 ceph-mon[76304]: pgmap v3149: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:08:35 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:08:35 compute-0 podman[406118]: 2026-02-28 11:08:35.232028738 +0000 UTC m=+0.104009997 container init 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:08:35 compute-0 podman[406118]: 2026-02-28 11:08:35.239085558 +0000 UTC m=+0.111066817 container start 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 28 11:08:35 compute-0 podman[406118]: 2026-02-28 11:08:35.146876766 +0000 UTC m=+0.018858055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:35 compute-0 podman[406118]: 2026-02-28 11:08:35.242661529 +0000 UTC m=+0.114642788 container attach 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:08:35 compute-0 funny_brahmagupta[406134]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:08:35 compute-0 funny_brahmagupta[406134]: --> All data devices are unavailable
Feb 28 11:08:35 compute-0 systemd[1]: libpod-2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91.scope: Deactivated successfully.
Feb 28 11:08:35 compute-0 podman[406154]: 2026-02-28 11:08:35.725954897 +0000 UTC m=+0.027320695 container died 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 11:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-da03a9866e435ba3563c5fb75230a0d945dde6c1bf45913001737f1ee1285360-merged.mount: Deactivated successfully.
Feb 28 11:08:35 compute-0 podman[406154]: 2026-02-28 11:08:35.769261593 +0000 UTC m=+0.070627301 container remove 2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 11:08:35 compute-0 systemd[1]: libpod-conmon-2b48afc7b5aad7193cc87110cbec1798af3fb6e1bf22b123dc3124804a715e91.scope: Deactivated successfully.
Feb 28 11:08:35 compute-0 sudo[406040]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:35 compute-0 sudo[406169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:08:35 compute-0 sudo[406169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:35 compute-0 sudo[406169]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:35 compute-0 sudo[406194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:08:35 compute-0 sudo[406194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.258183419 +0000 UTC m=+0.059291220 container create 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 11:08:36 compute-0 systemd[1]: Started libpod-conmon-5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6.scope.
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.233430748 +0000 UTC m=+0.034538609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.343117504 +0000 UTC m=+0.144225295 container init 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.350991417 +0000 UTC m=+0.152099188 container start 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.354202208 +0000 UTC m=+0.155309979 container attach 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:08:36 compute-0 exciting_bose[406249]: 167 167
Feb 28 11:08:36 compute-0 systemd[1]: libpod-5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6.scope: Deactivated successfully.
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.356189355 +0000 UTC m=+0.157297136 container died 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ada20bfff6d07b6337b1f9410300756ac2642a9cf7f83b70d53a7fea5571095-merged.mount: Deactivated successfully.
Feb 28 11:08:36 compute-0 podman[406233]: 2026-02-28 11:08:36.390526067 +0000 UTC m=+0.191633838 container remove 5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:08:36 compute-0 systemd[1]: libpod-conmon-5089c15a30429dadd7bcd0eaa2edba2a832e276054b96a0f6bcb16b4acd3d9d6.scope: Deactivated successfully.
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.572199172 +0000 UTC m=+0.048018311 container create 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 28 11:08:36 compute-0 systemd[1]: Started libpod-conmon-4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3.scope.
Feb 28 11:08:36 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9800ed332274b21dabfec30904b99b9179e10883d50347e452bb9a43a5f3ab39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9800ed332274b21dabfec30904b99b9179e10883d50347e452bb9a43a5f3ab39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9800ed332274b21dabfec30904b99b9179e10883d50347e452bb9a43a5f3ab39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9800ed332274b21dabfec30904b99b9179e10883d50347e452bb9a43a5f3ab39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.556047025 +0000 UTC m=+0.031866174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.657746685 +0000 UTC m=+0.133565834 container init 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.672815252 +0000 UTC m=+0.148634391 container start 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.677595637 +0000 UTC m=+0.153414786 container attach 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]: {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     "0": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "devices": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "/dev/loop3"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             ],
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_name": "ceph_lv0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_size": "21470642176",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "name": "ceph_lv0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "tags": {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_name": "ceph",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.crush_device_class": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.encrypted": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.objectstore": "bluestore",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_id": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.vdo": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.with_tpm": "0"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             },
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "vg_name": "ceph_vg0"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         }
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     ],
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     "1": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "devices": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "/dev/loop4"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             ],
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_name": "ceph_lv1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_size": "21470642176",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "name": "ceph_lv1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "tags": {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_name": "ceph",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.crush_device_class": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.encrypted": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.objectstore": "bluestore",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_id": "1",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.vdo": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.with_tpm": "0"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             },
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "vg_name": "ceph_vg1"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         }
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     ],
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     "2": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "devices": [
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "/dev/loop5"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             ],
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_name": "ceph_lv2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_size": "21470642176",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "name": "ceph_lv2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "tags": {
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.cluster_name": "ceph",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.crush_device_class": "",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.encrypted": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.objectstore": "bluestore",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osd_id": "2",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.vdo": "0",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:                 "ceph.with_tpm": "0"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             },
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "type": "block",
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:             "vg_name": "ceph_vg2"
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:         }
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]:     ]
Feb 28 11:08:36 compute-0 boring_visvesvaraya[406290]: }
Feb 28 11:08:36 compute-0 systemd[1]: libpod-4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3.scope: Deactivated successfully.
Feb 28 11:08:36 compute-0 conmon[406290]: conmon 4903408d1c55442d8c89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3.scope/container/memory.events
Feb 28 11:08:36 compute-0 podman[406273]: 2026-02-28 11:08:36.95805165 +0000 UTC m=+0.433870789 container died 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9800ed332274b21dabfec30904b99b9179e10883d50347e452bb9a43a5f3ab39-merged.mount: Deactivated successfully.
Feb 28 11:08:37 compute-0 podman[406273]: 2026-02-28 11:08:37.003708843 +0000 UTC m=+0.479527932 container remove 4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 28 11:08:37 compute-0 systemd[1]: libpod-conmon-4903408d1c55442d8c89d67675120114549c01def9743d34b6b2bbb236c4cde3.scope: Deactivated successfully.
Feb 28 11:08:37 compute-0 sudo[406194]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:37 compute-0 sudo[406313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:08:37 compute-0 sudo[406313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:37 compute-0 sudo[406313]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:37 compute-0 sudo[406338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:08:37 compute-0 sudo[406338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:37 compute-0 ceph-mon[76304]: pgmap v3150: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.519554722 +0000 UTC m=+0.055559014 container create f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 28 11:08:37 compute-0 systemd[1]: Started libpod-conmon-f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228.scope.
Feb 28 11:08:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.495930803 +0000 UTC m=+0.031935195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.597928772 +0000 UTC m=+0.133933064 container init f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.605748174 +0000 UTC m=+0.141752466 container start f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.608155782 +0000 UTC m=+0.144160074 container attach f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:08:37 compute-0 magical_yalow[406393]: 167 167
Feb 28 11:08:37 compute-0 systemd[1]: libpod-f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228.scope: Deactivated successfully.
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.61268739 +0000 UTC m=+0.148691722 container died f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f544fa54f3bd61c5ac5318de5d7bf336d8dfd2f859288233af95d6bc89ee6379-merged.mount: Deactivated successfully.
Feb 28 11:08:37 compute-0 podman[406376]: 2026-02-28 11:08:37.648895335 +0000 UTC m=+0.184899627 container remove f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:08:37 compute-0 systemd[1]: libpod-conmon-f5f8fb1867db25229d3ed14b49caf68bece1f022270dddb679fca19e53ad5228.scope: Deactivated successfully.
Feb 28 11:08:37 compute-0 podman[406416]: 2026-02-28 11:08:37.809277878 +0000 UTC m=+0.056668376 container create 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 28 11:08:37 compute-0 systemd[1]: Started libpod-conmon-2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea.scope.
Feb 28 11:08:37 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbacc7e62b722edc9c1aa8be8d265109be2d134b1a3cb57a6f416a4de597b1fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbacc7e62b722edc9c1aa8be8d265109be2d134b1a3cb57a6f416a4de597b1fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:37 compute-0 podman[406416]: 2026-02-28 11:08:37.789456196 +0000 UTC m=+0.036846654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbacc7e62b722edc9c1aa8be8d265109be2d134b1a3cb57a6f416a4de597b1fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbacc7e62b722edc9c1aa8be8d265109be2d134b1a3cb57a6f416a4de597b1fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:08:37 compute-0 podman[406416]: 2026-02-28 11:08:37.919789438 +0000 UTC m=+0.167179916 container init 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:08:37 compute-0 podman[406416]: 2026-02-28 11:08:37.928236537 +0000 UTC m=+0.175627025 container start 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:08:37 compute-0 podman[406416]: 2026-02-28 11:08:37.931802648 +0000 UTC m=+0.179193126 container attach 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:08:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:38 compute-0 lvm[406513]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:08:38 compute-0 lvm[406513]: VG ceph_vg2 finished
Feb 28 11:08:38 compute-0 lvm[406510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:08:38 compute-0 lvm[406510]: VG ceph_vg0 finished
Feb 28 11:08:38 compute-0 lvm[406514]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:08:38 compute-0 lvm[406514]: VG ceph_vg1 finished
Feb 28 11:08:38 compute-0 compassionate_leakey[406433]: {}
Feb 28 11:08:38 compute-0 systemd[1]: libpod-2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea.scope: Deactivated successfully.
Feb 28 11:08:38 compute-0 systemd[1]: libpod-2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea.scope: Consumed 1.253s CPU time.
Feb 28 11:08:38 compute-0 podman[406416]: 2026-02-28 11:08:38.799102321 +0000 UTC m=+1.046492819 container died 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbacc7e62b722edc9c1aa8be8d265109be2d134b1a3cb57a6f416a4de597b1fc-merged.mount: Deactivated successfully.
Feb 28 11:08:38 compute-0 podman[406416]: 2026-02-28 11:08:38.839205106 +0000 UTC m=+1.086595574 container remove 2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:08:38 compute-0 systemd[1]: libpod-conmon-2f095b122af0c87fed84efc682fb6be6cef9c1e52ab55ec384003e89332dd5ea.scope: Deactivated successfully.
Feb 28 11:08:38 compute-0 sudo[406338]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:08:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:38 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:08:38 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:39 compute-0 nova_compute[243452]: 2026-02-28 11:08:39.018 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:39 compute-0 nova_compute[243452]: 2026-02-28 11:08:39.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:39 compute-0 sudo[406527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:08:39 compute-0 sudo[406527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:08:39 compute-0 sudo[406527]: pam_unix(sudo:session): session closed for user root
Feb 28 11:08:39 compute-0 ceph-mon[76304]: pgmap v3151: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:39 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:08:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:40 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:40.503 156681 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cdfa85e0-fba9-4bed-b591-423dd0221b71, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 28 11:08:41 compute-0 ceph-mon[76304]: pgmap v3152: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:08:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:08:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:43 compute-0 ceph-mon[76304]: pgmap v3153: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.021 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.023 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.024 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.053 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:44 compute-0 nova_compute[243452]: 2026-02-28 11:08:44.054 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:08:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:45 compute-0 ceph-mon[76304]: pgmap v3154: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:08:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818201547' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:08:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:08:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818201547' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:08:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/818201547' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:08:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/818201547' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:08:47 compute-0 ceph-mon[76304]: pgmap v3155: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:49 compute-0 nova_compute[243452]: 2026-02-28 11:08:49.055 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:49 compute-0 ceph-mon[76304]: pgmap v3156: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:51 compute-0 ceph-mon[76304]: pgmap v3157: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:53 compute-0 ceph-mon[76304]: pgmap v3158: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:54 compute-0 nova_compute[243452]: 2026-02-28 11:08:54.056 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:08:55 compute-0 ceph-mon[76304]: pgmap v3159: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:57 compute-0 ceph-mon[76304]: pgmap v3160: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:57.915 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:57.915 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:08:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:08:57.916 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:08:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:59 compute-0 nova_compute[243452]: 2026-02-28 11:08:59.059 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:08:59 compute-0 ceph-mon[76304]: pgmap v3161: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:08:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:01 compute-0 podman[406553]: 2026-02-28 11:09:01.160126201 +0000 UTC m=+0.080721787 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 28 11:09:01 compute-0 podman[406552]: 2026-02-28 11:09:01.25014635 +0000 UTC m=+0.173360930 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 28 11:09:01 compute-0 ceph-mon[76304]: pgmap v3162: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:02 compute-0 rsyslogd[1017]: imjournal: 15122 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 28 11:09:03 compute-0 ceph-mon[76304]: pgmap v3163: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:04 compute-0 nova_compute[243452]: 2026-02-28 11:09:04.061 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:05 compute-0 ceph-mon[76304]: pgmap v3164: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:07 compute-0 ceph-mon[76304]: pgmap v3165: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:08 compute-0 nova_compute[243452]: 2026-02-28 11:09:08.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:09 compute-0 nova_compute[243452]: 2026-02-28 11:09:09.064 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:09 compute-0 ceph-mon[76304]: pgmap v3166: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:10 compute-0 nova_compute[243452]: 2026-02-28 11:09:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:10 compute-0 nova_compute[243452]: 2026-02-28 11:09:10.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:10 compute-0 nova_compute[243452]: 2026-02-28 11:09:10.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:09:11 compute-0 nova_compute[243452]: 2026-02-28 11:09:11.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:11 compute-0 ceph-mon[76304]: pgmap v3167: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:12 compute-0 nova_compute[243452]: 2026-02-28 11:09:12.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:13 compute-0 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:13 compute-0 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:09:13 compute-0 nova_compute[243452]: 2026-02-28 11:09:13.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:09:13 compute-0 nova_compute[243452]: 2026-02-28 11:09:13.347 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:09:13 compute-0 ceph-mon[76304]: pgmap v3168: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.066 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.068 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.092 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:09:14 compute-0 nova_compute[243452]: 2026-02-28 11:09:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:15 compute-0 ceph-mon[76304]: pgmap v3169: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:17 compute-0 ceph-mon[76304]: pgmap v3170: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.093 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.095 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.341 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.342 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.342 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:09:19 compute-0 ceph-mon[76304]: pgmap v3171: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:09:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583978062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:09:19 compute-0 nova_compute[243452]: 2026-02-28 11:09:19.908 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.027 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.029 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:09:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.117 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.117 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.133 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:09:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1583978062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:09:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:09:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573631554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.656 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.662 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.679 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.682 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:09:20 compute-0 nova_compute[243452]: 2026-02-28 11:09:20.682 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:09:21 compute-0 ceph-mon[76304]: pgmap v3172: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/573631554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:09:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:23 compute-0 ceph-mon[76304]: pgmap v3173: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:24 compute-0 nova_compute[243452]: 2026-02-28 11:09:24.096 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:24 compute-0 nova_compute[243452]: 2026-02-28 11:09:24.098 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:25 compute-0 ceph-mon[76304]: pgmap v3174: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:25 compute-0 nova_compute[243452]: 2026-02-28 11:09:25.683 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:09:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:27 compute-0 ceph-mon[76304]: pgmap v3175: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:29 compute-0 nova_compute[243452]: 2026-02-28 11:09:29.099 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:29 compute-0 nova_compute[243452]: 2026-02-28 11:09:29.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:09:29
Feb 28 11:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta']
Feb 28 11:09:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:09:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:29 compute-0 ceph-mon[76304]: pgmap v3176: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:09:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:09:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:09:31 compute-0 ceph-mon[76304]: pgmap v3177: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:32 compute-0 podman[406642]: 2026-02-28 11:09:32.149176406 +0000 UTC m=+0.073138473 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:09:32 compute-0 podman[406641]: 2026-02-28 11:09:32.232443204 +0000 UTC m=+0.155629329 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 28 11:09:33 compute-0 ceph-mon[76304]: pgmap v3178: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:34 compute-0 nova_compute[243452]: 2026-02-28 11:09:34.101 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:34 compute-0 nova_compute[243452]: 2026-02-28 11:09:34.102 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:34 compute-0 sshd-session[406687]: error: kex_exchange_identification: read: Connection reset by peer
Feb 28 11:09:34 compute-0 sshd-session[406687]: Connection reset by 176.120.22.52 port 33011
Feb 28 11:09:35 compute-0 ceph-mon[76304]: pgmap v3179: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:37 compute-0 ceph-mon[76304]: pgmap v3180: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.103 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.104 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.105 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:09:39 compute-0 nova_compute[243452]: 2026-02-28 11:09:39.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:09:39 compute-0 sudo[406688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:09:39 compute-0 sudo[406688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:39 compute-0 sudo[406688]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:39 compute-0 sudo[406713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:09:39 compute-0 sudo[406713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:39 compute-0 ceph-mon[76304]: pgmap v3181: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:39 compute-0 sudo[406713]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:09:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:09:39 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:09:39 compute-0 sudo[406769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:09:39 compute-0 sudo[406769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:39 compute-0 sudo[406769]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:39 compute-0 sudo[406794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:09:39 compute-0 sudo[406794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.136695607 +0000 UTC m=+0.062131811 container create a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:40 compute-0 systemd[1]: Started libpod-conmon-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope.
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.109562088 +0000 UTC m=+0.034998332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.240660101 +0000 UTC m=+0.166096285 container init a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.251324313 +0000 UTC m=+0.176760507 container start a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.255250344 +0000 UTC m=+0.180686518 container attach a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 11:09:40 compute-0 adoring_carson[406847]: 167 167
Feb 28 11:09:40 compute-0 systemd[1]: libpod-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope: Deactivated successfully.
Feb 28 11:09:40 compute-0 conmon[406847]: conmon a0790a1af8bf9098ebb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope/container/memory.events
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.260895344 +0000 UTC m=+0.186331538 container died a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca8df98f9ae806cadfccb57773c30f811c7e392ba124bce7eb6abcd472d4696-merged.mount: Deactivated successfully.
Feb 28 11:09:40 compute-0 podman[406831]: 2026-02-28 11:09:40.306925588 +0000 UTC m=+0.232361782 container remove a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:40 compute-0 systemd[1]: libpod-conmon-a0790a1af8bf9098ebb032c236f7c307917bf636abc26139ab12d29dab9bccaf.scope: Deactivated successfully.
Feb 28 11:09:40 compute-0 podman[406871]: 2026-02-28 11:09:40.491670759 +0000 UTC m=+0.060628778 container create e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:09:40 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:09:40 compute-0 systemd[1]: Started libpod-conmon-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope.
Feb 28 11:09:40 compute-0 podman[406871]: 2026-02-28 11:09:40.468468882 +0000 UTC m=+0.037426941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:40 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:40 compute-0 podman[406871]: 2026-02-28 11:09:40.603446725 +0000 UTC m=+0.172404794 container init e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 28 11:09:40 compute-0 podman[406871]: 2026-02-28 11:09:40.614979911 +0000 UTC m=+0.183937930 container start e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 11:09:40 compute-0 podman[406871]: 2026-02-28 11:09:40.618903802 +0000 UTC m=+0.187861871 container attach e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 11:09:41 compute-0 pensive_merkle[406888]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:09:41 compute-0 pensive_merkle[406888]: --> All data devices are unavailable
Feb 28 11:09:41 compute-0 systemd[1]: libpod-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope: Deactivated successfully.
Feb 28 11:09:41 compute-0 podman[406871]: 2026-02-28 11:09:41.110541176 +0000 UTC m=+0.679499165 container died e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 28 11:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-408191266e4e74013d1e7b4817671ba3ed23f8899617dd2da33e77890028a47a-merged.mount: Deactivated successfully.
Feb 28 11:09:41 compute-0 podman[406871]: 2026-02-28 11:09:41.159148493 +0000 UTC m=+0.728106492 container remove e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 28 11:09:41 compute-0 systemd[1]: libpod-conmon-e05c78270ac99a34b49afc539a9b1c71cac8af91e173b2db97ea15f66bd204b6.scope: Deactivated successfully.
Feb 28 11:09:41 compute-0 sudo[406794]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:41 compute-0 sudo[406922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:09:41 compute-0 sudo[406922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:41 compute-0 sudo[406922]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:41 compute-0 sudo[406947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:09:41 compute-0 sudo[406947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.624831391 +0000 UTC m=+0.053179247 container create 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 11:09:41 compute-0 systemd[1]: Started libpod-conmon-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope.
Feb 28 11:09:41 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.606133802 +0000 UTC m=+0.034481668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.710130227 +0000 UTC m=+0.138478103 container init 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.720798269 +0000 UTC m=+0.149146125 container start 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.724942537 +0000 UTC m=+0.153290403 container attach 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 11:09:41 compute-0 xenodochial_jang[407001]: 167 167
Feb 28 11:09:41 compute-0 systemd[1]: libpod-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope: Deactivated successfully.
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.72824964 +0000 UTC m=+0.156597556 container died 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb7ee5ba6114939149860c3e4ab0e5e191bc3dcea6d02bed8d9f83bcc6f6215-merged.mount: Deactivated successfully.
Feb 28 11:09:41 compute-0 ceph-mon[76304]: pgmap v3182: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:41 compute-0 podman[406985]: 2026-02-28 11:09:41.771170796 +0000 UTC m=+0.199518672 container remove 55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_jang, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:41 compute-0 systemd[1]: libpod-conmon-55a54f941e073b58b122fa8ab50c947c4b2c380fbff0bd03c16bdeb4b7f9ca27.scope: Deactivated successfully.
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:09:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:09:41 compute-0 podman[407024]: 2026-02-28 11:09:41.940561213 +0000 UTC m=+0.047290410 container create d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:09:41 compute-0 systemd[1]: Started libpod-conmon-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope.
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:41.918850508 +0000 UTC m=+0.025579785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:42 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:42.046345319 +0000 UTC m=+0.153074546 container init d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:42.056435985 +0000 UTC m=+0.163165172 container start d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:42.060959853 +0000 UTC m=+0.167689090 container attach d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 28 11:09:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:42 compute-0 serene_ganguly[407041]: {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     "0": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "devices": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "/dev/loop3"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             ],
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_name": "ceph_lv0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_size": "21470642176",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "name": "ceph_lv0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "tags": {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_name": "ceph",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.crush_device_class": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.encrypted": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.objectstore": "bluestore",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_id": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.vdo": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.with_tpm": "0"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             },
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "vg_name": "ceph_vg0"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         }
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     ],
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     "1": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "devices": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "/dev/loop4"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             ],
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_name": "ceph_lv1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_size": "21470642176",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "name": "ceph_lv1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "tags": {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_name": "ceph",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.crush_device_class": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.encrypted": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.objectstore": "bluestore",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_id": "1",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.vdo": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.with_tpm": "0"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             },
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "vg_name": "ceph_vg1"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         }
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     ],
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     "2": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "devices": [
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "/dev/loop5"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             ],
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_name": "ceph_lv2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_size": "21470642176",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "name": "ceph_lv2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "tags": {
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.cluster_name": "ceph",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.crush_device_class": "",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.encrypted": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.objectstore": "bluestore",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osd_id": "2",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.vdo": "0",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:                 "ceph.with_tpm": "0"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             },
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "type": "block",
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:             "vg_name": "ceph_vg2"
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:         }
Feb 28 11:09:42 compute-0 serene_ganguly[407041]:     ]
Feb 28 11:09:42 compute-0 serene_ganguly[407041]: }
Feb 28 11:09:42 compute-0 systemd[1]: libpod-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope: Deactivated successfully.
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:42.34684334 +0000 UTC m=+0.453572557 container died d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 11:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec6b9ea5c95fa20e346e87e3e51788d7bcf8181fc5b9d38da072c0da662694cd-merged.mount: Deactivated successfully.
Feb 28 11:09:42 compute-0 podman[407024]: 2026-02-28 11:09:42.451394031 +0000 UTC m=+0.558123228 container remove d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:09:42 compute-0 systemd[1]: libpod-conmon-d975fb0d4a6689cde3622598d72ec7210fbd72cb4f2c16638a0fb99738e537ee.scope: Deactivated successfully.
Feb 28 11:09:42 compute-0 sudo[406947]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:42 compute-0 sudo[407064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:09:42 compute-0 sudo[407064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:42 compute-0 sudo[407064]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:42 compute-0 sudo[407089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:09:42 compute-0 sudo[407089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:42 compute-0 podman[407126]: 2026-02-28 11:09:42.952308417 +0000 UTC m=+0.054605567 container create 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:09:42 compute-0 systemd[1]: Started libpod-conmon-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope.
Feb 28 11:09:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:42.93193767 +0000 UTC m=+0.034234870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:43.029155423 +0000 UTC m=+0.131452583 container init 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:43.036334467 +0000 UTC m=+0.138631627 container start 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:43.039809065 +0000 UTC m=+0.142106195 container attach 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:43 compute-0 frosty_wright[407142]: 167 167
Feb 28 11:09:43 compute-0 systemd[1]: libpod-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope: Deactivated successfully.
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:43.04278801 +0000 UTC m=+0.145085130 container died 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-045fa46a408e6ca26fe22636378e17eed41cd2d3b5290acbe75ba159aedbc4fa-merged.mount: Deactivated successfully.
Feb 28 11:09:43 compute-0 podman[407126]: 2026-02-28 11:09:43.085843909 +0000 UTC m=+0.188141029 container remove 379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:09:43 compute-0 systemd[1]: libpod-conmon-379760b9fda2a35cd5e839f34ae0c4e246c1afb78cec42de75e28a65def3e1da.scope: Deactivated successfully.
Feb 28 11:09:43 compute-0 podman[407166]: 2026-02-28 11:09:43.264104288 +0000 UTC m=+0.063428028 container create e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:09:43 compute-0 systemd[1]: Started libpod-conmon-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope.
Feb 28 11:09:43 compute-0 podman[407166]: 2026-02-28 11:09:43.239199352 +0000 UTC m=+0.038523182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:09:43 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:09:43 compute-0 podman[407166]: 2026-02-28 11:09:43.376923613 +0000 UTC m=+0.176247413 container init e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 11:09:43 compute-0 podman[407166]: 2026-02-28 11:09:43.38601399 +0000 UTC m=+0.185337740 container start e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 28 11:09:43 compute-0 podman[407166]: 2026-02-28 11:09:43.389236001 +0000 UTC m=+0.188559831 container attach e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 11:09:43 compute-0 ceph-mon[76304]: pgmap v3183: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:44 compute-0 nova_compute[243452]: 2026-02-28 11:09:44.106 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:44 compute-0 lvm[407262]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:09:44 compute-0 lvm[407262]: VG ceph_vg1 finished
Feb 28 11:09:44 compute-0 lvm[407261]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:09:44 compute-0 lvm[407261]: VG ceph_vg0 finished
Feb 28 11:09:44 compute-0 lvm[407264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:09:44 compute-0 lvm[407264]: VG ceph_vg2 finished
Feb 28 11:09:44 compute-0 confident_lalande[407183]: {}
Feb 28 11:09:44 compute-0 systemd[1]: libpod-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Deactivated successfully.
Feb 28 11:09:44 compute-0 systemd[1]: libpod-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Consumed 1.244s CPU time.
Feb 28 11:09:44 compute-0 podman[407166]: 2026-02-28 11:09:44.261803712 +0000 UTC m=+1.061127502 container died e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d5aa536dae8b48faf049c9be3056ac9b6ca3d4e59bde83d1cdf5695c3e13658-merged.mount: Deactivated successfully.
Feb 28 11:09:44 compute-0 podman[407166]: 2026-02-28 11:09:44.310663466 +0000 UTC m=+1.109987206 container remove e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 28 11:09:44 compute-0 systemd[1]: libpod-conmon-e8c299fc07d2876c51120b3f9d4757661d2ea65f41e1e295e0e648b42f9fb638.scope: Deactivated successfully.
Feb 28 11:09:44 compute-0 sudo[407089]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:09:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:09:44 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:44 compute-0 sudo[407280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:09:44 compute-0 sudo[407280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:09:44 compute-0 sudo[407280]: pam_unix(sudo:session): session closed for user root
Feb 28 11:09:45 compute-0 ceph-mon[76304]: pgmap v3184: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:09:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:09:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:09:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:09:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:09:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2241819429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:09:47 compute-0 ceph-mon[76304]: pgmap v3185: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:49 compute-0 nova_compute[243452]: 2026-02-28 11:09:49.108 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:49 compute-0 nova_compute[243452]: 2026-02-28 11:09:49.112 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:49 compute-0 ceph-mon[76304]: pgmap v3186: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:51 compute-0 ceph-mon[76304]: pgmap v3187: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:53 compute-0 ceph-mon[76304]: pgmap v3188: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:54 compute-0 nova_compute[243452]: 2026-02-28 11:09:54.110 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:54 compute-0 nova_compute[243452]: 2026-02-28 11:09:54.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:55 compute-0 ceph-mon[76304]: pgmap v3189: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:57 compute-0 ceph-mon[76304]: pgmap v3190: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.917 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.917 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:09:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:09:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:09:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:09:59 compute-0 nova_compute[243452]: 2026-02-28 11:09:59.113 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:09:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:09:59 compute-0 ceph-mon[76304]: pgmap v3191: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:01 compute-0 ceph-mon[76304]: pgmap v3192: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:03 compute-0 podman[407306]: 2026-02-28 11:10:03.153661769 +0000 UTC m=+0.078287318 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:10:03 compute-0 podman[407305]: 2026-02-28 11:10:03.188756393 +0000 UTC m=+0.112638161 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 11:10:03 compute-0 ceph-mon[76304]: pgmap v3193: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:04 compute-0 nova_compute[243452]: 2026-02-28 11:10:04.114 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:05 compute-0 ceph-mon[76304]: pgmap v3194: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:07 compute-0 ceph-mon[76304]: pgmap v3195: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:09 compute-0 nova_compute[243452]: 2026-02-28 11:10:09.117 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:09 compute-0 nova_compute[243452]: 2026-02-28 11:10:09.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:09 compute-0 ceph-mon[76304]: pgmap v3196: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:11 compute-0 ceph-mon[76304]: pgmap v3197: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:12 compute-0 nova_compute[243452]: 2026-02-28 11:10:12.318 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:12 compute-0 nova_compute[243452]: 2026-02-28 11:10:12.319 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:12 compute-0 nova_compute[243452]: 2026-02-28 11:10:12.319 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:10:13 compute-0 nova_compute[243452]: 2026-02-28 11:10:13.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:13 compute-0 ceph-mon[76304]: pgmap v3198: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:14 compute-0 nova_compute[243452]: 2026-02-28 11:10:14.119 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:14 compute-0 nova_compute[243452]: 2026-02-28 11:10:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:15 compute-0 nova_compute[243452]: 2026-02-28 11:10:15.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:15 compute-0 nova_compute[243452]: 2026-02-28 11:10:15.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:10:15 compute-0 nova_compute[243452]: 2026-02-28 11:10:15.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:10:15 compute-0 nova_compute[243452]: 2026-02-28 11:10:15.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:10:15 compute-0 nova_compute[243452]: 2026-02-28 11:10:15.342 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:15 compute-0 ceph-mon[76304]: pgmap v3199: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:17 compute-0 ceph-mon[76304]: pgmap v3200: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:19 compute-0 nova_compute[243452]: 2026-02-28 11:10:19.121 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:19 compute-0 ceph-mon[76304]: pgmap v3201: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.347 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.348 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.349 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.350 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:10:21 compute-0 ceph-mon[76304]: pgmap v3202: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:10:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031500812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:10:21 compute-0 nova_compute[243452]: 2026-02-28 11:10:21.905 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:10:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.119 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.121 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.122 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.122 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.307 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.308 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.394 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing inventories for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.496 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating ProviderTree inventory for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.497 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Updating inventory in ProviderTree for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.512 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing aggregate associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.531 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Refreshing trait associations for resource provider 1ab5ec33-a817-4b85-91a2-974557eeabfb, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 28 11:10:22 compute-0 nova_compute[243452]: 2026-02-28 11:10:22.553 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:10:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1031500812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:10:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:10:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328841490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:10:23 compute-0 nova_compute[243452]: 2026-02-28 11:10:23.087 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:10:23 compute-0 nova_compute[243452]: 2026-02-28 11:10:23.095 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:10:23 compute-0 nova_compute[243452]: 2026-02-28 11:10:23.141 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:10:23 compute-0 nova_compute[243452]: 2026-02-28 11:10:23.144 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:10:23 compute-0 nova_compute[243452]: 2026-02-28 11:10:23.144 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:10:23 compute-0 ceph-mon[76304]: pgmap v3203: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1328841490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:10:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:24 compute-0 nova_compute[243452]: 2026-02-28 11:10:24.123 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:10:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:25 compute-0 nova_compute[243452]: 2026-02-28 11:10:25.141 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:25 compute-0 nova_compute[243452]: 2026-02-28 11:10:25.161 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:10:25 compute-0 ceph-mon[76304]: pgmap v3204: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:27 compute-0 ceph-mon[76304]: pgmap v3205: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:29 compute-0 nova_compute[243452]: 2026-02-28 11:10:29.126 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:10:29
Feb 28 11:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', 'default.rgw.meta']
Feb 28 11:10:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:10:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:29 compute-0 ceph-mon[76304]: pgmap v3206: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:10:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:10:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:10:31 compute-0 ceph-mon[76304]: pgmap v3207: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:33 compute-0 ceph-mon[76304]: pgmap v3208: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:34 compute-0 nova_compute[243452]: 2026-02-28 11:10:34.127 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:34 compute-0 nova_compute[243452]: 2026-02-28 11:10:34.130 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:34 compute-0 podman[407396]: 2026-02-28 11:10:34.139585377 +0000 UTC m=+0.069452468 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 28 11:10:34 compute-0 podman[407395]: 2026-02-28 11:10:34.172554761 +0000 UTC m=+0.108848124 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 28 11:10:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:35 compute-0 ceph-mon[76304]: pgmap v3209: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:37 compute-0 ceph-mon[76304]: pgmap v3210: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:39 compute-0 nova_compute[243452]: 2026-02-28 11:10:39.129 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:39 compute-0 nova_compute[243452]: 2026-02-28 11:10:39.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:39 compute-0 ceph-mon[76304]: pgmap v3211: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:41 compute-0 ceph-mon[76304]: pgmap v3212: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:10:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:10:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:43 compute-0 ceph-mon[76304]: pgmap v3213: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:44 compute-0 nova_compute[243452]: 2026-02-28 11:10:44.131 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:44 compute-0 sudo[407442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:10:44 compute-0 sudo[407442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:44 compute-0 sudo[407442]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:44 compute-0 sudo[407467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:10:44 compute-0 sudo[407467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:45 compute-0 sudo[407467]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:10:45 compute-0 sudo[407523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:10:45 compute-0 sudo[407523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:45 compute-0 sudo[407523]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:45 compute-0 sudo[407548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:10:45 compute-0 sudo[407548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.637163507 +0000 UTC m=+0.060189806 container create e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:10:45 compute-0 systemd[1]: Started libpod-conmon-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope.
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.608191226 +0000 UTC m=+0.031217565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:45 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.73685518 +0000 UTC m=+0.159881429 container init e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.748915001 +0000 UTC m=+0.171941260 container start e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.752313888 +0000 UTC m=+0.175340167 container attach e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 28 11:10:45 compute-0 romantic_taussig[407602]: 167 167
Feb 28 11:10:45 compute-0 systemd[1]: libpod-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope: Deactivated successfully.
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.759552433 +0000 UTC m=+0.182578732 container died e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 11:10:45 compute-0 ceph-mon[76304]: pgmap v3214: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:10:45 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2175088020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ddceb7231719c860f06dd73a3a37f779ceb575188d1f6918c28744318e0a82-merged.mount: Deactivated successfully.
Feb 28 11:10:45 compute-0 podman[407586]: 2026-02-28 11:10:45.813376677 +0000 UTC m=+0.236402976 container remove e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_taussig, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:10:45 compute-0 systemd[1]: libpod-conmon-e89a5e71e4ae155fac63d2fecb52601b3efa590ea56163033adf8ba98154fc3f.scope: Deactivated successfully.
Feb 28 11:10:45 compute-0 podman[407630]: 2026-02-28 11:10:45.995857255 +0000 UTC m=+0.062601584 container create fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:10:46 compute-0 systemd[1]: Started libpod-conmon-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope.
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:45.96742505 +0000 UTC m=+0.034169429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:46 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:46.09667188 +0000 UTC m=+0.163416259 container init fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:46.109158194 +0000 UTC m=+0.175902523 container start fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:46.113586529 +0000 UTC m=+0.180330848 container attach fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 28 11:10:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:46 compute-0 elastic_ritchie[407646]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:10:46 compute-0 elastic_ritchie[407646]: --> All data devices are unavailable
Feb 28 11:10:46 compute-0 systemd[1]: libpod-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope: Deactivated successfully.
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:46.653884541 +0000 UTC m=+0.720628860 container died fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 11:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2a719851cc3c4c8a0177a38e0d60f5ae377ea25af6e38a8aa815abaea75d42d-merged.mount: Deactivated successfully.
Feb 28 11:10:46 compute-0 podman[407630]: 2026-02-28 11:10:46.707045857 +0000 UTC m=+0.773790176 container remove fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:10:46 compute-0 systemd[1]: libpod-conmon-fca95100822719bbccf5feffa972332f70883a385eb48f946fbf2632ea097be3.scope: Deactivated successfully.
Feb 28 11:10:46 compute-0 sudo[407548]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:46 compute-0 sudo[407677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:10:46 compute-0 sudo[407677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:46 compute-0 sudo[407677]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:46 compute-0 sudo[407702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:10:46 compute-0 sudo[407702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.214987022 +0000 UTC m=+0.046091116 container create 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 28 11:10:47 compute-0 systemd[1]: Started libpod-conmon-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope.
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.197030984 +0000 UTC m=+0.028134848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.320799179 +0000 UTC m=+0.151903113 container init 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.32930952 +0000 UTC m=+0.160413404 container start 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:10:47 compute-0 systemd[1]: libpod-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope: Deactivated successfully.
Feb 28 11:10:47 compute-0 admiring_germain[407756]: 167 167
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.334088225 +0000 UTC m=+0.165192109 container attach 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 28 11:10:47 compute-0 conmon[407756]: conmon 82e8be20fad9a494b2d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope/container/memory.events
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.335051633 +0000 UTC m=+0.166155477 container died 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 28 11:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-20de343a137311074b119a53729ef63d8a875c841fc2efea84e41b45ad6f8ed9-merged.mount: Deactivated successfully.
Feb 28 11:10:47 compute-0 podman[407740]: 2026-02-28 11:10:47.3642582 +0000 UTC m=+0.195362044 container remove 82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_germain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:10:47 compute-0 systemd[1]: libpod-conmon-82e8be20fad9a494b2d2b45f1ba800939f733c6c436e81f5a10bd3271ebb96a9.scope: Deactivated successfully.
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.479994568 +0000 UTC m=+0.037126113 container create 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 28 11:10:47 compute-0 systemd[1]: Started libpod-conmon-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope.
Feb 28 11:10:47 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.461999398 +0000 UTC m=+0.019130903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.576428789 +0000 UTC m=+0.133560374 container init 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.585261529 +0000 UTC m=+0.142393024 container start 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.58918515 +0000 UTC m=+0.146316735 container attach 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 28 11:10:47 compute-0 ceph-mon[76304]: pgmap v3215: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:47 compute-0 fervent_gates[407797]: {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     "0": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "devices": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "/dev/loop3"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             ],
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_name": "ceph_lv0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_size": "21470642176",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "name": "ceph_lv0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "tags": {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_name": "ceph",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.crush_device_class": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.encrypted": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.objectstore": "bluestore",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_id": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.vdo": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.with_tpm": "0"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             },
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "vg_name": "ceph_vg0"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         }
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     ],
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     "1": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "devices": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "/dev/loop4"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             ],
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_name": "ceph_lv1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_size": "21470642176",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "name": "ceph_lv1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "tags": {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_name": "ceph",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.crush_device_class": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.encrypted": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.objectstore": "bluestore",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_id": "1",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.vdo": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.with_tpm": "0"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             },
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "vg_name": "ceph_vg1"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         }
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     ],
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     "2": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "devices": [
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "/dev/loop5"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             ],
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_name": "ceph_lv2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_size": "21470642176",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "name": "ceph_lv2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "tags": {
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.cluster_name": "ceph",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.crush_device_class": "",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.encrypted": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.objectstore": "bluestore",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osd_id": "2",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.vdo": "0",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:                 "ceph.with_tpm": "0"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             },
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "type": "block",
Feb 28 11:10:47 compute-0 fervent_gates[407797]:             "vg_name": "ceph_vg2"
Feb 28 11:10:47 compute-0 fervent_gates[407797]:         }
Feb 28 11:10:47 compute-0 fervent_gates[407797]:     ]
Feb 28 11:10:47 compute-0 fervent_gates[407797]: }
Feb 28 11:10:47 compute-0 systemd[1]: libpod-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope: Deactivated successfully.
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.891942414 +0000 UTC m=+0.449073949 container died 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ede6c1b0aa8210ddd8f045b4eafb9337ee3349dfd4de68c56c560358d21203-merged.mount: Deactivated successfully.
Feb 28 11:10:47 compute-0 podman[407781]: 2026-02-28 11:10:47.946770447 +0000 UTC m=+0.503901992 container remove 07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 28 11:10:47 compute-0 systemd[1]: libpod-conmon-07fac0c9baaa6f877ac2c014d698be417532a84521751c156c3aac67f3f484af.scope: Deactivated successfully.
Feb 28 11:10:48 compute-0 sudo[407702]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:48 compute-0 sudo[407822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:10:48 compute-0 sudo[407822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:48 compute-0 sudo[407822]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:48 compute-0 sudo[407847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:10:48 compute-0 sudo[407847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.382350122 +0000 UTC m=+0.055324757 container create ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:10:48 compute-0 systemd[1]: Started libpod-conmon-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope.
Feb 28 11:10:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.359854275 +0000 UTC m=+0.032828960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.455593597 +0000 UTC m=+0.128568242 container init ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.462671247 +0000 UTC m=+0.135645882 container start ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:10:48 compute-0 vibrant_nash[407901]: 167 167
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.466491055 +0000 UTC m=+0.139465720 container attach ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 28 11:10:48 compute-0 systemd[1]: libpod-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope: Deactivated successfully.
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.467520585 +0000 UTC m=+0.140495230 container died ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-137e4a1ccc62147bf23770756fd3c971a28b4e571716a24837c972912c4ac2bf-merged.mount: Deactivated successfully.
Feb 28 11:10:48 compute-0 podman[407885]: 2026-02-28 11:10:48.511276444 +0000 UTC m=+0.184251079 container remove ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 28 11:10:48 compute-0 systemd[1]: libpod-conmon-ee362ef76b8e3c1858101d0aac2623dd1b1925c7b196fb69ef20e11e6016f595.scope: Deactivated successfully.
Feb 28 11:10:48 compute-0 podman[407923]: 2026-02-28 11:10:48.670760631 +0000 UTC m=+0.053173077 container create efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 11:10:48 compute-0 systemd[1]: Started libpod-conmon-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope.
Feb 28 11:10:48 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:10:48 compute-0 podman[407923]: 2026-02-28 11:10:48.650227859 +0000 UTC m=+0.032640365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:10:48 compute-0 podman[407923]: 2026-02-28 11:10:48.756334094 +0000 UTC m=+0.138746570 container init efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:10:48 compute-0 podman[407923]: 2026-02-28 11:10:48.76255219 +0000 UTC m=+0.144964636 container start efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:10:48 compute-0 podman[407923]: 2026-02-28 11:10:48.76571359 +0000 UTC m=+0.148126036 container attach efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 11:10:49 compute-0 nova_compute[243452]: 2026-02-28 11:10:49.132 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:49 compute-0 lvm[408018]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:10:49 compute-0 lvm[408018]: VG ceph_vg1 finished
Feb 28 11:10:49 compute-0 lvm[408017]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:10:49 compute-0 lvm[408017]: VG ceph_vg0 finished
Feb 28 11:10:49 compute-0 lvm[408020]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:10:49 compute-0 lvm[408020]: VG ceph_vg2 finished
Feb 28 11:10:49 compute-0 vibrant_rosalind[407939]: {}
Feb 28 11:10:49 compute-0 systemd[1]: libpod-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Deactivated successfully.
Feb 28 11:10:49 compute-0 podman[407923]: 2026-02-28 11:10:49.587953456 +0000 UTC m=+0.970365952 container died efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:10:49 compute-0 systemd[1]: libpod-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Consumed 1.146s CPU time.
Feb 28 11:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-250565ee650e21ef709c56f0cfdd4ea64b9b16952189832d2d8561adf1b786f0-merged.mount: Deactivated successfully.
Feb 28 11:10:49 compute-0 podman[407923]: 2026-02-28 11:10:49.638527409 +0000 UTC m=+1.020939895 container remove efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rosalind, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:10:49 compute-0 systemd[1]: libpod-conmon-efd00d2fe120bfea8dc0c4f803f19b6dc6482a2fcd04afd99584849ff1481b50.scope: Deactivated successfully.
Feb 28 11:10:49 compute-0 sshd-session[407775]: Invalid user sol from 45.148.10.240 port 43944
Feb 28 11:10:49 compute-0 sudo[407847]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:10:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:10:49 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:49 compute-0 sudo[408035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:10:49 compute-0 sudo[408035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:10:49 compute-0 sudo[408035]: pam_unix(sudo:session): session closed for user root
Feb 28 11:10:49 compute-0 ceph-mon[76304]: pgmap v3216: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:49 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:10:49 compute-0 sshd-session[407775]: Connection closed by invalid user sol 45.148.10.240 port 43944 [preauth]
Feb 28 11:10:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:51 compute-0 ceph-mon[76304]: pgmap v3217: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:53 compute-0 ceph-mon[76304]: pgmap v3218: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.135 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.137 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.138 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.173 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:10:54 compute-0 nova_compute[243452]: 2026-02-28 11:10:54.174 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:10:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:55 compute-0 ceph-mon[76304]: pgmap v3219: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:57 compute-0 ceph-mon[76304]: pgmap v3220: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.918 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:10:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:10:57.919 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:10:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:10:59 compute-0 nova_compute[243452]: 2026-02-28 11:10:59.175 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:10:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:10:59 compute-0 ceph-mon[76304]: pgmap v3221: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:01 compute-0 ceph-mon[76304]: pgmap v3222: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:03 compute-0 ceph-mon[76304]: pgmap v3223: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:04 compute-0 nova_compute[243452]: 2026-02-28 11:11:04.177 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.857489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064857520, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3485006, "memory_usage": 3546992, "flush_reason": "Manual Compaction"}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064874238, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3418278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66039, "largest_seqno": 68090, "table_properties": {"data_size": 3408833, "index_size": 6002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18683, "raw_average_key_size": 20, "raw_value_size": 3390193, "raw_average_value_size": 3645, "num_data_blocks": 266, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772276835, "oldest_key_time": 1772276835, "file_creation_time": 1772277064, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16846 microseconds, and 8761 cpu microseconds.
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.874324) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3418278 bytes OK
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.874353) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876729) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876760) EVENT_LOG_v1 {"time_micros": 1772277064876749, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.876793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3476415, prev total WAL file size 3476415, number of live WAL files 2.
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.878406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(3338KB)], [158(9217KB)]
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064878466, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12857447, "oldest_snapshot_seqno": -1}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8707 keys, 11098323 bytes, temperature: kUnknown
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064933929, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11098323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11042259, "index_size": 33167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 228037, "raw_average_key_size": 26, "raw_value_size": 10888926, "raw_average_value_size": 1250, "num_data_blocks": 1284, "num_entries": 8707, "num_filter_entries": 8707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772277064, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.934461) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11098323 bytes
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.936802) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.9 rd, 199.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9221, records dropped: 514 output_compression: NoCompression
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.936836) EVENT_LOG_v1 {"time_micros": 1772277064936820, "job": 98, "event": "compaction_finished", "compaction_time_micros": 55693, "compaction_time_cpu_micros": 35531, "output_level": 6, "num_output_files": 1, "total_output_size": 11098323, "num_input_records": 9221, "num_output_records": 8707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064937554, "job": 98, "event": "table_file_deletion", "file_number": 160}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277064938944, "job": 98, "event": "table_file_deletion", "file_number": 158}
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.878196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:11:04.939221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:11:05 compute-0 podman[408062]: 2026-02-28 11:11:05.145374206 +0000 UTC m=+0.072975788 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 28 11:11:05 compute-0 podman[408061]: 2026-02-28 11:11:05.239877093 +0000 UTC m=+0.167519906 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, 
tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 28 11:11:05 compute-0 ceph-mon[76304]: pgmap v3224: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:07 compute-0 ceph-mon[76304]: pgmap v3225: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:08 compute-0 ceph-mon[76304]: pgmap v3226: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:09 compute-0 nova_compute[243452]: 2026-02-28 11:11:09.179 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:10 compute-0 nova_compute[243452]: 2026-02-28 11:11:10.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:11 compute-0 ceph-mon[76304]: pgmap v3227: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:11 compute-0 sshd-session[408060]: Connection reset by 5.181.87.35 port 35600 [preauth]
Feb 28 11:11:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:13 compute-0 ceph-mon[76304]: pgmap v3228: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:13 compute-0 nova_compute[243452]: 2026-02-28 11:11:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.182 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.184 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.185 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.215 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.216 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.314 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:14 compute-0 nova_compute[243452]: 2026-02-28 11:11:14.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:11:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:15 compute-0 nova_compute[243452]: 2026-02-28 11:11:15.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:15 compute-0 nova_compute[243452]: 2026-02-28 11:11:15.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:11:15 compute-0 nova_compute[243452]: 2026-02-28 11:11:15.318 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:11:15 compute-0 nova_compute[243452]: 2026-02-28 11:11:15.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:11:15 compute-0 ceph-mon[76304]: pgmap v3229: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:17 compute-0 nova_compute[243452]: 2026-02-28 11:11:17.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:17 compute-0 ceph-mon[76304]: pgmap v3230: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:19 compute-0 nova_compute[243452]: 2026-02-28 11:11:19.217 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:19 compute-0 ceph-mon[76304]: pgmap v3231: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:21 compute-0 ceph-mon[76304]: pgmap v3232: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.344 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.345 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:11:23 compute-0 ceph-mon[76304]: pgmap v3233: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:11:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420158980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:11:23 compute-0 nova_compute[243452]: 2026-02-28 11:11:23.887 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.101 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.103 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.104 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.104 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.165 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.166 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.182 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.219 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.221 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.222 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.257 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2420158980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:11:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:11:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497188344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.749 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.754 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.771 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.773 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:11:24 compute-0 nova_compute[243452]: 2026-02-28 11:11:24.774 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:11:25 compute-0 ceph-mon[76304]: pgmap v3234: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/497188344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:11:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:27 compute-0 ceph-mon[76304]: pgmap v3235: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:27 compute-0 nova_compute[243452]: 2026-02-28 11:11:27.774 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:11:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:29 compute-0 nova_compute[243452]: 2026-02-28 11:11:29.258 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:11:29
Feb 28 11:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Feb 28 11:11:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:11:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:29 compute-0 ceph-mon[76304]: pgmap v3236: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:11:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:11:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:11:31 compute-0 ceph-mon[76304]: pgmap v3237: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:33 compute-0 ceph-mon[76304]: pgmap v3238: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:34 compute-0 nova_compute[243452]: 2026-02-28 11:11:34.261 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:35 compute-0 ceph-mon[76304]: pgmap v3239: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:36 compute-0 podman[408150]: 2026-02-28 11:11:36.14458872 +0000 UTC m=+0.068148871 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 11:11:36 compute-0 podman[408149]: 2026-02-28 11:11:36.172763108 +0000 UTC m=+0.106854148 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 28 11:11:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:37 compute-0 ceph-mon[76304]: pgmap v3240: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:39 compute-0 nova_compute[243452]: 2026-02-28 11:11:39.264 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:39 compute-0 ceph-mon[76304]: pgmap v3241: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:41 compute-0 ceph-mon[76304]: pgmap v3242: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:11:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:11:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:43 compute-0 ceph-mon[76304]: pgmap v3243: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.266 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.267 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.268 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:44 compute-0 nova_compute[243452]: 2026-02-28 11:11:44.269 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:45 compute-0 ceph-mon[76304]: pgmap v3244: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:11:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:11:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:11:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:11:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:11:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/2658666051' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:11:47 compute-0 ceph-mon[76304]: pgmap v3245: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.270 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.272 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.306 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:49 compute-0 nova_compute[243452]: 2026-02-28 11:11:49.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:11:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:49 compute-0 ceph-mon[76304]: pgmap v3246: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:49 compute-0 sudo[408194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:11:49 compute-0 sudo[408194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:49 compute-0 sudo[408194]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:49 compute-0 sudo[408219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Feb 28 11:11:49 compute-0 sudo[408219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:50 compute-0 sudo[408219]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:11:50 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:50 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:11:50 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:50 compute-0 sudo[408266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:11:50 compute-0 sudo[408266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:50 compute-0 sudo[408266]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:50 compute-0 sudo[408291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:11:50 compute-0 sudo[408291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:51 compute-0 sudo[408291]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:11:51 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:11:51 compute-0 sudo[408348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:11:51 compute-0 sudo[408348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:51 compute-0 sudo[408348]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:51 compute-0 ceph-mon[76304]: pgmap v3247: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:11:51 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:11:51 compute-0 sudo[408373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:11:51 compute-0 sudo[408373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.585574753 +0000 UTC m=+0.051217961 container create ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 28 11:11:51 compute-0 systemd[1]: Started libpod-conmon-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope.
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.5579269 +0000 UTC m=+0.023570138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:51 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.681670045 +0000 UTC m=+0.147313293 container init ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.688838448 +0000 UTC m=+0.154481616 container start ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.692519132 +0000 UTC m=+0.158162340 container attach ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:11:51 compute-0 systemd[1]: libpod-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope: Deactivated successfully.
Feb 28 11:11:51 compute-0 hopeful_einstein[408426]: 167 167
Feb 28 11:11:51 compute-0 conmon[408426]: conmon ff7df16d44cbf8dd1030 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope/container/memory.events
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.703202855 +0000 UTC m=+0.168846023 container died ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9e14ff1e17183cf20fa903daf8ab29f77ef91c2b45f93275a4eb0f1e9701284-merged.mount: Deactivated successfully.
Feb 28 11:11:51 compute-0 podman[408410]: 2026-02-28 11:11:51.750735191 +0000 UTC m=+0.216378359 container remove ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 28 11:11:51 compute-0 systemd[1]: libpod-conmon-ff7df16d44cbf8dd103080ab1ade812ad4756f65e965a38c5b9cf6db23646fc7.scope: Deactivated successfully.
Feb 28 11:11:51 compute-0 podman[408450]: 2026-02-28 11:11:51.928036662 +0000 UTC m=+0.073074180 container create 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:11:51 compute-0 systemd[1]: Started libpod-conmon-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope.
Feb 28 11:11:51 compute-0 podman[408450]: 2026-02-28 11:11:51.895056908 +0000 UTC m=+0.040094486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:52 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:52 compute-0 podman[408450]: 2026-02-28 11:11:52.038103499 +0000 UTC m=+0.183141087 container init 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:11:52 compute-0 podman[408450]: 2026-02-28 11:11:52.059854576 +0000 UTC m=+0.204892104 container start 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 28 11:11:52 compute-0 podman[408450]: 2026-02-28 11:11:52.064232319 +0000 UTC m=+0.209269847 container attach 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:11:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:52 compute-0 gifted_lewin[408466]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:11:52 compute-0 gifted_lewin[408466]: --> All data devices are unavailable
Feb 28 11:11:52 compute-0 systemd[1]: libpod-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope: Deactivated successfully.
Feb 28 11:11:52 compute-0 podman[408450]: 2026-02-28 11:11:52.523903298 +0000 UTC m=+0.668940786 container died 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 28 11:11:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dc25e2c6130c979fb21df5fd58bfaa13d4cfd9c4b27230726f461655b87b532-merged.mount: Deactivated successfully.
Feb 28 11:11:52 compute-0 podman[408450]: 2026-02-28 11:11:52.567173343 +0000 UTC m=+0.712210841 container remove 9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 11:11:52 compute-0 systemd[1]: libpod-conmon-9c5e4f3bd2bcb5ebb7275fcc9dd89b07b9e8591ea803696cb35e2fe906de410a.scope: Deactivated successfully.
Feb 28 11:11:52 compute-0 sudo[408373]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:52 compute-0 sudo[408498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:11:52 compute-0 sudo[408498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:52 compute-0 sudo[408498]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:52 compute-0 sudo[408523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:11:52 compute-0 sudo[408523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.010655522 +0000 UTC m=+0.059249639 container create 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:11:53 compute-0 systemd[1]: Started libpod-conmon-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope.
Feb 28 11:11:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:52.986352714 +0000 UTC m=+0.034946921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.093220221 +0000 UTC m=+0.141814388 container init 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.101893086 +0000 UTC m=+0.150487243 container start 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.106674392 +0000 UTC m=+0.155268519 container attach 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:11:53 compute-0 affectionate_montalcini[408577]: 167 167
Feb 28 11:11:53 compute-0 systemd[1]: libpod-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope: Deactivated successfully.
Feb 28 11:11:53 compute-0 conmon[408577]: conmon 2a1ad97cd0f0917d9bcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope/container/memory.events
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.110916302 +0000 UTC m=+0.159510419 container died 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 28 11:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70bd2530ee200474ebb0aa8bc16b79ffb9da2e6eb59cc00e6e8a6f5ddfb7b05-merged.mount: Deactivated successfully.
Feb 28 11:11:53 compute-0 podman[408560]: 2026-02-28 11:11:53.159301712 +0000 UTC m=+0.207895889 container remove 2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_montalcini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:11:53 compute-0 systemd[1]: libpod-conmon-2a1ad97cd0f0917d9bcb44ebbb0d3601ad4cda3ad980a835d6ddc7b9bd5f3b1c.scope: Deactivated successfully.
Feb 28 11:11:53 compute-0 ceph-mon[76304]: pgmap v3248: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.342966994 +0000 UTC m=+0.059065404 container create e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 28 11:11:53 compute-0 systemd[1]: Started libpod-conmon-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope.
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.317441901 +0000 UTC m=+0.033540291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:53 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.442168303 +0000 UTC m=+0.158266693 container init e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.448211724 +0000 UTC m=+0.164310134 container start e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.451486777 +0000 UTC m=+0.167585167 container attach e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:11:53 compute-0 elated_thompson[408615]: {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     "0": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "devices": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "/dev/loop3"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             ],
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_name": "ceph_lv0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_size": "21470642176",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "name": "ceph_lv0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "tags": {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_name": "ceph",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.crush_device_class": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.encrypted": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.objectstore": "bluestore",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_id": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.vdo": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.with_tpm": "0"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             },
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "vg_name": "ceph_vg0"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         }
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     ],
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     "1": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "devices": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "/dev/loop4"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             ],
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_name": "ceph_lv1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_size": "21470642176",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "name": "ceph_lv1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "tags": {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_name": "ceph",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.crush_device_class": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.encrypted": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.objectstore": "bluestore",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_id": "1",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.vdo": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.with_tpm": "0"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             },
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "vg_name": "ceph_vg1"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         }
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     ],
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     "2": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "devices": [
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "/dev/loop5"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             ],
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_name": "ceph_lv2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_size": "21470642176",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "name": "ceph_lv2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "tags": {
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.cluster_name": "ceph",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.crush_device_class": "",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.encrypted": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.objectstore": "bluestore",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osd_id": "2",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.vdo": "0",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:                 "ceph.with_tpm": "0"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             },
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "type": "block",
Feb 28 11:11:53 compute-0 elated_thompson[408615]:             "vg_name": "ceph_vg2"
Feb 28 11:11:53 compute-0 elated_thompson[408615]:         }
Feb 28 11:11:53 compute-0 elated_thompson[408615]:     ]
Feb 28 11:11:53 compute-0 elated_thompson[408615]: }
Feb 28 11:11:53 compute-0 systemd[1]: libpod-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope: Deactivated successfully.
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.804018921 +0000 UTC m=+0.520117341 container died e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 11:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d30d412431da8d17a09e0d5b26d2bb5938eb192c8f30c8c51573ec80ea3db77-merged.mount: Deactivated successfully.
Feb 28 11:11:53 compute-0 podman[408599]: 2026-02-28 11:11:53.853454701 +0000 UTC m=+0.569553071 container remove e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:11:53 compute-0 systemd[1]: libpod-conmon-e0d41a05f0d789e955928eb007e4a968a032f7090b32ce8090e1f863b7006e59.scope: Deactivated successfully.
Feb 28 11:11:53 compute-0 sudo[408523]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:53 compute-0 sudo[408638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:11:53 compute-0 sudo[408638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:53 compute-0 sudo[408638]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:54 compute-0 sudo[408663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:11:54 compute-0 sudo[408663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:54 compute-0 nova_compute[243452]: 2026-02-28 11:11:54.307 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:54 compute-0 nova_compute[243452]: 2026-02-28 11:11:54.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.35939225 +0000 UTC m=+0.052171619 container create 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:11:54 compute-0 systemd[1]: Started libpod-conmon-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope.
Feb 28 11:11:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.335336759 +0000 UTC m=+0.028116148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.442001249 +0000 UTC m=+0.134780638 container init 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.448891565 +0000 UTC m=+0.141670924 container start 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.452000903 +0000 UTC m=+0.144780292 container attach 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:11:54 compute-0 adoring_beaver[408715]: 167 167
Feb 28 11:11:54 compute-0 systemd[1]: libpod-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope: Deactivated successfully.
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.455961045 +0000 UTC m=+0.148740454 container died 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce1e73f44e00d7ee204dc7c919f4dd0b32f065aeb19846f9c14ccfa0f82d16a7-merged.mount: Deactivated successfully.
Feb 28 11:11:54 compute-0 podman[408699]: 2026-02-28 11:11:54.503689127 +0000 UTC m=+0.196468486 container remove 6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 28 11:11:54 compute-0 systemd[1]: libpod-conmon-6ef69f534d2e83ac65a3bb6c2610d6b97650f2e8eca620fed27ce36ea223c8a7.scope: Deactivated successfully.
Feb 28 11:11:54 compute-0 podman[408742]: 2026-02-28 11:11:54.704686469 +0000 UTC m=+0.059923658 container create db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 28 11:11:54 compute-0 systemd[1]: Started libpod-conmon-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope.
Feb 28 11:11:54 compute-0 podman[408742]: 2026-02-28 11:11:54.679557387 +0000 UTC m=+0.034794666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:11:54 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:11:54 compute-0 podman[408742]: 2026-02-28 11:11:54.810479385 +0000 UTC m=+0.165716594 container init db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 11:11:54 compute-0 podman[408742]: 2026-02-28 11:11:54.820551621 +0000 UTC m=+0.175788810 container start db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 28 11:11:54 compute-0 podman[408742]: 2026-02-28 11:11:54.824627456 +0000 UTC m=+0.179864685 container attach db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 28 11:11:55 compute-0 ceph-mon[76304]: pgmap v3249: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:55 compute-0 lvm[408838]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:11:55 compute-0 lvm[408838]: VG ceph_vg1 finished
Feb 28 11:11:55 compute-0 lvm[408837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:11:55 compute-0 lvm[408837]: VG ceph_vg0 finished
Feb 28 11:11:55 compute-0 lvm[408840]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:11:55 compute-0 lvm[408840]: VG ceph_vg2 finished
Feb 28 11:11:55 compute-0 boring_sanderson[408759]: {}
Feb 28 11:11:55 compute-0 systemd[1]: libpod-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Deactivated successfully.
Feb 28 11:11:55 compute-0 podman[408742]: 2026-02-28 11:11:55.660790317 +0000 UTC m=+1.016027506 container died db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:11:55 compute-0 systemd[1]: libpod-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Consumed 1.230s CPU time.
Feb 28 11:11:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5068787a400e4709ad0546d2509842110154079083461e8440178f10b59348b3-merged.mount: Deactivated successfully.
Feb 28 11:11:55 compute-0 podman[408742]: 2026-02-28 11:11:55.69336221 +0000 UTC m=+1.048599399 container remove db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:11:55 compute-0 systemd[1]: libpod-conmon-db2b84da6ab1de9fde976c1483488337ff3017da86c0317fb6941e2414d1cd8a.scope: Deactivated successfully.
Feb 28 11:11:55 compute-0 sudo[408663]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:11:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:55 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:11:55 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:55 compute-0 sudo[408854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:11:55 compute-0 sudo[408854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:11:55 compute-0 sudo[408854]: pam_unix(sudo:session): session closed for user root
Feb 28 11:11:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:11:57 compute-0 ceph-mon[76304]: pgmap v3250: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.919 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.921 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:11:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:11:57.922 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:11:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:11:59 compute-0 nova_compute[243452]: 2026-02-28 11:11:59.309 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:11:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:11:59 compute-0 ceph-mon[76304]: pgmap v3251: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:01 compute-0 ceph-mon[76304]: pgmap v3252: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:03 compute-0 ceph-mon[76304]: pgmap v3253: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:04 compute-0 sshd-session[408879]: Received disconnect from 103.139.193.187 port 36798:11: Bye Bye [preauth]
Feb 28 11:12:04 compute-0 sshd-session[408879]: Disconnected from authenticating user root 103.139.193.187 port 36798 [preauth]
Feb 28 11:12:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:04 compute-0 nova_compute[243452]: 2026-02-28 11:12:04.310 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:05 compute-0 ceph-mon[76304]: pgmap v3254: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:07 compute-0 podman[408882]: 2026-02-28 11:12:07.147989342 +0000 UTC m=+0.068953874 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 28 11:12:07 compute-0 podman[408881]: 2026-02-28 11:12:07.17757929 +0000 UTC m=+0.104877541 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 28 11:12:07 compute-0 ceph-mon[76304]: pgmap v3255: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:09 compute-0 nova_compute[243452]: 2026-02-28 11:12:09.312 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:09 compute-0 ceph-mon[76304]: pgmap v3256: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:11 compute-0 ceph-mon[76304]: pgmap v3257: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:12 compute-0 nova_compute[243452]: 2026-02-28 11:12:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:13 compute-0 nova_compute[243452]: 2026-02-28 11:12:13.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:13 compute-0 ceph-mon[76304]: pgmap v3258: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:14 compute-0 nova_compute[243452]: 2026-02-28 11:12:14.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:14 compute-0 nova_compute[243452]: 2026-02-28 11:12:14.314 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:14 compute-0 nova_compute[243452]: 2026-02-28 11:12:14.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:14 compute-0 nova_compute[243452]: 2026-02-28 11:12:14.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:12:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:15 compute-0 ceph-mon[76304]: pgmap v3259: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:16 compute-0 nova_compute[243452]: 2026-02-28 11:12:16.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:16 compute-0 nova_compute[243452]: 2026-02-28 11:12:16.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:12:16 compute-0 nova_compute[243452]: 2026-02-28 11:12:16.317 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:12:16 compute-0 nova_compute[243452]: 2026-02-28 11:12:16.336 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:12:16 compute-0 nova_compute[243452]: 2026-02-28 11:12:16.337 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:17 compute-0 ceph-mon[76304]: pgmap v3260: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:18 compute-0 nova_compute[243452]: 2026-02-28 11:12:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:19 compute-0 nova_compute[243452]: 2026-02-28 11:12:19.316 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:19 compute-0 ceph-mon[76304]: pgmap v3261: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:21 compute-0 ceph-mon[76304]: pgmap v3262: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:22 compute-0 ceph-mon[76304]: pgmap v3263: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.358 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.358 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.359 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:12:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:12:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430315698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:12:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/430315698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:12:23 compute-0 nova_compute[243452]: 2026-02-28 11:12:23.956 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.138 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.140 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.141 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.211 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.212 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.227 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:12:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.319 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:12:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:12:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325925670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.797 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.804 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.823 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.825 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:12:24 compute-0 nova_compute[243452]: 2026-02-28 11:12:24.825 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:12:24 compute-0 ceph-mon[76304]: pgmap v3264: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1325925670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:12:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:26 compute-0 nova_compute[243452]: 2026-02-28 11:12:26.822 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:27 compute-0 nova_compute[243452]: 2026-02-28 11:12:27.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:27 compute-0 ceph-mon[76304]: pgmap v3265: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:12:29
Feb 28 11:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Feb 28 11:12:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:12:29 compute-0 nova_compute[243452]: 2026-02-28 11:12:29.320 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:29 compute-0 nova_compute[243452]: 2026-02-28 11:12:29.324 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:29 compute-0 ceph-mon[76304]: pgmap v3266: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:12:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:12:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:12:31 compute-0 ceph-mon[76304]: pgmap v3267: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:33 compute-0 ceph-mon[76304]: pgmap v3268: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:34 compute-0 nova_compute[243452]: 2026-02-28 11:12:34.321 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:34 compute-0 nova_compute[243452]: 2026-02-28 11:12:34.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:35 compute-0 ceph-mon[76304]: pgmap v3269: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.316 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.317 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.318 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.340 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.349 243456 WARNING nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Removable base files: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2 /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b74f2e7ca23d9f6895a792e85e4e82ab25fbf2e2
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.350 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d8b42e29e77cfa759998bf1aaf01e8a4671ba847
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.351 243456 DEBUG nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 28 11:12:36 compute-0 nova_compute[243452]: 2026-02-28 11:12:36.352 243456 INFO nova.virt.libvirt.imagecache [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 28 11:12:37 compute-0 ceph-mon[76304]: pgmap v3270: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:38 compute-0 podman[408971]: 2026-02-28 11:12:38.135835363 +0000 UTC m=+0.060171705 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 28 11:12:38 compute-0 podman[408970]: 2026-02-28 11:12:38.175933038 +0000 UTC m=+0.105025064 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 28 11:12:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:39 compute-0 nova_compute[243452]: 2026-02-28 11:12:39.289 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:39 compute-0 nova_compute[243452]: 2026-02-28 11:12:39.323 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:39 compute-0 nova_compute[243452]: 2026-02-28 11:12:39.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:39 compute-0 ceph-mon[76304]: pgmap v3271: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:41 compute-0 nova_compute[243452]: 2026-02-28 11:12:41.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:41 compute-0 ceph-mon[76304]: pgmap v3272: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:12:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:12:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:43 compute-0 ceph-mon[76304]: pgmap v3273: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:44 compute-0 nova_compute[243452]: 2026-02-28 11:12:44.326 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:45 compute-0 ceph-mon[76304]: pgmap v3274: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:12:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:12:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:12:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:12:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:12:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/1003014709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:12:47 compute-0 ceph-mon[76304]: pgmap v3275: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:49 compute-0 nova_compute[243452]: 2026-02-28 11:12:49.327 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:49 compute-0 nova_compute[243452]: 2026-02-28 11:12:49.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:49 compute-0 ceph-mon[76304]: pgmap v3276: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:50 compute-0 nova_compute[243452]: 2026-02-28 11:12:50.339 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:50 compute-0 nova_compute[243452]: 2026-02-28 11:12:50.340 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 28 11:12:50 compute-0 nova_compute[243452]: 2026-02-28 11:12:50.355 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 28 11:12:51 compute-0 ceph-mon[76304]: pgmap v3277: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:53 compute-0 ceph-mon[76304]: pgmap v3278: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:54 compute-0 nova_compute[243452]: 2026-02-28 11:12:54.329 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:55 compute-0 ceph-mon[76304]: pgmap v3279: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:55 compute-0 sudo[409014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:12:55 compute-0 sudo[409014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:55 compute-0 sudo[409014]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:55 compute-0 sudo[409039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:12:55 compute-0 sudo[409039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:56 compute-0 nova_compute[243452]: 2026-02-28 11:12:56.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:12:56 compute-0 nova_compute[243452]: 2026-02-28 11:12:56.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 28 11:12:56 compute-0 sudo[409039]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:12:56 compute-0 sudo[409095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:12:56 compute-0 sudo[409095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:56 compute-0 sudo[409095]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:56 compute-0 sudo[409120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:12:56 compute-0 sudo[409120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:12:56 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.864623055 +0000 UTC m=+0.041233938 container create d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:12:56 compute-0 systemd[1]: Started libpod-conmon-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope.
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.848916361 +0000 UTC m=+0.025527274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:56 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.966688035 +0000 UTC m=+0.143298948 container init d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.976483862 +0000 UTC m=+0.153094765 container start d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:12:56 compute-0 exciting_gates[409173]: 167 167
Feb 28 11:12:56 compute-0 systemd[1]: libpod-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope: Deactivated successfully.
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.980103195 +0000 UTC m=+0.156714078 container attach d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 28 11:12:56 compute-0 podman[409157]: 2026-02-28 11:12:56.9831115 +0000 UTC m=+0.159722383 container died d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c93496747ecb86d9f387e0cf12243837368608e4609e6884e42c8e4f1b8bc946-merged.mount: Deactivated successfully.
Feb 28 11:12:57 compute-0 podman[409157]: 2026-02-28 11:12:57.029983817 +0000 UTC m=+0.206594710 container remove d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 28 11:12:57 compute-0 systemd[1]: libpod-conmon-d57ddc427c01d308e14c70cb452a6813d435e16768f6b9ad9fe9db7c74d5348f.scope: Deactivated successfully.
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.218382389 +0000 UTC m=+0.061636815 container create 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:12:57 compute-0 systemd[1]: Started libpod-conmon-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope.
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.192526857 +0000 UTC m=+0.035781343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:57 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.325009098 +0000 UTC m=+0.168263594 container init 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.338282274 +0000 UTC m=+0.181536670 container start 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.342265937 +0000 UTC m=+0.185520443 container attach 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 28 11:12:57 compute-0 wizardly_payne[409214]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:12:57 compute-0 wizardly_payne[409214]: --> All data devices are unavailable
Feb 28 11:12:57 compute-0 systemd[1]: libpod-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope: Deactivated successfully.
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.797961947 +0000 UTC m=+0.641216403 container died 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8870a5d22a71e86ad9b3bcba30ea29efe7c92165b0a29c3e24b84fb6b2dc04c0-merged.mount: Deactivated successfully.
Feb 28 11:12:57 compute-0 podman[409197]: 2026-02-28 11:12:57.840923374 +0000 UTC m=+0.684177770 container remove 88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 28 11:12:57 compute-0 ceph-mon[76304]: pgmap v3280: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:57 compute-0 systemd[1]: libpod-conmon-88691617e3cfc75d5ff6ad057eaf60ae2da5631f8f94e1cdf118a82a72e90ba4.scope: Deactivated successfully.
Feb 28 11:12:57 compute-0 sudo[409120]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.921 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:12:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:12:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:12:57 compute-0 sudo[409246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:12:57 compute-0 sudo[409246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:57 compute-0 sudo[409246]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:58 compute-0 sudo[409271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:12:58 compute-0 sudo[409271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.353237708 +0000 UTC m=+0.054792373 container create b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:12:58 compute-0 systemd[1]: Started libpod-conmon-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope.
Feb 28 11:12:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.33038459 +0000 UTC m=+0.031939275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.43598422 +0000 UTC m=+0.137538885 container init b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.444492361 +0000 UTC m=+0.146047036 container start b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.448892946 +0000 UTC m=+0.150447611 container attach b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:12:58 compute-0 systemd[1]: libpod-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope: Deactivated successfully.
Feb 28 11:12:58 compute-0 tender_solomon[409326]: 167 167
Feb 28 11:12:58 compute-0 conmon[409326]: conmon b17e4dab1da1839f1ba0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope/container/memory.events
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.452463657 +0000 UTC m=+0.154018332 container died b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 28 11:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a6163ba3ce9e025b39d1d489c045055f1a3675d8a9a4869fca33aedca3b910-merged.mount: Deactivated successfully.
Feb 28 11:12:58 compute-0 podman[409310]: 2026-02-28 11:12:58.497754409 +0000 UTC m=+0.199309084 container remove b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_solomon, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:12:58 compute-0 systemd[1]: libpod-conmon-b17e4dab1da1839f1ba09be76b93609a86b79760a4d9155e16f0065cb00438b0.scope: Deactivated successfully.
Feb 28 11:12:58 compute-0 podman[409350]: 2026-02-28 11:12:58.683425525 +0000 UTC m=+0.051761646 container create f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 28 11:12:58 compute-0 systemd[1]: Started libpod-conmon-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope.
Feb 28 11:12:58 compute-0 podman[409350]: 2026-02-28 11:12:58.657221663 +0000 UTC m=+0.025557874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:58 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:58 compute-0 podman[409350]: 2026-02-28 11:12:58.800125709 +0000 UTC m=+0.168461900 container init f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 28 11:12:58 compute-0 podman[409350]: 2026-02-28 11:12:58.812969543 +0000 UTC m=+0.181305664 container start f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:12:58 compute-0 podman[409350]: 2026-02-28 11:12:58.816861643 +0000 UTC m=+0.185197844 container attach f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]: {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     "0": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "devices": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "/dev/loop3"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             ],
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_name": "ceph_lv0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_size": "21470642176",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "name": "ceph_lv0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "tags": {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_name": "ceph",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.crush_device_class": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.encrypted": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.objectstore": "bluestore",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_id": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.vdo": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.with_tpm": "0"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             },
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "vg_name": "ceph_vg0"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         }
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     ],
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     "1": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "devices": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "/dev/loop4"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             ],
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_name": "ceph_lv1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_size": "21470642176",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "name": "ceph_lv1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "tags": {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_name": "ceph",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.crush_device_class": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.encrypted": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.objectstore": "bluestore",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_id": "1",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.vdo": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.with_tpm": "0"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             },
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "vg_name": "ceph_vg1"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         }
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     ],
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     "2": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "devices": [
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "/dev/loop5"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             ],
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_name": "ceph_lv2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_size": "21470642176",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "name": "ceph_lv2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "tags": {
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.cluster_name": "ceph",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.crush_device_class": "",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.encrypted": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.objectstore": "bluestore",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osd_id": "2",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.vdo": "0",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:                 "ceph.with_tpm": "0"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             },
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "type": "block",
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:             "vg_name": "ceph_vg2"
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:         }
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]:     ]
Feb 28 11:12:59 compute-0 nostalgic_leavitt[409367]: }
Feb 28 11:12:59 compute-0 systemd[1]: libpod-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope: Deactivated successfully.
Feb 28 11:12:59 compute-0 podman[409350]: 2026-02-28 11:12:59.145706753 +0000 UTC m=+0.514042904 container died f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-af38acf773ecf90a0e13d618160febbc96e9e71cf2bfe4506ada85e1ff42882b-merged.mount: Deactivated successfully.
Feb 28 11:12:59 compute-0 podman[409350]: 2026-02-28 11:12:59.178377317 +0000 UTC m=+0.546713428 container remove f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 28 11:12:59 compute-0 systemd[1]: libpod-conmon-f02a5a430ff9ab983b29e1211ce4b187150b5089c7fec9cd092e9aaa1338eb89.scope: Deactivated successfully.
Feb 28 11:12:59 compute-0 sudo[409271]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:59 compute-0 sudo[409388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:12:59 compute-0 sudo[409388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:59 compute-0 sudo[409388]: pam_unix(sudo:session): session closed for user root
Feb 28 11:12:59 compute-0 sudo[409413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:12:59 compute-0 sudo[409413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:12:59 compute-0 nova_compute[243452]: 2026-02-28 11:12:59.331 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:12:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.599939042 +0000 UTC m=+0.056444319 container create 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:12:59 compute-0 systemd[1]: Started libpod-conmon-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope.
Feb 28 11:12:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.575264713 +0000 UTC m=+0.031770080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.679254927 +0000 UTC m=+0.135760234 container init 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.684602899 +0000 UTC m=+0.141108186 container start 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.687305725 +0000 UTC m=+0.143811032 container attach 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 28 11:12:59 compute-0 vigilant_kilby[409469]: 167 167
Feb 28 11:12:59 compute-0 systemd[1]: libpod-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope: Deactivated successfully.
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.690339991 +0000 UTC m=+0.146845278 container died 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 28 11:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9a28792cc7adbd2368211bc991a0451b088fa2d776c3029a19cd8e54ab40744-merged.mount: Deactivated successfully.
Feb 28 11:12:59 compute-0 podman[409452]: 2026-02-28 11:12:59.72313305 +0000 UTC m=+0.179638357 container remove 89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:12:59 compute-0 systemd[1]: libpod-conmon-89b830819f9609fbfe98665e1885e73d4513e8e3e83ccbc2f1b0e2fb7a4d2bae.scope: Deactivated successfully.
Feb 28 11:12:59 compute-0 podman[409492]: 2026-02-28 11:12:59.853094759 +0000 UTC m=+0.054183545 container create a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:12:59 compute-0 ceph-mon[76304]: pgmap v3281: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:12:59 compute-0 systemd[1]: Started libpod-conmon-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope.
Feb 28 11:12:59 compute-0 podman[409492]: 2026-02-28 11:12:59.823287075 +0000 UTC m=+0.024375921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:12:59 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:12:59 compute-0 podman[409492]: 2026-02-28 11:12:59.974687901 +0000 UTC m=+0.175776747 container init a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 28 11:12:59 compute-0 podman[409492]: 2026-02-28 11:12:59.984074467 +0000 UTC m=+0.185163263 container start a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 11:12:59 compute-0 podman[409492]: 2026-02-28 11:12:59.987880735 +0000 UTC m=+0.188969631 container attach a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:00 compute-0 lvm[409584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:13:00 compute-0 lvm[409584]: VG ceph_vg0 finished
Feb 28 11:13:00 compute-0 lvm[409587]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:13:00 compute-0 lvm[409587]: VG ceph_vg1 finished
Feb 28 11:13:00 compute-0 lvm[409588]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:13:00 compute-0 lvm[409588]: VG ceph_vg2 finished
Feb 28 11:13:00 compute-0 happy_liskov[409508]: {}
Feb 28 11:13:00 compute-0 systemd[1]: libpod-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Deactivated successfully.
Feb 28 11:13:00 compute-0 systemd[1]: libpod-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Consumed 1.128s CPU time.
Feb 28 11:13:00 compute-0 podman[409492]: 2026-02-28 11:13:00.79593568 +0000 UTC m=+0.997024486 container died a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 28 11:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9952a9f56985d8e889cbcdca0e65c74ab0eb065b2c4439e15d7bc7e48c08066-merged.mount: Deactivated successfully.
Feb 28 11:13:00 compute-0 podman[409492]: 2026-02-28 11:13:00.840859252 +0000 UTC m=+1.041948048 container remove a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:13:00 compute-0 systemd[1]: libpod-conmon-a6812bbdce8a9205841705cee8cbfde556887c07f5d1c21afeda31e51eaf24ee.scope: Deactivated successfully.
Feb 28 11:13:00 compute-0 sudo[409413]: pam_unix(sudo:session): session closed for user root
Feb 28 11:13:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:13:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:13:00 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:13:00 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:13:00 compute-0 sudo[409604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:13:00 compute-0 sudo[409604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:13:00 compute-0 sudo[409604]: pam_unix(sudo:session): session closed for user root
Feb 28 11:13:01 compute-0 ceph-mon[76304]: pgmap v3282: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:13:01 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:13:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:03 compute-0 ceph-mon[76304]: pgmap v3283: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.334 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.336 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.362 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:04 compute-0 nova_compute[243452]: 2026-02-28 11:13:04.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:04 compute-0 ceph-mon[76304]: pgmap v3284: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:07 compute-0 ceph-mon[76304]: pgmap v3285: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:09 compute-0 podman[409630]: 2026-02-28 11:13:09.173186051 +0000 UTC m=+0.102487292 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 28 11:13:09 compute-0 podman[409629]: 2026-02-28 11:13:09.182374021 +0000 UTC m=+0.113659068 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 28 11:13:09 compute-0 nova_compute[243452]: 2026-02-28 11:13:09.364 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:09 compute-0 ceph-mon[76304]: pgmap v3286: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:13:10 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1358 writes, 6219 keys, 1358 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s
                                           Interval WAL: 1358 writes, 1358 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     88.7      0.95              0.23        49    0.019       0      0       0.0       0.0
                                             L6      1/0   10.58 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    171.6    146.6      2.93              1.23        48    0.061    323K    25K       0.0       0.0
                                            Sum      1/0   10.58 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1    129.7    132.5      3.88              1.46        97    0.040    323K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6    175.4    179.6      0.33              0.20        10    0.033     44K   2523       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    171.6    146.6      2.93              1.23        48    0.061    323K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     89.1      0.94              0.23        48    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.082, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.50 GB write, 0.09 MB/s write, 0.49 GB read, 0.08 MB/s read, 3.9 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555caadbb8d0#2 capacity: 304.00 MB usage: 58.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000559 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3937,56.25 MB,18.5046%) FilterBlock(98,903.48 KB,0.290233%) IndexBlock(98,1.44 MB,0.474804%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Feb 28 11:13:11 compute-0 ceph-mon[76304]: pgmap v3287: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:12 compute-0 nova_compute[243452]: 2026-02-28 11:13:12.359 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:13 compute-0 nova_compute[243452]: 2026-02-28 11:13:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:13 compute-0 ceph-mon[76304]: pgmap v3288: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:14 compute-0 nova_compute[243452]: 2026-02-28 11:13:14.366 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:15 compute-0 ceph-mon[76304]: pgmap v3289: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.311 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.315 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:13:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.375 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.377 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:16 compute-0 nova_compute[243452]: 2026-02-28 11:13:16.377 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:13:17 compute-0 nova_compute[243452]: 2026-02-28 11:13:17.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:17 compute-0 ceph-mon[76304]: pgmap v3290: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:18 compute-0 nova_compute[243452]: 2026-02-28 11:13:18.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:19 compute-0 nova_compute[243452]: 2026-02-28 11:13:19.368 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:19 compute-0 ceph-mon[76304]: pgmap v3291: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:21 compute-0 ceph-mon[76304]: pgmap v3292: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.348 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.349 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.350 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.350 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:13:23 compute-0 ceph-mon[76304]: pgmap v3293: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:13:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655902496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:13:23 compute-0 nova_compute[243452]: 2026-02-28 11:13:23.911 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:13:24 compute-0 sshd-session[409687]: Invalid user sol from 45.148.10.240 port 37852
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.112 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.114 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.185 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.185 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.204 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:13:24 compute-0 sshd-session[409687]: Connection closed by invalid user sol 45.148.10.240 port 37852 [preauth]
Feb 28 11:13:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.370 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3655902496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:13:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:13:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033646151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.762 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.769 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.789 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.791 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:13:24 compute-0 nova_compute[243452]: 2026-02-28 11:13:24.792 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:13:25 compute-0 ceph-mon[76304]: pgmap v3294: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3033646151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:13:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:27 compute-0 ceph-mon[76304]: pgmap v3295: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:13:29
Feb 28 11:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'vms', 'volumes', 'cephfs.cephfs.data']
Feb 28 11:13:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:13:29 compute-0 nova_compute[243452]: 2026-02-28 11:13:29.373 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:29 compute-0 ceph-mon[76304]: pgmap v3296: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:29 compute-0 nova_compute[243452]: 2026-02-28 11:13:29.792 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:13:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:13:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:13:31 compute-0 ceph-mon[76304]: pgmap v3297: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:33 compute-0 ceph-mon[76304]: pgmap v3298: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:34 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:34 compute-0 nova_compute[243452]: 2026-02-28 11:13:34.372 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:34 compute-0 nova_compute[243452]: 2026-02-28 11:13:34.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:34 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:35 compute-0 ceph-mon[76304]: pgmap v3299: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:36 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:37 compute-0 ceph-mon[76304]: pgmap v3300: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:38 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:39 compute-0 nova_compute[243452]: 2026-02-28 11:13:39.375 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:39 compute-0 nova_compute[243452]: 2026-02-28 11:13:39.376 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:39 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:39 compute-0 ceph-mon[76304]: pgmap v3301: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:40 compute-0 podman[409722]: 2026-02-28 11:13:40.168562064 +0000 UTC m=+0.106008292 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Feb 28 11:13:40 compute-0 podman[409723]: 2026-02-28 11:13:40.169268844 +0000 UTC m=+0.097135960 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:13:40 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:41 compute-0 ceph-mon[76304]: pgmap v3302: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] _maybe_adjust
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:13:41 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:13:42 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:43 compute-0 ceph-mon[76304]: pgmap v3303: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:44 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.378 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.380 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.405 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:44 compute-0 nova_compute[243452]: 2026-02-28 11:13:44.406 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:44 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:45 compute-0 ceph-mon[76304]: pgmap v3304: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 28 11:13:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:13:45 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 28 11:13:45 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:13:46 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 28 11:13:46 compute-0 ceph-mon[76304]: from='client.? 192.168.122.10:0/3613126888' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 28 11:13:47 compute-0 ceph-mon[76304]: pgmap v3305: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:48 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:49 compute-0 nova_compute[243452]: 2026-02-28 11:13:49.407 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:49 compute-0 nova_compute[243452]: 2026-02-28 11:13:49.408 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:49 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:49 compute-0 ceph-mon[76304]: pgmap v3306: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:50 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:50 compute-0 ceph-mon[76304]: pgmap v3307: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:52 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:53 compute-0 ceph-mon[76304]: pgmap v3308: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:53 compute-0 sshd-session[409767]: Accepted publickey for zuul from 192.168.122.10 port 59784 ssh2: ECDSA SHA256:0e783GbusLxW+8649QrtV4mEUjUuluwMjeLbzXNo3z0
Feb 28 11:13:53 compute-0 systemd-logind[815]: New session 58 of user zuul.
Feb 28 11:13:53 compute-0 systemd[1]: Started Session 58 of User zuul.
Feb 28 11:13:53 compute-0 sshd-session[409767]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Feb 28 11:13:53 compute-0 sudo[409771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Feb 28 11:13:53 compute-0 sudo[409771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 28 11:13:54 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.409 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.412 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.440 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:54 compute-0 nova_compute[243452]: 2026-02-28 11:13:54.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:13:54 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:13:55 compute-0 ceph-mon[76304]: pgmap v3309: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:56 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23270 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:13:56 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:56 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23272 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:13:57 compute-0 ceph-mon[76304]: from='client.23270 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:13:57 compute-0 ceph-mon[76304]: pgmap v3310: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:57 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 11:13:57 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471020115' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.922 156681 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:13:57 compute-0 ovn_metadata_agent[156634]: 2026-02-28 11:13:57.924 156681 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:13:58 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:58 compute-0 ceph-mon[76304]: from='client.23272 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:13:58 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1471020115' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:13:59 compute-0 ceph-mon[76304]: pgmap v3311: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:13:59 compute-0 nova_compute[243452]: 2026-02-28 11:13:59.441 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:59 compute-0 nova_compute[243452]: 2026-02-28 11:13:59.445 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:13:59 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:00 compute-0 ovs-vsctl[410053]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:00 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:00 compute-0 ceph-mon[76304]: pgmap v3312: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:01 compute-0 sudo[410175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:14:01 compute-0 sudo[410175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:01 compute-0 sudo[410175]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:01 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 28 11:14:01 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 28 11:14:01 compute-0 sudo[410207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Feb 28 11:14:01 compute-0 sudo[410207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:01 compute-0 virtqemud[242837]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 28 11:14:01 compute-0 podman[410371]: 2026-02-28 11:14:01.632146868 +0000 UTC m=+0.086340794 container exec 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 28 11:14:01 compute-0 podman[410371]: 2026-02-28 11:14:01.709621301 +0000 UTC m=+0.163815187 container exec_died 02b99353aded71f89195a0d52f59f72a4fe79aef017c9a116709542277b3beec (image=quay.io/ceph/ceph:v20, name=ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 28 11:14:01 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: cache status {prefix=cache status} (starting...)
Feb 28 11:14:01 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: client ls {prefix=client ls} (starting...)
Feb 28 11:14:01 compute-0 lvm[410538]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:14:01 compute-0 lvm[410538]: VG ceph_vg2 finished
Feb 28 11:14:01 compute-0 lvm[410553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:14:01 compute-0 lvm[410553]: VG ceph_vg0 finished
Feb 28 11:14:02 compute-0 lvm[410558]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:14:02 compute-0 lvm[410558]: VG ceph_vg1 finished
Feb 28 11:14:02 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:02 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:02 compute-0 sudo[410207]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:14:02 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:02 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:14:02 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:02 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: damage ls {prefix=damage ls} (starting...)
Feb 28 11:14:02 compute-0 sudo[410756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:14:02 compute-0 sudo[410756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:02 compute-0 sudo[410756]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:02 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump loads {prefix=dump loads} (starting...)
Feb 28 11:14:02 compute-0 sudo[410788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Feb 28 11:14:02 compute-0 sudo[410788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:02 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 28 11:14:02 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23278 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:02 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 28 11:14:03 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 28 11:14:03 compute-0 sudo[410788]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526184937' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:03 compute-0 sudo[410901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:14:03 compute-0 sudo[410901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:03 compute-0 sudo[410901]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:03 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23282 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:03 compute-0 sudo[410927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Feb 28 11:14:03 compute-0 sudo[410927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:03 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='client.23276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: pgmap v3313: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/526184937' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:03 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572024949' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.592021803 +0000 UTC m=+0.038835601 container create 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 28 11:14:03 compute-0 systemd[1]: Started libpod-conmon-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope.
Feb 28 11:14:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.573454577 +0000 UTC m=+0.020268405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.675837306 +0000 UTC m=+0.122651114 container init 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.687764203 +0000 UTC m=+0.134577991 container start 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.691028766 +0000 UTC m=+0.137842584 container attach 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 28 11:14:03 compute-0 stoic_cannon[411029]: 167 167
Feb 28 11:14:03 compute-0 systemd[1]: libpod-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope: Deactivated successfully.
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.698200859 +0000 UTC m=+0.145014647 container died 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Feb 28 11:14:03 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23286 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:03 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:14:03 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:03.724+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ffa6d541f37c065a567ddb7f42ff6104dbab56a72ba0ca407558db2efe2565a-merged.mount: Deactivated successfully.
Feb 28 11:14:03 compute-0 podman[411009]: 2026-02-28 11:14:03.75015465 +0000 UTC m=+0.196968438 container remove 20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_cannon, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:14:03 compute-0 systemd[1]: libpod-conmon-20bac391ff05958cabf8f0dc70178c7511df87c9d6758b3d05f1898a0cdd3981.scope: Deactivated successfully.
Feb 28 11:14:03 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: ops {prefix=ops} (starting...)
Feb 28 11:14:03 compute-0 podman[411083]: 2026-02-28 11:14:03.900906907 +0000 UTC m=+0.046738364 container create 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 28 11:14:03 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 28 11:14:03 compute-0 systemd[1]: Started libpod-conmon-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope.
Feb 28 11:14:03 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024216684' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 11:14:03 compute-0 podman[411083]: 2026-02-28 11:14:03.878117282 +0000 UTC m=+0.023948759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:03 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:04 compute-0 podman[411083]: 2026-02-28 11:14:04.016720226 +0000 UTC m=+0.162551733 container init 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 28 11:14:04 compute-0 podman[411083]: 2026-02-28 11:14:04.023285312 +0000 UTC m=+0.169116769 container start 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 28 11:14:04 compute-0 podman[411083]: 2026-02-28 11:14:04.026923095 +0000 UTC m=+0.172754582 container attach 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485113866' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 11:14:04 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:04 compute-0 vibrant_dubinsky[411120]: --> passed data devices: 0 physical, 3 LVM
Feb 28 11:14:04 compute-0 vibrant_dubinsky[411120]: --> All data devices are unavailable
Feb 28 11:14:04 compute-0 ceph-mon[76304]: from='client.23278 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:04 compute-0 ceph-mon[76304]: from='client.23282 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1572024949' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 28 11:14:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2024216684' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 28 11:14:04 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2485113866' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 28 11:14:04 compute-0 nova_compute[243452]: 2026-02-28 11:14:04.443 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:04 compute-0 nova_compute[243452]: 2026-02-28 11:14:04.448 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:04 compute-0 systemd[1]: libpod-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope: Deactivated successfully.
Feb 28 11:14:04 compute-0 podman[411083]: 2026-02-28 11:14:04.463947767 +0000 UTC m=+0.609779224 container died 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.472636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244472676, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1705, "num_deletes": 250, "total_data_size": 2782655, "memory_usage": 2827376, "flush_reason": "Manual Compaction"}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244479279, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1605140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68091, "largest_seqno": 69795, "table_properties": {"data_size": 1599457, "index_size": 2758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14980, "raw_average_key_size": 20, "raw_value_size": 1586801, "raw_average_value_size": 2197, "num_data_blocks": 127, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772277065, "oldest_key_time": 1772277065, "file_creation_time": 1772277244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 6714 microseconds, and 3645 cpu microseconds.
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.479329) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1605140 bytes OK
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.479375) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481136) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481154) EVENT_LOG_v1 {"time_micros": 1772277244481149, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481181) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2775318, prev total WAL file size 2775318, number of live WAL files 2.
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373532' seq:72057594037927935, type:22 .. '6D6772737461740033303033' seq:0, type:0; will stop at (end)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1567KB)], [161(10MB)]
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244481950, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12703463, "oldest_snapshot_seqno": -1}
Feb 28 11:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ace3e5ab140cddc3eea681a3627d7101aee2362a279834718870f28a6df7dd69-merged.mount: Deactivated successfully.
Feb 28 11:14:04 compute-0 podman[411083]: 2026-02-28 11:14:04.505059101 +0000 UTC m=+0.650890558 container remove 9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:14:04 compute-0 systemd[1]: libpod-conmon-9f5b79ff7490dd2d7fec16398a1a274f8a844fcf9acf29ab74714021febbba93.scope: Deactivated successfully.
Feb 28 11:14:04 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: session ls {prefix=session ls} (starting...)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9001 keys, 10477556 bytes, temperature: kUnknown
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244547941, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10477556, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10421921, "index_size": 32034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 234304, "raw_average_key_size": 26, "raw_value_size": 10265858, "raw_average_value_size": 1140, "num_data_blocks": 1243, "num_entries": 9001, "num_filter_entries": 9001, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772277244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.548237) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10477556 bytes
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.549442) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.3 rd, 158.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(14.4) write-amplify(6.5) OK, records in: 9429, records dropped: 428 output_compression: NoCompression
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.549458) EVENT_LOG_v1 {"time_micros": 1772277244549450, "job": 100, "event": "compaction_finished", "compaction_time_micros": 66066, "compaction_time_cpu_micros": 30862, "output_level": 6, "num_output_files": 1, "total_output_size": 10477556, "num_input_records": 9429, "num_output_records": 9001, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244549676, "job": 100, "event": "table_file_deletion", "file_number": 163}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277244550997, "job": 100, "event": "table_file_deletion", "file_number": 161}
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.481696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:04.551128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:04 compute-0 sudo[410927]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332890894' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 11:14:04 compute-0 sudo[411208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:14:04 compute-0 sudo[411208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:04 compute-0 sudo[411208]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:04 compute-0 ceph-mds[96507]: mds.cephfs.compute-0.eyikfq asok_command: status {prefix=status} (starting...)
Feb 28 11:14:04 compute-0 sudo[411235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- lvm list --format json
Feb 28 11:14:04 compute-0 sudo[411235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 11:14:04 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512350153' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:14:04 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.007252788 +0000 UTC m=+0.045968722 container create 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 28 11:14:05 compute-0 systemd[1]: Started libpod-conmon-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope.
Feb 28 11:14:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:04.990773822 +0000 UTC m=+0.029489846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.086324927 +0000 UTC m=+0.125040921 container init 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.093628284 +0000 UTC m=+0.132344218 container start 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.096865965 +0000 UTC m=+0.135581949 container attach 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:14:05 compute-0 thirsty_jackson[411342]: 167 167
Feb 28 11:14:05 compute-0 systemd[1]: libpod-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope: Deactivated successfully.
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.099442298 +0000 UTC m=+0.138158232 container died 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-62afddbcf13730c1bae4f3be31bbae3a697e7c1b0ced1fa7b08d641ac6fd38d1-merged.mount: Deactivated successfully.
Feb 28 11:14:05 compute-0 podman[411320]: 2026-02-28 11:14:05.138629358 +0000 UTC m=+0.177345302 container remove 5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jackson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:14:05 compute-0 systemd[1]: libpod-conmon-5dcac8f193f0d5fc6e1e3ad3f11b9c8c30d704d9060eb15f5e0d7f04ce560180.scope: Deactivated successfully.
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.270581402 +0000 UTC m=+0.037552434 container create 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:14:05 compute-0 systemd[1]: Started libpod-conmon-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope.
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.251893133 +0000 UTC m=+0.018864205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:05 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 11:14:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241801687' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.370226883 +0000 UTC m=+0.137197985 container init 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.377231752 +0000 UTC m=+0.144202784 container start 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.3799875 +0000 UTC m=+0.146958532 container attach 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 28 11:14:05 compute-0 ceph-mon[76304]: from='client.23286 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:05 compute-0 ceph-mon[76304]: pgmap v3314: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3332890894' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 28 11:14:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2512350153' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:14:05 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3241801687' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:14:05 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23300 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:05 compute-0 recursing_cannon[411406]: {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     "0": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "devices": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "/dev/loop3"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             ],
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_name": "ceph_lv0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_size": "21470642176",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9e119a96-18af-4ac6-8479-8955afc510ba,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "name": "ceph_lv0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "tags": {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_uuid": "LDQa8z-PSZv-NiLJ-S2yJ-nQf5-qCJu-YhesHY",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_name": "ceph",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.crush_device_class": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.encrypted": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.objectstore": "bluestore",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_fsid": "9e119a96-18af-4ac6-8479-8955afc510ba",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_id": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.vdo": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.with_tpm": "0"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             },
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "vg_name": "ceph_vg0"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         }
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     ],
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     "1": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "devices": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "/dev/loop4"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             ],
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_name": "ceph_lv1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_size": "21470642176",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8e01b2f2-2262-4a17-91a8-b31f77e16d5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "name": "ceph_lv1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "tags": {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_uuid": "nIpF8m-2TE4-4Xb1-GUqM-8d28-VEN5-enUn9m",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_name": "ceph",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.crush_device_class": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.encrypted": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.objectstore": "bluestore",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_fsid": "8e01b2f2-2262-4a17-91a8-b31f77e16d5f",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_id": "1",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.vdo": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.with_tpm": "0"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             },
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "vg_name": "ceph_vg1"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         }
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     ],
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     "2": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "devices": [
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "/dev/loop5"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             ],
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_name": "ceph_lv2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_size": "21470642176",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8f528268-ea2d-5d7b-af45-49b405fed6de,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cf6beeec-5018-4e1c-b8f4-bf775a604cf3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "lv_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "name": "ceph_lv2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "path": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "tags": {
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.block_uuid": "12XHkV-ltM2-AgCS-Tgov-tLcd-kI6A-4Znai6",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cephx_lockbox_secret": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_fsid": "8f528268-ea2d-5d7b-af45-49b405fed6de",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.cluster_name": "ceph",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.crush_device_class": "",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.encrypted": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.objectstore": "bluestore",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_fsid": "cf6beeec-5018-4e1c-b8f4-bf775a604cf3",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osd_id": "2",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.vdo": "0",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:                 "ceph.with_tpm": "0"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             },
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "type": "block",
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:             "vg_name": "ceph_vg2"
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:         }
Feb 28 11:14:05 compute-0 recursing_cannon[411406]:     ]
Feb 28 11:14:05 compute-0 recursing_cannon[411406]: }
Feb 28 11:14:05 compute-0 systemd[1]: libpod-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope: Deactivated successfully.
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.697058526 +0000 UTC m=+0.464029588 container died 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 28 11:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-49931047dbdb154b3b0159a304092e40c3386a9a38804af8c001d64f4a99e493-merged.mount: Deactivated successfully.
Feb 28 11:14:05 compute-0 podman[411388]: 2026-02-28 11:14:05.750830138 +0000 UTC m=+0.517801170 container remove 103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 28 11:14:05 compute-0 systemd[1]: libpod-conmon-103bd15e650a9f327ff8b1e2a2f9ed2773b462c8f23408c8d994d0e76f2619e8.scope: Deactivated successfully.
Feb 28 11:14:05 compute-0 sudo[411235]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:05 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 11:14:05 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634838279' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:14:05 compute-0 sudo[411477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 28 11:14:05 compute-0 sudo[411477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:05 compute-0 sudo[411477]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:05 compute-0 sudo[411507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8f528268-ea2d-5d7b-af45-49b405fed6de/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8f528268-ea2d-5d7b-af45-49b405fed6de -- raw list --format json
Feb 28 11:14:05 compute-0 sudo[411507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590331172' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.25492937 +0000 UTC m=+0.038689777 container create a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 28 11:14:06 compute-0 systemd[1]: Started libpod-conmon-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope.
Feb 28 11:14:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322899888' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.237600479 +0000 UTC m=+0.021360926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.342866659 +0000 UTC m=+0.126627116 container init a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.351051511 +0000 UTC m=+0.134811938 container start a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.354629262 +0000 UTC m=+0.138389689 container attach a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 28 11:14:06 compute-0 keen_grothendieck[411614]: 167 167
Feb 28 11:14:06 compute-0 systemd[1]: libpod-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope: Deactivated successfully.
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.358053069 +0000 UTC m=+0.141813506 container died a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 28 11:14:06 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c6de90612406524c019bc44d2479d972f7f93b8d52e4c19cd86b5a21be7105e-merged.mount: Deactivated successfully.
Feb 28 11:14:06 compute-0 podman[411596]: 2026-02-28 11:14:06.391040553 +0000 UTC m=+0.174800970 container remove a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_grothendieck, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 28 11:14:06 compute-0 systemd[1]: libpod-conmon-a2ae6174c6f4eb8fb50b1adb3f224c4536ea568cce0b82bc25c8276f0af70448.scope: Deactivated successfully.
Feb 28 11:14:06 compute-0 ceph-mon[76304]: from='client.23296 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:06 compute-0 ceph-mon[76304]: from='client.23300 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1634838279' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:14:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1590331172' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 28 11:14:06 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2322899888' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:14:06 compute-0 podman[411663]: 2026-02-28 11:14:06.52938528 +0000 UTC m=+0.041805305 container create 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812442866' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 11:14:06 compute-0 systemd[1]: Started libpod-conmon-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope.
Feb 28 11:14:06 compute-0 systemd[1]: Started libcrun container.
Feb 28 11:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 28 11:14:06 compute-0 podman[411663]: 2026-02-28 11:14:06.511219165 +0000 UTC m=+0.023639230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 28 11:14:06 compute-0 podman[411663]: 2026-02-28 11:14:06.614622013 +0000 UTC m=+0.127042048 container init 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:14:06 compute-0 podman[411663]: 2026-02-28 11:14:06.62018008 +0000 UTC m=+0.132600105 container start 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 28 11:14:06 compute-0 podman[411663]: 2026-02-28 11:14:06.624363798 +0000 UTC m=+0.136783853 container attach 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 28 11:14:06 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399654361' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 11:14:07 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23312 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:07 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 11:14:07 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:07.044+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 28 11:14:07 compute-0 lvm[411836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 28 11:14:07 compute-0 lvm[411836]: VG ceph_vg0 finished
Feb 28 11:14:07 compute-0 lvm[411837]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 28 11:14:07 compute-0 lvm[411837]: VG ceph_vg1 finished
Feb 28 11:14:07 compute-0 lvm[411839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 28 11:14:07 compute-0 lvm[411839]: VG ceph_vg2 finished
Feb 28 11:14:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 11:14:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772552681' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:14:07 compute-0 awesome_bassi[411683]: {}
Feb 28 11:14:07 compute-0 podman[411663]: 2026-02-28 11:14:07.401295664 +0000 UTC m=+0.913715679 container died 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 28 11:14:07 compute-0 systemd[1]: libpod-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Deactivated successfully.
Feb 28 11:14:07 compute-0 systemd[1]: libpod-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Consumed 1.156s CPU time.
Feb 28 11:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ebfbc94794ddc77f6ed1e07da6099e916c266b83c08f7e06ff3c858e8fd1b26-merged.mount: Deactivated successfully.
Feb 28 11:14:07 compute-0 ceph-mon[76304]: pgmap v3315: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3812442866' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 28 11:14:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1399654361' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 28 11:14:07 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3772552681' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:14:07 compute-0 podman[411663]: 2026-02-28 11:14:07.48065289 +0000 UTC m=+0.993072905 container remove 2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 28 11:14:07 compute-0 systemd[1]: libpod-conmon-2ffd91eea7f1bbc51da97afe59f05b35b89b6ebe67a816fd50988f939cf3cd4f.scope: Deactivated successfully.
Feb 28 11:14:07 compute-0 sudo[411507]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 28 11:14:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 28 11:14:07 compute-0 ceph-mon[76304]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:07 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 28 11:14:07 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407995255' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 11:14:07 compute-0 sudo[411881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 28 11:14:07 compute-0 sudo[411881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Feb 28 11:14:07 compute-0 sudo[411881]: pam_unix(sudo:session): session closed for user root
Feb 28 11:14:07 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 28 11:14:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759175547' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 11:14:08 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23322 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768becfa40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:18.891280+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557685cc41c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x55768bece1c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x5576885be540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768800ddc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:19.891492+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419612 data_alloc: 218103808 data_used: 19256325
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:20.891648+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x4b8a9b1/0x4d1c000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686d4ec40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be8c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:21.891738+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.198194504s of 12.342607498s, submitted: 38
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768c315880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x5576861d3c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:22.891883+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3ec400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3ec400 session 0x557689a016c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576861be540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686c20c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:23.903450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:24.903677+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395284 data_alloc: 218103808 data_used: 18715653
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:25.903920+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307388416 unmapped: 55803904 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:26.904074+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:27.904346+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:28.904538+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.904713+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.904846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.904978+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea108000/0x0/0x4ffc00000, data 0x487298e/0x4a03000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.905124+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.905259+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306978816 unmapped: 56213504 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.905429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408852 data_alloc: 218103808 data_used: 20918277
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.905616+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 306987008 unmapped: 56205312 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.364843369s of 14.394786835s, submitted: 12
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.905762+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310943744 unmapped: 52248576 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.905892+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9930000/0x0/0x4ffc00000, data 0x503b98e/0x51cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.906201+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.906432+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476420 data_alloc: 218103808 data_used: 21910498
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.906605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.906795+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.906973+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312492032 unmapped: 50700288 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.907228+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.907429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476692 data_alloc: 218103808 data_used: 21918690
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.907579+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.907750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.907916+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.908104+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9918000/0x0/0x4ffc00000, data 0x504d98e/0x51de000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.908344+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312500224 unmapped: 50692096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576940e1180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768800d500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0aa000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.993345261s of 14.355683327s, submitted: 90
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385972 data_alloc: 218103808 data_used: 18719714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.908463+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0aa000 session 0x5576886b2c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9974000/0x0/0x4ffc00000, data 0x464e98e/0x47df000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.908632+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.908772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.909009+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.909313+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383976 data_alloc: 218103808 data_used: 18719714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.909528+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.909678+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.909877+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311476224 unmapped: 51716096 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557687b436c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x557686197180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x5576885bfa40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.910113+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768c315a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd000 session 0x557687aa1340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.910361+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.910534+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.910778+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.910954+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.911172+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.911406+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b6f000/0x0/0x4ffc00000, data 0x4e0c98e/0x4f9d000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426312 data_alloc: 218103808 data_used: 18719714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.911632+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307806208 unmapped: 55386112 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.911821+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.912147+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.913551+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.913824+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307814400 unmapped: 55377920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.629673004s of 19.488275528s, submitted: 12
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d5180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869f1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557688a65400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428832 data_alloc: 218103808 data_used: 18719714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.913961+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.914181+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307961856 unmapped: 55230464 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.914333+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.914520+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.914730+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.914893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.915053+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.915262+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.915410+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b4b000/0x0/0x4ffc00000, data 0x4e3098e/0x4fc1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.915553+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3477216 data_alloc: 218103808 data_used: 26889186
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.915689+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.915907+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.916294+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.102824211s of 13.107093811s, submitted: 1
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.916465+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.916750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.916925+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.917148+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.917288+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.917522+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.917699+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.917890+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.918182+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.918314+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.918467+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.918645+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.918830+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3485238 data_alloc: 218103808 data_used: 26991586
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.918968+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.919113+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b47000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.839479446s of 14.852085114s, submitted: 2
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310992896 unmapped: 52199424 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768afca380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x5576886b3880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886e16c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x5576869f3340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576926cd400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576926cd400 session 0x557687aa0a80
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.919297+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.919449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.919610+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501258 data_alloc: 218103808 data_used: 26991586
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.919770+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.919977+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.920126+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e99aa000/0x0/0x4ffc00000, data 0x4fd198e/0x5162000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.920412+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311001088 unmapped: 52191232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576887228c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.920597+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504923 data_alloc: 218103808 data_used: 26992098
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311304192 unmapped: 51888128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.920726+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311910400 unmapped: 51281920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.920862+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.921002+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.921144+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.921293+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.921439+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.921589+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.921751+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311984128 unmapped: 51208192 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.921965+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9985000/0x0/0x4ffc00000, data 0x4ff59b1/0x5187000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 311992320 unmapped: 51200000 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.922155+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3508763 data_alloc: 218103808 data_used: 27546594
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.922341+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312000512 unmapped: 51191808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.325185776s of 19.358276367s, submitted: 15
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.922518+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313982976 unmapped: 49209344 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.922687+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e934f000/0x0/0x4ffc00000, data 0x56249b1/0x57b6000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.922854+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.923010+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3563869 data_alloc: 218103808 data_used: 27722722
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.923153+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.923357+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.923514+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9314000/0x0/0x4ffc00000, data 0x56579b1/0x57e9000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.923709+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.923976+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3558653 data_alloc: 218103808 data_used: 27722722
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.924165+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.924378+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314073088 unmapped: 49119232 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x557686111500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c4000 session 0x55768457b340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.924520+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165890694s of 11.501284599s, submitted: 68
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x557686d01c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.924731+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b46000/0x0/0x4ffc00000, data 0x4e3498e/0x4fc5000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.924881+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3495179 data_alloc: 218103808 data_used: 26991586
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.925039+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312295424 unmapped: 50896896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869f1c00 session 0x55768d090000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557688a65400 session 0x55768becfdc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.925208+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768d0abc00 session 0x5576886576c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.925445+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.925709+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.925916+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.926103+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.926362+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.926589+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.926784+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.926941+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.927295+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.927540+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.927791+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.927970+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.928168+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 53837824 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.928368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.928527+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.928616+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.928931+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.929137+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.929275+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.929407+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.929581+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 53829632 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.929931+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.930114+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.930341+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.930473+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.930844+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.931115+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.931345+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.931532+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309370880 unmapped: 53821440 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.931725+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.931877+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.933268+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 53813248 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.933786+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398397 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768bc58380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c2400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768b3c2400 session 0x55768a539dc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc3f880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x55768a5381c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 55394304 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.512207031s of 37.553077698s, submitted: 22
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768d75ddc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d5a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b000 session 0x5576861bea80
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576880f7340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.933919+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576880f7a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.934154+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.934292+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.934445+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.934567+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441190 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557686d00000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.934764+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x55768a538380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557688a11340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886b2fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.934901+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557689b7b800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.935030+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308846592 unmapped: 54345728 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.935208+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.935353+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.935503+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.935722+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 36K writes, 142K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.71 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3550 writes, 14K keys, 3550 commit groups, 1.0 writes per commit group, ingest: 18.05 MB, 0.03 MB/s
                                           Interval WAL: 3550 writes, 1394 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.935883+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.936053+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.936239+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469990 data_alloc: 218103808 data_used: 23552959
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.936370+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9d85000/0x0/0x4ffc00000, data 0x4bf59f0/0x4d87000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.936556+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 308928512 unmapped: 54263808 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.936729+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.287157059s of 17.387201309s, submitted: 32
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312320000 unmapped: 50872320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.956054+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.956245+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519878 data_alloc: 218103808 data_used: 23937983
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.956411+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.956571+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f4000/0x0/0x4ffc00000, data 0x53859f0/0x5517000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.956788+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312377344 unmapped: 50814976 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.956964+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.957161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.957325+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.957559+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312385536 unmapped: 50806784 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.957785+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.958043+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f3000/0x0/0x4ffc00000, data 0x53879f0/0x5519000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.958242+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518710 data_alloc: 218103808 data_used: 23942079
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312393728 unmapped: 50798592 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.958436+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.962099075s of 13.174759865s, submitted: 92
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x55768becf340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576886d41c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686cadc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e71c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557687e71c00 session 0x55768bc58000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4700
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.958650+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.958814+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9341000/0x0/0x4ffc00000, data 0x56399f0/0x57cb000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.958964+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.959194+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a10fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538984 data_alloc: 218103808 data_used: 23942079
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.959413+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x5576861a88c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x557688a11500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x55768becf180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.960131+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312401920 unmapped: 50790400 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.960306+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312688640 unmapped: 50503680 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.960532+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.960682+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.960862+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.961002+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.961134+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312696832 unmapped: 50495488 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.961237+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9340000/0x0/0x4ffc00000, data 0x5639a00/0x57cc000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.961382+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3557002 data_alloc: 218103808 data_used: 26660799
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.961501+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.961653+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 312705024 unmapped: 50487296 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.961799+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.755399704s of 16.796745300s, submitted: 7
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314982400 unmapped: 48209920 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.961998+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e909c000/0x0/0x4ffc00000, data 0x58dda00/0x5a70000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314310656 unmapped: 48881664 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.962139+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580546 data_alloc: 218103808 data_used: 26796991
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.962245+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314318848 unmapped: 48873472 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.962389+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.962528+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.963100+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.963267+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580562 data_alloc: 218103808 data_used: 26796991
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.963629+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.963801+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314335232 unmapped: 48857088 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.963982+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9094000/0x0/0x4ffc00000, data 0x58e5a00/0x5a78000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.964253+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.964430+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557686c20fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768a539c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580082 data_alloc: 218103808 data_used: 26858431
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314343424 unmapped: 48848896 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200057983s of 12.832662582s, submitted: 39
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.964544+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886f8000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x5388a00/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314359808 unmapped: 48832512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.964719+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314408960 unmapped: 48783360 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.964861+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.965050+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.965342+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526151 data_alloc: 218103808 data_used: 24003519
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314417152 unmapped: 48775168 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.965535+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686197c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557689b7b800 session 0x557688656c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313155584 unmapped: 50036736 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.965718+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e95f1000/0x0/0x4ffc00000, data 0x53899f0/0x551b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557689a016c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.965876+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.966173+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.966341+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.966572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.966750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.966906+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.967111+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.967279+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313163776 unmapped: 50028544 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.967443+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.967616+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.967777+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.967986+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.968192+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.968385+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.968559+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.968707+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313171968 unmapped: 50020352 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.968876+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.969118+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.969295+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.969525+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.969714+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.969937+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.970159+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.970423+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313180160 unmapped: 50012160 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.970600+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.970813+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.971008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.971237+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414931 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313196544 unmapped: 49995776 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.971411+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea338000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.971605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.971782+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.972019+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576940e01c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x55768c315a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688723180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313212928 unmapped: 49979392 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c5b1800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.972358+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c5b1800 session 0x557687b43340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.065399170s of 39.311141968s, submitted: 108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768cc4afc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576886b2540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557687aa1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886e1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557694ee2000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557694ee2000 session 0x5576885be700
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.972522+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 49856512 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.972727+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea31b000/0x0/0x4ffc00000, data 0x466098e/0x47f1000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.972869+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313344000 unmapped: 49848320 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.973060+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.973238+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x5576886d4000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417481 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313352192 unmapped: 49840128 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.973412+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557688a108c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686cac8c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 49864704 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.973540+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x557688656a80
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.973650+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.973843+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.974045+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.974324+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313630720 unmapped: 49561600 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.974484+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.974661+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.974790+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.974918+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3422291 data_alloc: 218103808 data_used: 18761565
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.975086+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313638912 unmapped: 49553408 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea2f6000/0x0/0x4ffc00000, data 0x468499e/0x4816000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.975222+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313647104 unmapped: 49545216 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.975368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.853551865s of 17.863443375s, submitted: 3
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314327040 unmapped: 48865280 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.975526+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313663488 unmapped: 49528832 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1fc000/0x0/0x4ffc00000, data 0x477e99e/0x4910000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.975682+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442903 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.975857+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.976021+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.976230+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.976385+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.976557+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.976748+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.976968+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.977108+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.977270+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.977468+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442919 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.977648+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.977867+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313671680 unmapped: 49520640 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.978002+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.978232+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313679872 unmapped: 49512448 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576868d4800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576868d4800 session 0x55768bece1c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557686111500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576869f3180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576886e16c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.978426+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.000139236s of 17.048984528s, submitted: 20
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 314818560 unmapped: 48373760 heap: 363192320 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886b2e00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576886b2c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768da43400 session 0x5576885bfa40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768c315180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x5576861d3c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.978594+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.978779+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.978885+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.979194+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.979315+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313778176 unmapped: 51027968 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486925 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.979479+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x557686d4ea80
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.979609+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313786368 unmapped: 51019776 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.979721+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313761792 unmapped: 51044352 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.979953+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.980145+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.980305+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.980480+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.980682+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.980878+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.981000+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9b05000/0x0/0x4ffc00000, data 0x4e7599e/0x5007000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3521229 data_alloc: 218103808 data_used: 24664925
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.981155+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.981320+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315269120 unmapped: 49537024 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.758279800s of 17.839538574s, submitted: 13
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:18.981437+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316399616 unmapped: 48406528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.981604+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318586880 unmapped: 46219264 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.981771+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3559463 data_alloc: 218103808 data_used: 25188701
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.981997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9747000/0x0/0x4ffc00000, data 0x522d99e/0x53bf000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.982198+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.982387+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.982607+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.982812+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3556031 data_alloc: 218103808 data_used: 25188701
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.982998+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.983172+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.983316+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.983466+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318808064 unmapped: 45998080 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x5576880f7340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.983608+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.849447250s of 12.123903275s, submitted: 71
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576861be8c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e972c000/0x0/0x4ffc00000, data 0x524e99e/0x53e0000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.983752+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3452972 data_alloc: 218103808 data_used: 18856797
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.983878+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.984023+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x55768d090380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x5576869f3340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.984329+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea1f1000/0x0/0x4ffc00000, data 0x478999e/0x491b000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688656c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.984509+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.984653+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.984806+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.985040+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.985302+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.985535+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.985699+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.985858+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.986043+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.986361+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.986626+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.986812+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.987132+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.987402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.987684+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.987974+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.988161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.988329+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.988557+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316153856 unmapped: 48652288 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.988794+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.988986+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.989215+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316162048 unmapped: 48644096 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.989372+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.989546+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.989718+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.989885+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.990057+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316170240 unmapped: 48635904 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.990264+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.990451+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.990636+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.990778+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.990943+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.991130+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.991284+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.991410+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.991650+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.991846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432900 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 316178432 unmapped: 48627712 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a108c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686c21880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688a116c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688722c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768f50a000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ea339000/0x0/0x4ffc00000, data 0x464298e/0x47d3000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.992034+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.027751923s of 42.075328827s, submitted: 18
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768f50a000 session 0x5576886d5340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557688a10000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x557686cacfc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x557688657a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x557688a10540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.992131+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.992329+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.992484+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.992639+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.992797+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313327616 unmapped: 51478528 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.992941+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.993189+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.993376+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db6000/0x0/0x4ffc00000, data 0x4bc598e/0x4d56000, compress 0x0/0x0/0x0, omap 0x4ebc5, meta 0x110a143b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686156800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686156800 session 0x557686d00fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x55768d090700
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.993567+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469604 data_alloc: 218103808 data_used: 18727773
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557686155400 session 0x55768800d6c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x5576869e1400 session 0x5576886e0c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.993714+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672cc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.067780495s of 10.190921783s, submitted: 22
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.993827+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.994028+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.994168+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.994323+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.994457+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.994687+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.994901+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.995121+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.995331+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502024 data_alloc: 218103808 data_used: 23294813
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 313335808 unmapped: 51470336 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.995482+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.957121849s of 10.000660896s, submitted: 20
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9db4000/0x0/0x4ffc00000, data 0x4bc59c1/0x4d58000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [0,1,0,0,0,8])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318406656 unmapped: 46399488 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.995722+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.995863+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.996001+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.996141+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3599328 data_alloc: 218103808 data_used: 25293661
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.996297+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.996500+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e9030000/0x0/0x4ffc00000, data 0x59489c1/0x5adb000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.996739+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.996910+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.997153+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3595064 data_alloc: 218103808 data_used: 25293661
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread fragmentation_score=0.003738 took=0.000053s
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318480384 unmapped: 46325760 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.997319+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949523926s of 10.245022774s, submitted: 88
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.997498+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.997733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.997926+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768c073800 session 0x55768cc3fc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x55768672cc00 session 0x5576861a81c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.998153+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3596956 data_alloc: 218103808 data_used: 25334621
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 ms_handle_reset con 0x557685ffa800 session 0x557685cc4c40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318488576 unmapped: 46317568 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.998340+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557686155400 session 0x557687b42700
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 heartbeat osd_stat(store_statfs(0x4e902a000/0x0/0x4ffc00000, data 0x594e9c1/0x5ae1000, compress 0x0/0x0/0x0, omap 0x4ec6c, meta 0x110a1394), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768c073800 session 0x55768cc3f880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x5576869e1400 session 0x55768c3148c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768da43400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 323772416 unmapped: 41033728 heap: 364806144 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.998474+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x55768da43400 session 0x55768800ddc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 ms_handle_reset con 0x557685ffa800 session 0x55768457b340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 43384832 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.998657+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 285 handle_osd_map epochs [286,287], i have 285, src has [1,287]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557686155400 session 0x557686cac8c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 43343872 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.998790+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x5576869e1400 session 0x5576886d4000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768c073800 session 0x5576886f8e00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0e9400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x55768d0e9400 session 0x5576861116c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.999005+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3761780 data_alloc: 234881024 data_used: 35955647
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e7cd3000/0x0/0x4ffc00000, data 0x6c9fd67/0x6e37000, compress 0x0/0x0/0x0, omap 0x4f381, meta 0x110a0c7f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 43327488 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.999157+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 43261952 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.999347+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.589632988s of 10.994457245s, submitted: 34
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 ms_handle_reset con 0x557685ffa800 session 0x557687aa1340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.999588+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.999772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.000029+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574579 data_alloc: 218103808 data_used: 18723775
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.000275+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.000443+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 287 handle_osd_map epochs [287,288], i have 288, src has [1,288]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fe2000/0x0/0x4ffc00000, data 0x5993d34/0x5b29000, compress 0x0/0x0/0x0, omap 0x4f41d, meta 0x110a0be3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.000607+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e8fde000/0x0/0x4ffc00000, data 0x59957b3/0x5b2c000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.000744+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318930944 unmapped: 61964288 heap: 380895232 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576885bfa40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.000882+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x557686196a80
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c035800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c035800 session 0x5576940e1340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578137 data_alloc: 218103808 data_used: 18727934
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x5576886d5a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x5576886d41c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155000 session 0x55768a5388c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x55768d090e00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 348602368 unmapped: 47890432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.001041+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557686155400 session 0x557689a00fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.001167+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x5576869e1400 session 0x55768a538380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x55768c073800 session 0x55768cc4b180
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 58834944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.001346+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687e8bc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557687e8bc00 session 0x557686d01880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.653596878s of 11.167081833s, submitted: 54
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 ms_handle_reset con 0x557685ffa800 session 0x557685cc5dc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 heartbeat osd_stat(store_statfs(0x4e73eb000/0x0/0x4ffc00000, data 0x75887e6/0x7721000, compress 0x0/0x0/0x0, omap 0x4f52c, meta 0x110a0ad4), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.001476+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 329965568 unmapped: 66527232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.001596+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789035 data_alloc: 234881024 data_used: 29690303
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 289 ms_handle_reset con 0x55768c073800 session 0x55768d090380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.001768+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.001919+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.002106+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.002241+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 289 heartbeat osd_stat(store_statfs(0x4e8733000/0x0/0x4ffc00000, data 0x623e374/0x63d7000, compress 0x0/0x0/0x0, omap 0x4f643, meta 0x110a09bd), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.002360+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687671 data_alloc: 234881024 data_used: 21415871
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.002522+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.002675+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.002875+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.003006+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8730000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318144512 unmapped: 78348288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.003177+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.615905762s of 11.692889214s, submitted: 45
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3699421 data_alloc: 234881024 data_used: 22761407
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318464000 unmapped: 78028800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.003350+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.003522+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.003680+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.003816+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.003989+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318570496 unmapped: 77922304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3716445 data_alloc: 234881024 data_used: 24364991
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.004270+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.004429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 318816256 unmapped: 77676544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.004694+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.004857+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 319045632 unmapped: 77447168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686110000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x557688a108c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768672c000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.004977+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768672c000 session 0x5576886f8000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e8732000/0x0/0x4ffc00000, data 0x623fdf3/0x63da000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.005123+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.005283+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.005456+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.005612+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.005742+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.005887+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.006134+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.006414+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.006587+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.006739+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.006884+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.007033+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.007193+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.007442+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.007617+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.007963+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.008126+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.008344+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.008513+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.008730+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.008934+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.009197+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.009352+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.009542+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.009714+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.009947+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.010150+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.010299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.010464+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.010681+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.010809+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.010956+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.011254+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.011405+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.011547+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468586 data_alloc: 218103808 data_used: 9614271
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.011702+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea326000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4fcb3, meta 0x110a034d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.011857+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309305344 unmapped: 87187456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.560314178s of 47.608993530s, submitted: 31
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.012113+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x5576886e1c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310362112 unmapped: 86130688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x557686cac1c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768bece380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x5576880f7a40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768bb5f000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: get_auth_request con 0x55768672c000 auth_method 0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576886f8000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.012243+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557689a01c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576861d3c00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.012356+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310370304 unmapped: 86122496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x5576869e1400 session 0x55768cc3f500
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768c073800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x55768c073800 session 0x557687aa1340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523018 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.012499+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310378496 unmapped: 86114304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.012740+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4e9a78000/0x0/0x4ffc00000, data 0x4efadd0/0x5094000, compress 0x0/0x0/0x0, omap 0x4fe69, meta 0x110a0197), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.012876+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.013034+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.013128+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 86106112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557685ffa800 session 0x557686d00380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686155400 session 0x5576886b2fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478714 data_alloc: 218103808 data_used: 9620380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea302000/0x0/0x4ffc00000, data 0x4670dd0/0x480a000, compress 0x0/0x0/0x0, omap 0x4fed4, meta 0x110a012c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 ms_handle_reset con 0x557686258000 session 0x5576861116c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.013250+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.013405+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.013647+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.013850+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.014040+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.014236+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.014429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.014602+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.014815+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.014960+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.015185+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.015560+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.015829+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.016027+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.016127+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.016304+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.016498+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.016633+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.016802+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.016942+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.017143+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.017371+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.018296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.018463+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.018605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.018810+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.019659+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.019946+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.020249+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.020604+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.020963+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309149696 unmapped: 87343104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.021601+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.864528656s of 39.085803986s, submitted: 59
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.022221+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.022733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.023231+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.023668+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.024040+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.024278+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.024493+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.024781+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.024971+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.025169+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.025388+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.025581+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.025856+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.026036+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.026159+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.026384+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.026631+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.027323+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.027554+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.027749+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.027967+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.028174+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309198848 unmapped: 87293952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.028314+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.028484+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.028691+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.028853+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.029010+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.029183+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475677 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.029350+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ea327000/0x0/0x4ffc00000, data 0x464cdc0/0x47e5000, compress 0x0/0x0/0x0, omap 0x4ff2d, meta 0x110a00d3), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.029541+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309207040 unmapped: 87285760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.823881149s of 30.865018845s, submitted: 24
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.029717+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 ms_handle_reset con 0x5576869e1400 session 0x55768cc4a540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.029895+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.030056+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.030264+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.030387+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.030536+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.030777+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.030954+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.031191+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3317485 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309272576 unmapped: 87220224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ec323000/0x0/0x4ffc00000, data 0x264e98a/0x27e7000, compress 0x0/0x0/0x0, omap 0x505c8, meta 0x1109fa38), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.031341+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.031529+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.031698+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.031866+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.032023+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.032218+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.032406+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.032597+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.032743+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.032909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.033101+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.033304+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.033473+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.033619+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.033802+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309321728 unmapped: 87171072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.034013+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.034206+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.034431+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.034667+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.034865+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.035051+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.035362+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.035650+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309329920 unmapped: 87162880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.035862+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.036059+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.036224+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.036419+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309346304 unmapped: 87146496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.036572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.036712+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.036855+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.036990+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.037150+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.037279+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309354496 unmapped: 87138304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.037477+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.037719+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.037848+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.038024+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.038200+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.038397+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309362688 unmapped: 87130112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.038537+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.038712+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.038908+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.039089+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.039299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.039463+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.039668+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.039876+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309379072 unmapped: 87113728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.040138+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.040306+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.040468+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.040654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.040869+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.041101+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.041235+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.041474+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309387264 unmapped: 87105536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.041702+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.041885+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.042145+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:22.042433+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.042624+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.042837+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309395456 unmapped: 87097344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.043148+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.043596+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309403648 unmapped: 87089152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.043861+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.044164+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.044350+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309411840 unmapped: 87080960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.044523+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.044681+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.044843+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.044997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.045243+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.045477+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.045655+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.045832+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309420032 unmapped: 87072768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.046086+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.046260+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.046505+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.046745+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.046935+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309428224 unmapped: 87064576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.047191+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.047470+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.047821+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.048027+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.048182+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.048460+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.048700+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309436416 unmapped: 87056384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.048920+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.049056+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.049280+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.049440+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309452800 unmapped: 87040000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.049580+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.049781+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.049949+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.050098+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='client.23312 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' 
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2407995255' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='client.23318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/759175547' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: from='client.23322 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: pgmap v3316: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.050254+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.050416+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.050571+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.050807+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.050997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309460992 unmapped: 87031808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.051142+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.051308+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.051512+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.051721+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309469184 unmapped: 87023616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.051929+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.052159+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.052358+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.052554+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.052751+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.052978+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.053196+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 309477376 unmapped: 87015424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320259 data_alloc: 218103808 data_used: 9618332
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768d0abc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 ms_handle_reset con 0x55768d0abc00 session 0x557686d4f880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.053436+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315342848 unmapped: 81149952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.053793+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.054047+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315351040 unmapped: 81141760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.054292+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.054479+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327555 data_alloc: 234881024 data_used: 17875868
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.054754+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.055680+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.056046+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ec320000/0x0/0x4ffc00000, data 0x2650409/0x27ea000, compress 0x0/0x0/0x0, omap 0x506d5, meta 0x1109f92b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.056333+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315359232 unmapped: 81133568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.056616+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 128.169219971s of 128.219329834s, submitted: 33
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 315392000 unmapped: 81100800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 293 ms_handle_reset con 0x557685ffa800 session 0x557688a10540
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217201 data_alloc: 218103808 data_used: 4309916
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.056848+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 293 heartbeat osd_stat(store_statfs(0x4ed31f000/0x0/0x4ffc00000, data 0x1651ff9/0x17ed000, compress 0x0/0x0/0x0, omap 0x50d73, meta 0x1109f28d), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.057467+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.057909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 307396608 unmapped: 89096192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 294 ms_handle_reset con 0x557686155400 session 0x5576861a81c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.058294+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.059317+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.059620+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.060591+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.061096+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.061460+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.061654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099618 data_alloc: 218103808 data_used: 181132
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ee78b000/0x0/0x4ffc00000, data 0x1e3bb6/0x37e000, compress 0x0/0x0/0x0, omap 0x50e89, meta 0x1109f177), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.061826+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.062032+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 94396416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.580028534s of 12.689705849s, submitted: 49
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.062457+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.062692+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.063132+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 93347840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3102328 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.063283+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 93306880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ee78a000/0x0/0x4ffc00000, data 0x1e5658/0x382000, compress 0x0/0x0/0x0, omap 0x51005, meta 0x1109effb), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.063438+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 93290496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.063870+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.064019+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 ms_handle_reset con 0x557686258000 session 0x55768d75c380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.064323+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.064499+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.064736+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.064921+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.065115+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.065254+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.065488+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.065690+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.065965+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.066116+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.066410+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.066683+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.066905+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.067114+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.067322+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.067524+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.067735+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.067928+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.068168+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.068394+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.068616+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.068833+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.069230+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.070260+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.070494+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.070834+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.071006+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.071206+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.071427+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.071714+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.071890+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.072115+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.072356+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.072536+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.072733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.072940+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.073186+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.073382+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.073552+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.073751+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.074010+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.074179+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.074359+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.074585+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.074734+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.074859+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.075031+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.075169+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 93265920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.075360+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.075532+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.075686+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.075844+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 93241344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.076050+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.076273+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.076444+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.076591+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.076788+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.077027+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.077217+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.077402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 93233152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.077640+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.077796+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.078153+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.078303+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 93224960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.078521+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.078757+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.078922+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.079122+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 93216768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.079261+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.079438+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.079569+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.079710+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.079895+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.080226+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.080441+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.080631+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 93208576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1533 writes, 6281 keys, 1533 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s
                                           Interval WAL: 1533 writes, 657 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.080880+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.081157+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.081315+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.081491+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.081646+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.081890+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.082178+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.082445+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 93192192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.082663+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.082811+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.082976+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.083144+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 93184000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.083352+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.083577+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.083779+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.083984+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.084192+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.084329+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.084574+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 93175808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.084822+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.085161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.085806+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.085990+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.124728+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.124893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.125178+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 93159424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.125496+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.125736+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.125919+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 heartbeat osd_stat(store_statfs(0x4eaf85000/0x0/0x4ffc00000, data 0x39e71f7/0x3b85000, compress 0x0/0x0/0x0, omap 0x51639, meta 0x1109e9c7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.126132+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.126368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397995 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.126538+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 114.041397095s of 114.333473206s, submitted: 46
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 93151232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.126763+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302129152 unmapped: 94363648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 297 ms_handle_reset con 0x5576869e1400 session 0x55768bc58380
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.126926+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 94339072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x55768b3c4400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 297 heartbeat osd_stat(store_statfs(0x4eaf80000/0x0/0x4ffc00000, data 0x39e8dc6/0x3b8a000, compress 0x0/0x0/0x0, omap 0x51b9b, meta 0x1109e465), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.127132+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 94314496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 ms_handle_reset con 0x55768b3c4400 session 0x5576886b28c0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.127307+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.127535+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.127717+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.127862+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.128026+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302194688 unmapped: 94298112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.128146+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434144 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.128294+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.128391+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.128531+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.128686+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302211072 unmapped: 94281728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.552754402s of 13.607363701s, submitted: 19
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.128844+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0a000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.129042+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.129289+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302219264 unmapped: 94273536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.129429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.129594+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302227456 unmapped: 94265344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.129831+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432744 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302268416 unmapped: 94224384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.130057+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.130297+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.130469+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.130666+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.130813+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.130975+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.131164+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.131347+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.131533+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.131675+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.131834+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.132118+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 94183424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.132388+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 heartbeat osd_stat(store_statfs(0x4eab0e000/0x0/0x4ffc00000, data 0x3e5a972/0x3ffe000, compress 0x0/0x0/0x0, omap 0x51cb1, meta 0x1109e34f), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.132589+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.132772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3432672 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.133008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557685ffa800
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.988111496s of 22.149505615s, submitted: 90
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.133221+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 94175232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 298 handle_osd_map epochs [298,299], i have 299, src has [1,299]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 299 ms_handle_reset con 0x557685ffa800 session 0x5576886f8fc0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.133476+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4eab09000/0x0/0x4ffc00000, data 0x3e5c562/0x4001000, compress 0x0/0x0/0x0, omap 0x51d59, meta 0x1109e2a7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.133665+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.133883+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.134041+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 94126080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.134247+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.134450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ee309000/0x0/0x4ffc00000, data 0x65c53c/0x800000, compress 0x0/0x0/0x0, omap 0x51dc7, meta 0x1109e239), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.134682+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 94117888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.134879+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153870 data_alloc: 218103808 data_used: 185193
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.135169+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.135352+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.135503+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.135640+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 94109696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.135789+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.136011+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 94101504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.136255+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.136405+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.136572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.136721+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.136997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.137181+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.137333+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.137550+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.137773+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.138239+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.138398+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.138565+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.138704+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.138916+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.139130+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.139273+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299458560 unmapped: 97034240 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.139423+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.139604+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.139792+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.139972+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.140287+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.140497+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299466752 unmapped: 97026048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.140648+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.140810+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.141016+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.141156+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.141282+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.141440+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.141599+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.141794+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299474944 unmapped: 97017856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.141973+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.142145+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.142321+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.142461+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.142665+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.142846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.143029+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.143251+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299483136 unmapped: 97009664 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.143408+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.143574+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.143785+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.143925+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.144116+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.144267+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.144412+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.144580+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299499520 unmapped: 96993280 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.144733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.144863+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.145013+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299507712 unmapped: 96985088 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.145171+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.145372+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.145573+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.145719+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.145867+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.146098+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.146493+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299515904 unmapped: 96976896 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.146693+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299532288 unmapped: 96960512 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.146893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.147102+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.147348+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.147577+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.147757+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.147922+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.148198+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.148425+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299540480 unmapped: 96952320 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.148589+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.148772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.148994+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.149192+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.149390+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299548672 unmapped: 96944128 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.149563+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.149767+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.149959+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.150189+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.150449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.150659+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.150886+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.151123+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 96927744 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.151340+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.151526+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.151693+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.151870+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.152105+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.152387+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.152585+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.152750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 96919552 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.152858+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.153013+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 96911360 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.153162+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.153342+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 96903168 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.153554+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.153742+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.153891+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.154035+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 96894976 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.154312+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.154461+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 96886784 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.154646+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.154806+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.154990+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.155208+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.155436+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.155624+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299614208 unmapped: 96878592 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.155813+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.155976+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.156104+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.156266+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.156437+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.156605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.156816+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299630592 unmapped: 96862208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.156967+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.157123+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.157321+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.157502+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.157685+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 96854016 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.157894+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.158111+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.158273+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.158433+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 96845824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.158568+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.158755+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.158985+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299663360 unmapped: 96829440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.159214+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.159376+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.159521+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.159723+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299671552 unmapped: 96821248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.159953+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299679744 unmapped: 96813056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.160232+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.160476+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.160621+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.160791+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299687936 unmapped: 96804864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.161002+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.161229+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.161458+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.161691+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.161929+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.162177+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.162358+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299696128 unmapped: 96796672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.162552+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.162804+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.163055+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.163280+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.163579+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299704320 unmapped: 96788480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.163750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.164169+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.164386+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299712512 unmapped: 96780288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.164567+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.164739+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.164893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.165114+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299720704 unmapped: 96772096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.165349+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299728896 unmapped: 96763904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.165529+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.165690+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.165869+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.166116+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.166402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.166572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299737088 unmapped: 96755712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.166754+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.166920+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299745280 unmapped: 96747520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.167126+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.167296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.167471+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.167711+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.167914+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.168159+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.168450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.168654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299769856 unmapped: 96722944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.168857+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.169037+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.169170+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.169314+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299778048 unmapped: 96714752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.169516+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.169687+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.169845+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.170489+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.170768+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.170962+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.171209+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.171434+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.171613+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:20.171748+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.172259+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.172472+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 96706560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.172662+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.172828+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.173176+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.173498+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.173700+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.173961+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.174121+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.174410+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 96690176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.174810+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.175150+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.175333+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.175550+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.175746+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.175876+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.176019+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.176240+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299810816 unmapped: 96681984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.176450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.176677+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.176858+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.176998+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299835392 unmapped: 96657408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.177220+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.177399+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.177657+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.177828+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299851776 unmapped: 96641024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.177947+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.178174+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.178376+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.178572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299859968 unmapped: 96632832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.178768+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.178930+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.179112+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.179297+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299868160 unmapped: 96624640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.179492+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299876352 unmapped: 96616448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.179658+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.179887+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.180110+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.180262+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.180410+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299884544 unmapped: 96608256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.180658+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.180838+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299892736 unmapped: 96600064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.180943+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.181166+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.181361+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.181514+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.181700+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299900928 unmapped: 96591872 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.181837+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.182008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.182175+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 96583680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.182373+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.182578+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.182715+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299917312 unmapped: 96575488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.182913+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.183133+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.183294+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299925504 unmapped: 96567296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.183447+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.184132+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.184287+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.184440+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 96559104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.184606+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.185269+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 96550912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.185724+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.186232+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.186834+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.187047+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 96542720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.187527+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.187767+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.188196+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.188546+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.188848+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.189200+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.189449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 96534528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.189739+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 96526336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.190209+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.190509+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.190791+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 96509952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.191152+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.191332+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.191621+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.191824+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.192180+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.192389+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.192539+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.192905+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.193184+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.193315+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.193528+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 96501760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.193746+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.193911+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 96493568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.194143+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.194286+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.194428+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 96485376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.194577+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.194776+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.194964+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.195151+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.195345+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 96477184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.196037+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.196209+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.196450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 96468992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.196705+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.196981+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.197239+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.197490+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.197656+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 96460800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.197835+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.198198+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.198450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.198707+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 96444416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.198967+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.199137+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.199537+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.199720+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 96436224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.199923+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.200107+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.200277+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.200432+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 96428032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.200599+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.200785+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 96419840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.200947+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 96411648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.201142+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.201299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.201477+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 96403456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.201754+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300097536 unmapped: 96395264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.201970+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.202223+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 96387072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.202383+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.202656+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.203054+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.203355+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.203529+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 96378880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.203718+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.203913+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.204245+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 96370688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.204485+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.204713+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.205017+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 96362496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.205242+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.205507+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.205783+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.206045+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.206385+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.206604+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.206873+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.207201+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 96346112 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.207403+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.207654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.207877+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.208209+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.208426+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.208697+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.209047+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.209485+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 96337920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.209804+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.210052+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.210349+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 96329728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.210560+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.210768+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.210977+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.211259+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.211496+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 96321536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.211663+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.211811+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.212008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.212206+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.212441+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.212651+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee307000/0x0/0x4ffc00000, data 0x65dfbb/0x803000, compress 0x0/0x0/0x0, omap 0x52474, meta 0x1109db8c), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.212827+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.213027+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156580 data_alloc: 218103808 data_used: 189251
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.213199+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.213391+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 96296960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686155400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 366.777954102s of 366.843353271s, submitted: 49
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 301 ms_handle_reset con 0x557686155400 session 0x55768c90da40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.213619+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee306000/0x0/0x4ffc00000, data 0x65fb78/0x804000, compress 0x0/0x0/0x0, omap 0x52b13, meta 0x1109d4ed), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.213806+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557686258000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.214062+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3160433 data_alloc: 218103808 data_used: 189267
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 302 ms_handle_reset con 0x557686258000 session 0x557686cacc40
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.214268+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.214457+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.214907+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee774000/0x0/0x4ffc00000, data 0x1f1758/0x396000, compress 0x0/0x0/0x0, omap 0x52c29, meta 0x1109d3d7), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.215180+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 96174080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.215348+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x5576869e1400
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182875 data_alloc: 218103808 data_used: 189523
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.215598+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 303 heartbeat osd_stat(store_statfs(0x4edf72000/0x0/0x4ffc00000, data 0x9f3216/0xb9a000, compress 0x0/0x0/0x0, omap 0x52da5, meta 0x1109d25b), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 304 ms_handle_reset con 0x5576869e1400 session 0x557692b07340
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.215770+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.215935+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.216158+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.334288597s of 11.534142494s, submitted: 81
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.216368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.216530+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 96026624 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.216665+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.216807+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.216932+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.217096+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.217242+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.217395+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.217567+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.217733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 96018432 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.217915+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.218050+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.218352+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.218506+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.218669+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.218871+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.219131+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.219345+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.219579+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.219837+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 95993856 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.220152+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.220326+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.220565+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.220751+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.220985+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.221207+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.221324+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.221534+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.221743+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 96305152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.221970+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.222260+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.222420+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.222639+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.222846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 96288768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.223061+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.223296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 96280576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.223450+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.223599+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.223748+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.223945+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 96272384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.224200+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.224388+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 96264192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.224548+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.224700+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.224900+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.225118+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf6a000/0x0/0x4ffc00000, data 0x9f684d/0xba0000, compress 0x0/0x0/0x0, omap 0x537ef, meta 0x1109c811), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188959 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.225285+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: handle_auth_request added challenge on 0x557687f7fc00
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.789546967s of 47.797679901s, submitted: 15
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.225480+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 96256000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.225602+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 ms_handle_reset con 0x557687f7fc00 session 0x55768b427880
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.225759+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.225982+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 96223232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151061 data_alloc: 218103808 data_used: 190108
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.226140+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.226312+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.226458+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f841a/0x3a2000, compress 0x0/0x0/0x0, omap 0x53906, meta 0x1109c6fa), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.226629+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.226781+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.226964+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.227189+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.227376+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.227560+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.227795+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 96215040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _renew_subs
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.227966+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 96206848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.228152+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.228344+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.228562+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.228755+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 96198656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.228918+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.230612+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.231357+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 96190464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.233189+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.233386+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.233557+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.233722+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.233944+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 96182272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.234211+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.234411+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.234699+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.234867+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.258179+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.258885+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.259211+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.259393+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 96165888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.260182+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.260478+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.260688+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300335104 unmapped: 96157696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.260929+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.261140+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.261314+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.261541+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.261740+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 96149504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.261913+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 96141312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.262119+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 96133120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.262291+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.262469+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.262625+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.262807+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.262989+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.263234+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.263415+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.263537+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 96124928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.263724+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.263884+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.264035+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.264263+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.264477+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.264772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 96116736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.264997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.265179+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.265431+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.265627+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.265848+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 96100352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.266040+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.266312+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.266538+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 96092160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.266729+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 96083968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.266991+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.267247+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.267428+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.267644+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.267893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.268144+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.268368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.269247+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.269418+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.269561+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 96067584 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.269708+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 95911936 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.269854+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 96075776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.269978+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.270127+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 96002048 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf dump' '{prefix=perf dump}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf schema' '{prefix=perf schema}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:34.270231+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 95846400 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:35.270368+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:36.270519+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:37.270676+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:38.270813+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:39.271014+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:40.271294+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:41.271409+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:42.271559+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 95838208 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:43.271697+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:44.271835+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:45.271987+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 95821824 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:46.272174+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:47.272347+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:48.272525+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:49.272721+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:50.272871+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 95805440 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:51.273094+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:52.273283+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:53.273435+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:54.273547+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:55.273698+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:56.273848+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:57.273988+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:58.274139+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 542 writes, 1588 keys, 542 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                           Interval WAL: 542 writes, 244 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 95797248 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:59.274271+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 95789056 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets getting new tickets!
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:00.274563+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _finish_auth 0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:00.275296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 95780864 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:01.274711+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:02.274838+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:03.274981+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:04.275136+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 95772672 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:05.275311+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc ms_handle_reset ms_handle_reset con 0x55768672c000
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: get_auth_request con 0x557687f7fc00 auth_method 0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:06.275447+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:07.275579+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:08.275709+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:09.275939+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:10.276080+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:11.276193+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:12.276348+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:13.276546+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:14.276695+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:15.276854+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:16.277000+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:17.277143+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:18.277299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:19.277428+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:20.277586+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:21.829272+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:22.831888+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:23.832056+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:24.832226+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:25.832451+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:26.832647+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:27.832818+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:28.833023+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:29.833205+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:30.833357+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:31.833635+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:32.833803+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:33.834001+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:34.834248+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:35.834429+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:36.834575+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:37.834690+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:38.834823+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:39.834946+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:40.835164+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee765000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:41.835290+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153755 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:42.835488+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:43.835654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 151.716339111s of 151.783905029s, submitted: 44
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:44.835845+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:45.836087+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:46.836253+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:47.836401+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:48.836545+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:49.836680+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:50.836815+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:51.836971+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:52.837145+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:53.837312+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:54.837507+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:55.837713+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:56.837866+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:57.837993+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:58.838162+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:59.838322+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:00.838498+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:01.838654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:02.838820+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:03.839013+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:04.839184+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:05.839417+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:06.839562+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:07.839731+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:08.839904+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:09.840252+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:10.840456+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:11.840648+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:12.840819+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:13.840971+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:14.841170+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:15.841413+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:16.841588+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:17.841738+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:18.841935+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:19.842167+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:20.842366+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:21.842524+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:22.842717+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:23.842909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:24.843055+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:25.843268+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:26.843451+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:27.843657+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:28.843832+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:29.843973+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:30.844150+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:31.844317+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:32.844505+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:33.844670+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 95526912 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:34.844941+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:35.845113+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:36.845281+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:37.845549+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:38.845707+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:39.845869+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:40.846025+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:41.846158+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:42.846296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:43.846455+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:44.846686+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:45.846893+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:46.847037+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:47.847158+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:48.847369+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:49.847557+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:50.847730+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:51.847863+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:52.848035+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:53.848166+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:54.848313+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:55.848482+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:56.848642+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:57.848777+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:58.848991+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:59.849156+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:00.849322+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:01.849458+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:02.849638+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:03.849794+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:04.849966+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:05.850244+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:06.850435+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:07.850630+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:08.850793+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:09.850947+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:10.851106+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:11.851296+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:12.851464+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:13.851620+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:14.851786+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:15.852017+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:16.852170+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:17.852299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:18.852462+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:19.852656+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:20.852841+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:21.852993+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:22.853138+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:23.853307+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:24.853449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:25.853729+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:26.853912+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:27.854590+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:28.854770+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:29.854927+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:30.855108+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:31.855293+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:32.855452+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301096960 unmapped: 95395840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:33.855599+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:34.855790+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301096960 unmapped: 95395840 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:35.856029+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:36.856213+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:37.856365+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:38.856559+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:39.856758+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:40.856926+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:41.857098+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:42.857257+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:43.857373+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:44.857480+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:45.857604+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:46.857754+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:47.857894+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:48.858121+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:49.858347+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:50.858534+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:51.858733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:52.858902+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:53.859112+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:54.859408+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:55.859780+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:56.859946+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:57.860226+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:58.860409+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:59.860624+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:00.860814+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:01.860989+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:02.861288+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:03.861578+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:04.861839+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:05.862148+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:06.862386+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:07.862606+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:08.862816+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:09.863019+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:10.863250+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:11.863441+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:12.863672+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:13.863824+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:14.863999+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:15.864283+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:16.864517+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:17.864736+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:18.864917+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:19.865109+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:20.865306+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:21.865512+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301211648 unmapped: 95281152 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:22.865738+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:23.865860+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:24.866012+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:25.866161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:26.866299+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:27.866430+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:28.866612+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:29.866767+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:30.866928+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:31.867106+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:32.867274+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:33.867489+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301244416 unmapped: 95248384 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:34.867698+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:35.867913+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:36.868136+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:37.868316+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:38.868487+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:39.868662+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:40.868814+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:41.869011+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 95764480 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:42.869225+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:43.869343+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:44.869489+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:45.869695+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300736512 unmapped: 95756288 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:46.869851+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:47.869985+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:48.870161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:49.870273+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 95748096 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:50.870417+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:51.870566+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:52.870844+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:53.871030+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:54.871245+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:55.871415+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:56.871605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:57.871764+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 95739904 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:58.871991+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:59.872161+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:00.872335+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 95731712 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:01.872505+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:02.872676+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:03.872846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:04.872997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:05.873251+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 95723520 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:06.873402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:07.873523+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 95715328 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:08.873663+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:09.873820+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:10.873980+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:11.874110+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:12.874290+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:13.874454+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 95707136 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:14.874598+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:15.874829+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:16.875020+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:17.875191+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:18.875345+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:19.875503+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:20.875645+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:21.875784+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 95698944 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:22.875987+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:23.876126+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:24.876343+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 95690752 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:25.876660+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:26.876829+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:27.877008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:28.877227+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:29.877465+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 95682560 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:30.877660+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:31.877765+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:32.877964+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:33.878170+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 95674368 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:34.878346+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:35.878560+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:36.878745+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:37.878861+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 95666176 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:38.878994+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:39.879176+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:40.879291+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 95657984 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:41.879402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:42.879521+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:43.879674+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:44.879909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 95649792 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:45.880246+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:46.880417+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:47.880547+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:48.880692+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 95641600 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:49.880831+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:50.881008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:51.881193+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:52.881371+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:53.881562+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 95633408 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:54.881704+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:55.881909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:56.882150+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:57.882402+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:58.882661+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:59.882834+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:00.883431+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:01.883585+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 95625216 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:02.884054+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300875776 unmapped: 95617024 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:03.884327+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:04.884638+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:05.885032+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:06.885369+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:07.885568+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:08.885833+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:09.886051+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 95608832 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:10.886333+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 95600640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:11.886484+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 95600640 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:12.886598+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 95592448 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:13.886718+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:14.886872+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:15.887116+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:16.887449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:17.887654+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 95584256 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:18.887846+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:19.889378+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:20.889556+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:21.889764+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:22.889935+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:23.890125+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:24.890297+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:25.890537+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 95576064 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:26.890762+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:27.890954+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:28.891160+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:29.891291+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:30.891426+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:31.891569+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:32.891645+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:33.891750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 95559680 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:34.891919+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:35.892171+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:36.892345+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 95551488 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:37.892511+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:38.892680+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:39.892828+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:40.892979+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:41.893133+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 95543296 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:42.893289+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:43.893452+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:44.893659+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:45.893883+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:46.894045+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:47.894221+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:48.894392+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:49.894559+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300957696 unmapped: 95535104 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:50.894701+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:51.894906+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:52.895098+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:53.895250+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:54.895411+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:55.895711+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:56.895875+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 95518720 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:57.896019+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 95510528 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:58.896199+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:59.896373+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:00.896545+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 95502336 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:01.896723+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:02.896874+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:03.897037+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:04.897253+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:05.897456+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 95494144 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:06.897650+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:07.897811+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:08.897989+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:09.898156+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:10.898326+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:11.898487+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:12.898661+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:13.898818+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 95485952 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:14.898968+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:15.899168+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 95477760 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:16.899329+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:17.899489+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:18.899628+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:19.899749+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:20.899880+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:21.900050+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 95469568 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:22.900244+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:23.900403+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:24.900566+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 95461376 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:25.900743+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:26.900914+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:27.901154+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:28.901350+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:29.901495+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 95453184 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:30.901646+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:31.901789+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:32.901946+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:33.902136+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:34.902378+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:35.902570+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:36.902722+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:37.902913+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:38.903145+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301047808 unmapped: 95444992 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:39.903308+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:40.903475+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:41.903632+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:42.903827+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:43.904015+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:44.904164+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:45.904347+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:46.904592+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 95428608 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:47.904742+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:48.904929+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:49.905093+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 95420416 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:50.905233+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 95412224 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:51.905366+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:52.905475+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:53.905666+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:54.905874+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 95404032 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:55.906148+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:56.906352+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:57.906536+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:58.906703+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:59.906997+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:00.907218+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:01.907356+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:02.907507+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 95387648 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:03.907758+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:04.907915+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:05.908128+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:06.909551+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:07.909720+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:08.910000+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:09.910380+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:10.911623+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 95379456 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:11.911896+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:12.912519+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:13.912670+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 95371264 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:14.913029+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:15.913248+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:16.913625+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 95363072 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:17.913817+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:18.914315+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:19.914530+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:20.914678+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 95354880 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:21.914872+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:22.915043+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:23.915236+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:24.915351+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:25.915593+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 95346688 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:26.915811+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:27.916127+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:28.916412+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:29.916686+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 95338496 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:30.916887+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:31.917113+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:32.917289+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:33.917492+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 95330304 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:34.917690+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:35.917892+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:36.918054+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:37.918300+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 95313920 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:38.918499+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:39.918669+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:40.918805+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:41.918960+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:42.919140+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:43.919325+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:44.919510+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:45.919705+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:46.919903+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:47.920287+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:48.920558+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:49.920750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 95305728 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:50.920978+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:51.921203+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:52.921414+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301195264 unmapped: 95297536 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:53.921552+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:54.921733+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:55.921978+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:56.922279+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:57.922444+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 95289344 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:58.922605+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:59.922830+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:00.922961+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:01.923155+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:02.923304+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:03.923449+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:04.923576+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:05.923716+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 95272960 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:06.923859+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:07.924061+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:08.924373+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301228032 unmapped: 95264768 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:09.924546+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:10.924765+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:11.924982+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:12.925289+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:13.925511+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301236224 unmapped: 95256576 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:14.925716+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:15.925918+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:16.926127+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:17.926308+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 95240192 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:18.926494+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:19.926704+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:20.926925+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:21.927125+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:22.927342+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:23.927547+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:24.927742+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:25.927942+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:26.928192+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301260800 unmapped: 95232000 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:27.928514+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301268992 unmapped: 95223808 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:28.928712+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301277184 unmapped: 95215616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:29.929008+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301277184 unmapped: 95215616 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:30.929275+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:31.929507+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:32.929681+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 95207424 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:33.929870+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:34.930129+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:35.930391+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301293568 unmapped: 95199232 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:36.930556+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301301760 unmapped: 95191040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:37.930750+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301301760 unmapped: 95191040 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:38.930947+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:39.931216+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:40.931514+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:41.931673+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:42.931986+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:43.932149+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:44.932331+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:45.932543+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301309952 unmapped: 95182848 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:46.932772+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:47.932977+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:48.933188+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301318144 unmapped: 95174656 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:49.933419+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:50.933582+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:51.933787+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:52.933975+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:53.934154+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301334528 unmapped: 95158272 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:54.934353+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:55.934572+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:56.934724+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301342720 unmapped: 95150080 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:57.934909+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:58.935053+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:59.935351+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:00.935669+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 95141888 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:01.935855+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301359104 unmapped: 95133696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:02.936136+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301359104 unmapped: 95133696 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:03.936288+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:04.936424+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:05.936625+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:06.936781+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:07.936927+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:08.937135+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:09.937305+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:10.937470+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301367296 unmapped: 95125504 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:11.937640+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:12.937842+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:13.937974+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301375488 unmapped: 95117312 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:14.939263+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:15.939490+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:16.939813+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:17.941149+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:18.942362+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:19.943620+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:20.944708+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301383680 unmapped: 95109120 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:21.944974+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:22.945486+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:23.945655+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:24.945837+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301391872 unmapped: 95100928 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:25.946094+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:26.946409+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:27.946961+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301400064 unmapped: 95092736 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:28.947462+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301408256 unmapped: 95084544 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:29.948740+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 95076352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:30.949016+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301416448 unmapped: 95076352 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:31.949169+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 95068160 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:32.949330+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 95059968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:33.949484+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:08 compute-0 ceph-osd[89322]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:08 compute-0 ceph-osd[89322]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153035 data_alloc: 218103808 data_used: 194169
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 95059968 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:34.949617+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 95051776 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:35.949760+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301326336 unmapped: 95166464 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:36.949897+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 95436800 heap: 396492800 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:08 compute-0 ceph-osd[89322]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ee767000/0x0/0x4ffc00000, data 0x1f9e99/0x3a5000, compress 0x0/0x0/0x0, omap 0x53fce, meta 0x1109c032), peers [0,1] op hist [])
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: tick
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_tickets
Feb 28 11:14:08 compute-0 ceph-osd[89322]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:37.950049+0000)
Feb 28 11:14:08 compute-0 ceph-osd[89322]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 28 11:14:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316234082' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:14:08 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23326 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:08 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:14:08 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 28 11:14:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938010990' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23330 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:14:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:09 compute-0 nova_compute[243452]: 2026-02-28 11:14:09.446 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:09 compute-0 nova_compute[243452]: 2026-02-28 11:14:09.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3316234082' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='client.23326 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/938010990' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='client.23330 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:09 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23334 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:09 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 28 11:14:09 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829037130' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:14:10 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23336 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 28 11:14:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827080679' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:14:10 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:10 compute-0 ceph-mon[76304]: from='client.23334 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1829037130' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: from='client.23336 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3827080679' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: pgmap v3317: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:10 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:10 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 28 11:14:10 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1777725137' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:14:10 compute-0 podman[412232]: 2026-02-28 11:14:10.788851825 +0000 UTC m=+0.068825339 container health_status 3b24dc1ec6ec20bb7cb24df6968571c43ebb3b90d313a3d82bd0d327f17044bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 28 11:14:10 compute-0 podman[412231]: 2026-02-28 11:14:10.830928927 +0000 UTC m=+0.116715896 container health_status 342a991a0fec9766ed5f853b99a1977d67fb6b9d8a24e759f2c4ff49e943eebf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '87b252f082a10c89a12ad104482db58ad95700e37f5646ea98d94c54ead9f123-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1-50536b882045a73707caac044ed86e2684ad7d46901f7c91118333531db5d5e1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 28 11:14:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23344 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:11 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 28 11:14:11 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481473638' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 11:14:11 compute-0 ceph-mon[76304]: from='client.23340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1777725137' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 28 11:14:11 compute-0 ceph-mon[76304]: from='client.23344 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:11 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3481473638' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 28 11:14:11 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23348 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:12 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23352 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:12 compute-0 nova_compute[243452]: 2026-02-28 11:14:12.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 28 11:14:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948105503' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 11:14:12 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:12 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:12 compute-0 ceph-mgr[76610]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:14:12 compute-0 ceph-8f528268-ea2d-5d7b-af45-49b405fed6de-mgr-compute-0-izimmo[76606]: 2026-02-28T11:14:12.533+0000 7fdac2ed4640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 28 11:14:12 compute-0 ceph-mon[76304]: from='client.23348 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:12 compute-0 ceph-mon[76304]: from='client.23352 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:12 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/948105503' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 28 11:14:12 compute-0 ceph-mon[76304]: pgmap v3318: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:12 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 28 11:14:12 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868391455' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.755073547s of 20.982717514s, submitted: 105
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000521500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a49400 session 0x56300273ec40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911900 data_alloc: 234881024 data_used: 18535366
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:15.881510+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffca0000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:16.881775+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:17.881926+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 53731328 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a400 session 0x563000340700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:18.882095+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004ae8c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004ae8c00 session 0x563002574700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc8f000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc8f000 session 0x563002c34c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x562fffba1340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:19.882302+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e705e000/0x0/0x4ffc00000, data 0x4439c98/0x45ce000, compress 0x0/0x0/0x0, omap 0x6c44f, meta 0x14563bb1), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857330 data_alloc: 218103808 data_used: 12335972
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:20.882420+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 53395456 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630008b28c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x563001d9ae00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:21.882532+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 345505792 unmapped: 60399616 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630005208c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x562fffb868c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:22.882716+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a9c00 session 0x5630022f2a80
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 59351040 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015b7400 session 0x563001d9a380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de2400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de2400 session 0x562fffce6000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:23.903843+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83ab000/0x0/0x4ffc00000, data 0x30e8c26/0x327b000, compress 0x0/0x0/0x0, omap 0x6c897, meta 0x14563769), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:24.904097+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:25.904238+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3698486 data_alloc: 218103808 data_used: 4918002
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 59342848 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:26.904356+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 59367424 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:27.904509+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:28.904627+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.904795+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.904970+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.905134+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.905433+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.905607+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.905845+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.905997+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754294 data_alloc: 218103808 data_used: 14295794
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 58802176 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e83af000/0x0/0x4ffc00000, data 0x30e8c59/0x327d000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.583734512s of 21.009777069s, submitted: 135
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.906151+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 57458688 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.906272+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dfd000/0x0/0x4ffc00000, data 0x3699c59/0x382e000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.906485+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.906696+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.906881+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799496 data_alloc: 234881024 data_used: 14492402
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.907120+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.907257+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.907418+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7dbf000/0x0/0x4ffc00000, data 0x36d7c59/0x386c000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.907553+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.907698+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3796744 data_alloc: 234881024 data_used: 14496498
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.907910+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.908060+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.908235+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d9c000/0x0/0x4ffc00000, data 0x36fbc59/0x3890000, compress 0x0/0x0/0x0, omap 0x6ca6b, meta 0x14563595), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 56139776 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.710806847s of 13.025801659s, submitted: 82
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.908898+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f9c00 session 0x5630022a9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x56300222da40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 56131584 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.909026+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646460 data_alloc: 218103808 data_used: 4917490
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x563002574fc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.909161+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.909307+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.909468+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.909630+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.909835+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644559 data_alloc: 218103808 data_used: 4913394
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c2b000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d9000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630008c5500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x562fffc01c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004298800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004298800 session 0x562fffc016c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.910015+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630027f8c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.910199+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630027f8c00 session 0x563002c42380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3dc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3dc00 session 0x562fffba0000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d9000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630035d9000 session 0x5630004ff6c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.910405+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006907400 session 0x56300273efc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de0000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de0000 session 0x562fffce7340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.910646+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.910839+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.911107+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.911266+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.911571+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.911797+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8834000/0x0/0x4ffc00000, data 0x2c65bd4/0x2df8000, compress 0x0/0x0/0x0, omap 0x6d25b, meta 0x14562da5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.911989+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688345 data_alloc: 218103808 data_used: 4921469
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 62177280 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.912148+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.912306+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.912940+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343736320 unmapped: 62169088 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.913187+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.425588608s of 20.792432785s, submitted: 85
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x563000520380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.913328+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691067 data_alloc: 218103808 data_used: 4921469
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.913501+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343744512 unmapped: 62160896 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.913644+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.913813+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.913982+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.914151+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.914306+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.914491+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.914662+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.914886+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.915094+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726267 data_alloc: 218103808 data_used: 10794109
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 343416832 unmapped: 62488576 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8833000/0x0/0x4ffc00000, data 0x2c65bf7/0x2df9000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.915287+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.685478210s of 11.705242157s, submitted: 10
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 58195968 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.915453+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.915607+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.915817+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.916007+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804967 data_alloc: 218103808 data_used: 11819133
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.916174+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.916362+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 57876480 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.916532+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.916772+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.916959+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3804983 data_alloc: 218103808 data_used: 11819133
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.917137+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 57868288 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.917268+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.917412+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f5f000/0x0/0x4ffc00000, data 0x3523bf7/0x36b7000, compress 0x0/0x0/0x0, omap 0x6d72c, meta 0x145628d4), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.917592+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.917794+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3805239 data_alloc: 218103808 data_used: 11827325
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.917968+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 57860096 heap: 405905408 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.918128+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.995227814s of 16.248806000s, submitted: 101
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630040a6c00 session 0x56300238a380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:14:12 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de8c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f000 session 0x56300222c700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a47c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a47c00 session 0x562fffce6380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a800 session 0x56300222da40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.918331+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347529216 unmapped: 65724416 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.918652+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.918785+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3859568 data_alloc: 218103808 data_used: 11827325
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6d934, meta 0x145626cc), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.918973+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.919136+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.919346+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.919540+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347537408 unmapped: 65716224 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf8400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf8400 session 0x562fffce7340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.919718+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004299800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861245 data_alloc: 218103808 data_used: 11827325
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.919884+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 65691648 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [1])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.920106+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.920282+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752e000/0x0/0x4ffc00000, data 0x3f6abf7/0x40fe000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.920445+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.920643+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916669 data_alloc: 234881024 data_used: 17889846
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.920848+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.920994+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.921129+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.772537231s of 15.883464813s, submitted: 28
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.921333+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.921471+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916885 data_alloc: 234881024 data_used: 17889846
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348233728 unmapped: 65019904 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.921703+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348241920 unmapped: 65011712 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e752c000/0x0/0x4ffc00000, data 0x3f6bbf7/0x40ff000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,0,0,0,1])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.921877+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 60211200 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.922048+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.922367+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.922644+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970057 data_alloc: 234881024 data_used: 19615286
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.922839+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.923248+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 60104704 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.923440+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.204720497s of 10.448961258s, submitted: 70
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.923669+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.924000+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970329 data_alloc: 234881024 data_used: 19623478
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.924203+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.924366+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353157120 unmapped: 60096512 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300645f000 session 0x563002539340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004299800 session 0x563002c421c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.924520+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7034000/0x0/0x4ffc00000, data 0x445cbf7/0x45f0000, compress 0x0/0x0/0x0, omap 0x6db08, meta 0x145624f8), peers [0,2] op hist [0,2])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x563002c43c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.924740+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.924954+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800321 data_alloc: 218103808 data_used: 8706614
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7f74000/0x0/0x4ffc00000, data 0x3524bf7/0x36b8000, compress 0x0/0x0/0x0, omap 0x6dfed, meta 0x14562013), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.925147+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 63324160 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c42e00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630008c4c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.925302+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x562fffce6540
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.925621+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.925833+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.926133+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.926319+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.926511+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.926707+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.926889+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.927017+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.927189+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.927450+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 64299008 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.927694+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.927856+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.928024+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.928284+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.928456+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 64290816 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.928751+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.929158+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.929323+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.929432+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.929556+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.929767+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.930045+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.930247+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 64282624 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.930421+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.930539+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.930678+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.931058+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.931249+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 64274432 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.931465+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.931671+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.931863+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.932012+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 64266240 heap: 413253632 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.932183+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630015e6380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98000 session 0x562fffce6fc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x56300222c540
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3670737 data_alloc: 218103808 data_used: 4885030
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563000520700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.949302673s of 42.108650208s, submitted: 99
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 63913984 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9800 session 0x5630026df6c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x5630004fe000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015e9400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630015e9400 session 0x562fffba0c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x5630007fec40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.932371+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x563002c348c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.932545+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.932696+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8637000/0x0/0x4ffc00000, data 0x2e62bd4/0x2ff5000, compress 0x0/0x0/0x0, omap 0x6de8f, meta 0x14562171), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349077504 unmapped: 67854336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.932905+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.933030+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721561 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349085696 unmapped: 67846144 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.933171+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001f02000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67919872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.6 total, 600.0 interval
                                           Cumulative writes: 46K writes, 180K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5448 writes, 20K keys, 5448 commit groups, 1.0 writes per commit group, ingest: 23.91 MB, 0.04 MB/s
                                           Interval WAL: 5448 writes, 2224 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.933308+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.933496+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.933634+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.933764+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.935152+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.935343+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.935503+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.935761+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.935906+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773411 data_alloc: 218103808 data_used: 12658707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.936084+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.936213+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8612000/0x0/0x4ffc00000, data 0x2e86bf7/0x301a000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67911680 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.936356+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.338871002s of 17.436998367s, submitted: 22
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 64675840 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.955989+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.956236+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3832685 data_alloc: 218103808 data_used: 12786707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.956443+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.956732+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.956965+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 64241664 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.957206+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d99000/0x0/0x4ffc00000, data 0x36f7bf7/0x388b000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.957486+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.957647+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.957870+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.958024+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d82000/0x0/0x4ffc00000, data 0x3716bf7/0x38aa000, compress 0x0/0x0/0x0, omap 0x6e357, meta 0x14561ca9), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.958270+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.958420+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3828501 data_alloc: 218103808 data_used: 12786707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563000520380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004689800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004689800 session 0x56300205c1c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563002c42a80
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 64905216 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.958541+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x5630008b3340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.907081604s of 13.095699310s, submitted: 66
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b400 session 0x563001d9ba40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329e400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329e400 session 0x562fffb87340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006912c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006912c00 session 0x56300238b180
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x563001d9afc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c9c00 session 0x563002574000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.958790+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.959175+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.959358+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.959516+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x438dbf7/0x4521000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3906016 data_alloc: 218103808 data_used: 12786707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.959833+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300b960c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300b960c00 session 0x563001f02fc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.959960+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 64733184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.960141+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.960360+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.960532+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.960654+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.960774+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.961005+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.961109+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358793216 unmapped: 58138624 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.961264+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70e7000/0x0/0x4ffc00000, data 0x43b1bf7/0x4545000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3978348 data_alloc: 234881024 data_used: 24597523
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.961389+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.961508+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358801408 unmapped: 58130432 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.627857208s of 16.728017807s, submitted: 31
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.961714+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 56852480 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.961855+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.961987+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.962154+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.962329+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.962526+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.962750+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.962963+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4046006 data_alloc: 234881024 data_used: 25961491
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.963118+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.963259+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 54501376 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67b8000/0x0/0x4ffc00000, data 0x4cdfbf7/0x4e73000, compress 0x0/0x0/0x0, omap 0x6e8a7, meta 0x14561759), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.963392+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.963576+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.963779+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 54484992 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.708907127s of 12.569359779s, submitted: 90
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300329f400 session 0x562fffb861c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da9c00 session 0x5630029ad340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039350 data_alloc: 234881024 data_used: 25961491
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.963888+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x56300273f340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358670336 unmapped: 58261504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.964111+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d76000/0x0/0x4ffc00000, data 0x3722bf7/0x38b6000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358678528 unmapped: 58253312 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.964289+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.964491+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.964692+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840646 data_alloc: 218103808 data_used: 12786707
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.964871+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358735872 unmapped: 58195968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7d6f000/0x0/0x4ffc00000, data 0x3729bf7/0x38bd000, compress 0x0/0x0/0x0, omap 0x6e714, meta 0x145618ec), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba16c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x5630022a9dc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.965058+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 358653952 unmapped: 58277888 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630008c4c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.965223+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.965412+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.965589+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.965810+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.965971+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.966236+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.966481+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.966672+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.966845+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.966995+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.967196+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.973935+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.974156+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.974399+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.974576+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.974797+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.975053+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 63332352 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.975336+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.975516+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.975686+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.975844+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.976032+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.976169+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.976356+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.976521+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 63324160 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.976700+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.976915+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.977176+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3694722 data_alloc: 218103808 data_used: 4889091
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.977358+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8de7000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.977520+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.977649+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 63315968 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.977839+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 63307776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006905400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563006905400 session 0x562fffb86700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300205c1c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x5630007fec40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x562fffba0000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.977987+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a4a000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.295139313s of 39.636463165s, submitted: 162
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354689024 unmapped: 62242816 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a4a000 session 0x5630022f2c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003da8400 session 0x563002c34c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffb87c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x5630029ad6c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x56300273f500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.978122+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.978363+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.978547+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.978723+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.978832+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x563002473340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765477 data_alloc: 218103808 data_used: 4893152
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.979195+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630007a8c00 session 0x563002c42380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 63291392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563001de1000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563001de1000 session 0x562fffba0c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630020c8000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.979305+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630020c8000 session 0x563002c42a80
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353583104 unmapped: 63348736 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8343000/0x0/0x4ffc00000, data 0x3155c35/0x32e9000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6c000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.979502+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 63676416 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.979643+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.979798+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.979953+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.980186+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.980369+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.980580+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8342000/0x0/0x4ffc00000, data 0x3155c58/0x32ea000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x14560ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.980745+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834807 data_alloc: 234881024 data_used: 16034287
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.980898+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.981040+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 62570496 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.666275024s of 17.794780731s, submitted: 43
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.981160+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 57753600 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.981261+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e628e000/0x0/0x4ffc00000, data 0x4063c58/0x41f8000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 57647104 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.981420+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.981619+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932965 data_alloc: 234881024 data_used: 16882159
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.981765+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.981903+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.982597+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 56467456 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.982777+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6263000/0x0/0x4ffc00000, data 0x4094c58/0x4229000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.982938+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933117 data_alloc: 234881024 data_used: 16882159
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.983137+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.983284+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.983449+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.983629+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.983869+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933373 data_alloc: 234881024 data_used: 16890351
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.984058+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 56459264 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.984226+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.984367+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 56451072 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6260000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f00b, meta 0x15700ff5), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.984514+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.938798904s of 17.121707916s, submitted: 144
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002538c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3b000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a3b000 session 0x563000520c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x562fffce6fc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x563001da1c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.984663+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.984849+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.985189+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.985392+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.985534+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a99800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a99800 session 0x563000520380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630045ad000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x5630045ad000 session 0x5630008b3340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.985690+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953230 data_alloc: 234881024 data_used: 16890351
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 56123392 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x563000340700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0c000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x562fffc0c000 session 0x563002147a40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6089000/0x0/0x4ffc00000, data 0x426ec58/0x4403000, compress 0x0/0x0/0x0, omap 0x6f209, meta 0x15700df7), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.985810+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563000898000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334e000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.985954+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.986139+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.986278+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.986418+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3967924 data_alloc: 234881024 data_used: 18733551
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.986682+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.986830+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 56115200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.987008+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.814036369s of 14.911996841s, submitted: 29
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.987239+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.987479+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3968068 data_alloc: 234881024 data_used: 18733551
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.987644+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 56107008 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6088000/0x0/0x4ffc00000, data 0x426ec7b/0x4404000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [0,0,0,0,0,3,6])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.062243+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 54755328 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.062471+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.062649+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.062832+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362184704 unmapped: 54747136 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.063255+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.063411+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.063635+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.063826+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a1c000/0x0/0x4ffc00000, data 0x48dac7b/0x4a70000, compress 0x0/0x0/0x0, omap 0x6f6c8, meta 0x15700938), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.063983+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4006276 data_alloc: 234881024 data_used: 19196399
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.064165+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.064329+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.064519+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 54738944 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.440196991s of 14.719416618s, submitted: 60
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563000898000 session 0x562fffce7500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x56300334e000 session 0x5630008c5500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003bf9c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.064684+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563003bf9c00 session 0x5630026de700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.064867+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3941854 data_alloc: 234881024 data_used: 16951791
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4097c58/0x422c000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.065033+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.065226+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a46c00 session 0x563002574700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563004b6c000 session 0x562fffba1340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e625f000/0x0/0x4ffc00000, data 0x4098c58/0x422d000, compress 0x0/0x0/0x0, omap 0x6f6c4, meta 0x1570093c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.065355+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563002a41000 session 0x563002575500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.065563+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.065719+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.065894+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.066137+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.066402+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.066601+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.066783+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.066955+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.067153+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.067362+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.067648+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 53624832 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.067888+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.068165+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.068421+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.068705+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.068944+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.069150+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.069323+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 53616640 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.069495+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.069766+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.069960+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363323392 unmapped: 53608448 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.070146+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.070330+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.070535+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.070744+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.070909+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 53600256 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.071134+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.071312+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.071485+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.071647+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.071802+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.072006+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.072160+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.072322+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 53592064 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.072516+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c44000/0x0/0x4ffc00000, data 0x26b2bc4/0x2844000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.072821+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.073057+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3720795 data_alloc: 218103808 data_used: 4897213
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 53583872 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.073279+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.221851349s of 42.453300476s, submitted: 113
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007aaac00 session 0x562fffadfa40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.073511+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.073661+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.073828+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.073970+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 63627264 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.074151+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.074552+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.074774+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.074946+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.075139+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792375 data_alloc: 218103808 data_used: 4905272
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.075304+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361709568 unmapped: 63619072 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.075417+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.075582+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.075728+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.075863+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.076035+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.076147+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7095000/0x0/0x4ffc00000, data 0x3265bc4/0x33f7000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.076341+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.076519+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.076675+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3862391 data_alloc: 234881024 data_used: 16737592
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362668032 unmapped: 62660608 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.076823+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.046377182s of 20.163766861s, submitted: 13
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 61046784 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.077053+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x3804bc4/0x3996000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.077274+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.077440+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.077574+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a74000/0x0/0x4ffc00000, data 0x387ebc4/0x3a10000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3904073 data_alloc: 234881024 data_used: 16845112
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.077722+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.077965+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.078211+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364494848 unmapped: 60833792 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.078443+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.078663+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900481 data_alloc: 234881024 data_used: 16845112
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread fragmentation_score=0.004393 took=0.000057s
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 61497344 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.078871+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a5b000/0x0/0x4ffc00000, data 0x389fbc4/0x3a31000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.036879539s of 10.267202377s, submitted: 60
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.079049+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.079322+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6a3c000/0x0/0x4ffc00000, data 0x38bebc4/0x3a50000, compress 0x0/0x0/0x0, omap 0x6faf4, meta 0x1570050c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.079541+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 ms_handle_reset con 0x563007a98400 session 0x562fffb86700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.079723+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3901945 data_alloc: 234881024 data_used: 16853304
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 61489152 heap: 425328640 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.079896+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6a400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563004b6a400 session 0x563001f02e00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563002a41000 session 0x5630008c41c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007a98400 session 0x5630020116c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 382451712 unmapped: 47079424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.080021+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 heartbeat osd_stat(store_statfs(0x4e5986000/0x0/0x4ffc00000, data 0x496f824/0x4b04000, compress 0x0/0x0/0x0, omap 0x6feaa, meta 0x15700156), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 ms_handle_reset con 0x563007aaac00 session 0x5630015e68c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 286 ms_handle_reset con 0x56300334ec00 session 0x562fffb87a40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffc0dc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371097600 unmapped: 58433536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.080294+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x562fffc0dc00 session 0x562fffba1a40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.080437+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563002a41000 session 0x562fffce76c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x56300334ec00 session 0x563002c42380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007a98400 session 0x563000341880
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.080604+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4041853 data_alloc: 234881024 data_used: 23828792
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.080893+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 371154944 unmapped: 58376192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.081123+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e5974000/0x0/0x4ffc00000, data 0x497df6a/0x4b14000, compress 0x0/0x0/0x0, omap 0x70a87, meta 0x156ff579), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.529747963s of 10.955096245s, submitted: 94
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 ms_handle_reset con 0x563007aaac00 session 0x56300273ec40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.081311+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:56.081566+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.081807+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844847 data_alloc: 218103808 data_used: 4905272
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.082403+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.082613+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e6b84000/0x0/0x4ffc00000, data 0x3771f6a/0x3908000, compress 0x0/0x0/0x0, omap 0x70ebf, meta 0x156ff141), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.082929+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.083181+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6b7f000/0x0/0x4ffc00000, data 0x37739e9/0x390b000, compress 0x0/0x0/0x0, omap 0x70f46, meta 0x156ff0ba), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 356409344 unmapped: 73121792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.083362+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a46000 session 0x563002c42a80
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x5630022a88c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x5630022a9340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a98400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849155 data_alloc: 218103808 data_used: 4905272
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007a98400 session 0x562fffba0000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007aaac00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563007aaac00 session 0x5630007ffc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffd93800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x562fffd93800 session 0x563002c43dc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a41000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x563002a41000 session 0x56300222c700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300334ec00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369156096 unmapped: 60375040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.083569+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 ms_handle_reset con 0x56300334ec00 session 0x562fffba0e00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e673b000/0x0/0x4ffc00000, data 0x3bb99e9/0x3d51000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.083728+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.084006+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 372703232 unmapped: 56827904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.084133+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a7000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734997749s of 12.027949333s, submitted: 52
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 369434624 unmapped: 60096512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.084289+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e6431000/0x0/0x4ffc00000, data 0x3ec39e9/0x405b000, compress 0x0/0x0/0x0, omap 0x71276, meta 0x156fed8a), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3968002 data_alloc: 234881024 data_used: 25452989
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 289 ms_handle_reset con 0x56300645f400 session 0x563002a81340
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.084485+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.084661+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.084886+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.085122+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.085302+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 289 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0b577/0x2fa3000, compress 0x0/0x0/0x0, omap 0x712fd, meta 0x156fed03), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845228 data_alloc: 218103808 data_used: 9718717
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.085518+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.085756+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.085989+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.086152+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 64741376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.086301+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.752949715s of 10.863764763s, submitted: 72
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3853182 data_alloc: 218103808 data_used: 9947483
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e4000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.086462+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.086604+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.086816+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.087018+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.087171+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854974 data_alloc: 218103808 data_used: 10385755
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.087299+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.088050+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x2e0cff6/0x2fa6000, compress 0x0/0x0/0x0, omap 0x71384, meta 0x156fec7c), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.088245+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 64733184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.088418+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630040a7000 session 0x562fffc00a80
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300157f000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300157f000 session 0x562fffc01500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.088584+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74e6000/0x0/0x4ffc00000, data 0x29c6ff6/0x2b60000, compress 0x0/0x0/0x0, omap 0x71519, meta 0x156feae7), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.088808+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.089238+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.089513+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.089712+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.089889+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.090048+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.090252+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.090440+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.090588+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.090781+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.090908+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.091127+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.091355+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.091541+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.091683+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.091852+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.092010+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.092193+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.092366+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.092586+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.092757+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.092918+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.093136+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.093308+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.093500+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.093717+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.093932+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.094141+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.094323+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.094498+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.094669+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.094869+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: mgrc ms_handle_reset ms_handle_reset con 0x563002a3f400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:14:12 compute-0 ceph-osd[88267]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: get_auth_request con 0x56300157f000 auth_method 0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.095118+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300dcfd000 session 0x5630008c5a40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645e800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a3ec00 session 0x563002c43500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563004b6cc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563006914800 session 0x562fffc00000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630015b7000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.095287+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.095440+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770396 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.095605+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.095796+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x563001f03500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x5630029ac8c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630007ff500
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x563002146380
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630045adc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.592243195s of 47.662048340s, submitted: 52
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [0,0,0,2])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.096044+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630045adc00 session 0x563002a6b6c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x5630029ac8c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.096220+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71ff000/0x0/0x4ffc00000, data 0x30f3ff6/0x328d000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a46400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x563002a46400 session 0x562fffba1dc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.096516+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360480768 unmapped: 69050368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630052e4c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x5630052e4c00 session 0x562fffc00000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300645f800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300645f800 session 0x5630008c5a40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3836513 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.096728+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300329f400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.096877+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362455040 unmapped: 67076096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.097105+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.097281+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e71fe000/0x0/0x4ffc00000, data 0x30f4019/0x328e000, compress 0x0/0x0/0x0, omap 0x746c0, meta 0x156fb940), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.097425+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300329f400 session 0x563002c43dc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363880448 unmapped: 65650688 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300212a800 session 0x562fffba0540
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.097635+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.097821+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.098010+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.098187+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.098351+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.098537+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.098713+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.098910+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.099115+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.099476+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.099634+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.099795+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.100022+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.100303+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.100490+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.100651+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.100829+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.101799+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.101941+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.102146+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.102288+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.102476+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.102733+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 ms_handle_reset con 0x56300b961400 session 0x56300273fdc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212cc00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.102959+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.103146+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.103361+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.103609+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.104150+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.104498+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.104764+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c35000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777809 data_alloc: 218103808 data_used: 4897115
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.105279+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.875541687s of 39.054084778s, submitted: 43
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.105644+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.105843+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.106296+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.107029+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.107443+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.107643+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.107883+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 69541888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.108599+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.109059+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.109451+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.109657+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.110010+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.110382+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.110731+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.110896+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.111100+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.111477+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.111617+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.111809+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.112058+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.112249+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.112521+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.112818+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.113117+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.113347+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.113520+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.113769+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.113960+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.114189+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.114342+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3777745 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7c36000/0x0/0x4ffc00000, data 0x26bcff6/0x2856000, compress 0x0/0x0/0x0, omap 0x74af8, meta 0x156fb508), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.114522+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a7c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.903411865s of 30.936098099s, submitted: 22
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.114684+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 ms_handle_reset con 0x5630040a7c00 session 0x562fffa6b6c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.114833+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.115010+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.115198+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.115916+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.116165+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.116331+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.116517+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e9c32000/0x0/0x4ffc00000, data 0x6bebd6/0x858000, compress 0x0/0x0/0x0, omap 0x74b7f, meta 0x156fb481), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.116764+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619554 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.117265+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.117465+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.117657+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.117824+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.118038+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.118275+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.118522+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.118700+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.118870+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.119055+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.119294+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.119483+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.119677+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.119932+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.120114+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.120304+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.120533+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.120842+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.121021+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.121194+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.121379+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.121600+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.121807+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.121979+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.122155+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.122373+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.122579+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.122734+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.122896+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.123118+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.123264+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.123445+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.123656+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.123794+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.124052+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.124312+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.124528+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.124712+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.124874+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.125046+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.125310+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.125506+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.125680+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.125957+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.126171+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.126394+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.126988+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.127291+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.127515+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.127730+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.127937+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.128242+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.128417+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.128664+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.128860+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.129184+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.129392+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.129749+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:22.130174+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.130335+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.130491+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.130707+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.130928+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.131216+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.131410+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.131635+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.131845+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.132197+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.132396+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.132534+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.132804+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.133051+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.133267+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.133441+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.133632+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.133786+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.133980+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.134137+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.134298+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.134495+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.134686+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.134959+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.135215+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.135433+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.135697+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.135938+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.136167+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.136348+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.136594+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.136836+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.137026+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.137189+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.137438+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.137690+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.137884+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.138884+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.139090+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.139326+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.139520+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.139722+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.139915+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.140105+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.140279+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 69263360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.140480+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.140636+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622328 data_alloc: 218103808 data_used: 4901176
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.140846+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.141113+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.141319+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c2f000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c6c, meta 0x156fb394), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.141516+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.141705+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006915000
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.484336853s of 118.528816223s, submitted: 35
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x563006915000 session 0x5630022a9880
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x562fffb12c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621740 data_alloc: 218103808 data_used: 4901211
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 ms_handle_reset con 0x562fffb12c00 session 0x5630008c4c40
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.141877+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.142040+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.142211+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 69230592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.142433+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.142651+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3621608 data_alloc: 218103808 data_used: 4901211
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.143277+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.143835+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e9c31000/0x0/0x4ffc00000, data 0x6c0655/0x85b000, compress 0x0/0x0/0x0, omap 0x74c9f, meta 0x156fb361), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.144397+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.144683+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563003da8400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.144869+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 292 handle_osd_map epochs [293,293], i have 293, src has [1,293]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 293 ms_handle_reset con 0x563003da8400 session 0x56300238afc0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592192 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.145055+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 293 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.145215+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3c400
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.145366+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 70385664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.014619827s of 13.094200134s, submitted: 43
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 294 ms_handle_reset con 0x563002a3c400 session 0x5630003e6700
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.145733+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09d000/0x0/0x4ffc00000, data 0x252222/0x3ed000, compress 0x0/0x0/0x0, omap 0x75b78, meta 0x156fa488), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.146048+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.146305+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.146573+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 70377472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.146746+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 294 heartbeat osd_stat(store_statfs(0x4ea09a000/0x0/0x4ffc00000, data 0x253e12/0x3f0000, compress 0x0/0x0/0x0, omap 0x75c00, meta 0x156fa400), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.146940+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.147137+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 70369280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594966 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.147325+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.147495+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.147851+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.148192+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.148452+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4ea097000/0x0/0x4ffc00000, data 0x255891/0x3f3000, compress 0x0/0x0/0x0, omap 0x75cee, meta 0x156fa312), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 70344704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630040a6c00
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104069710s of 12.140682220s, submitted: 28
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598801 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.148682+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.148938+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 70336512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.149157+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.149321+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 70311936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8099000/0x0/0x4ffc00000, data 0x2255891/0x23f3000, compress 0x0/0x0/0x0, omap 0x7595f, meta 0x156fa6a1), peers [0,2] op hist [0,0,0,1])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 ms_handle_reset con 0x5630040a6c00 session 0x562fffba01c0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.149506+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.149650+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.149823+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.150043+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 70287360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.150212+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.150365+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 70279168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.150501+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.150738+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.150960+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.151170+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.151386+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.151613+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 70270976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.151799+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.151993+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.152159+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.152361+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.152585+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 70262784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.152794+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.152963+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.153196+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 70254592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.153357+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.153553+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.153686+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.153842+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.154128+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.154313+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.154481+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.154732+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 70246400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.154993+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.155209+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 70238208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.155375+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.155586+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.155789+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.155962+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.156117+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.156311+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.156491+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.156712+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 70230016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.156970+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.157187+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.157400+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.157589+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.157761+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.157915+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.158044+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.158199+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 70213632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.158325+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.158520+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.158679+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.158862+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.159024+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.159196+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 70205440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.159339+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.159528+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 70197248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.159683+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.159881+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.160058+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 70189056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.160357+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.160549+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.160727+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359350272 unmapped: 70180864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.160863+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.161013+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.161214+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.161374+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.161525+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.161729+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.161872+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.162119+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 70172672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.162341+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.162504+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.6 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2185 writes, 8173 keys, 2185 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                           Interval WAL: 2185 writes, 915 syncs, 2.39 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.162650+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.162829+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.163518+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.163725+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.163926+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.164114+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 70164480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.164229+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.164365+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 70148096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.164513+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.164724+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.164948+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.165150+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.165354+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.165590+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 70139904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.165803+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.165898+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.166211+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.166528+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.166767+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.166912+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.167140+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.167286+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 70131712 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.167488+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.167692+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 70123520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.167853+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.168038+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.168290+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.168462+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 70115328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.168616+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.168750+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.168906+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 70107136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.169388+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.169623+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.169768+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.169899+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.170104+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.170238+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:12 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768695 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 70098944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8800
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.170463+0000)
Feb 28 11:14:12 compute-0 ceph-osd[88267]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8094000/0x0/0x4ffc00000, data 0x225742d/0x23f6000, compress 0x0/0x0/0x0, omap 0x7667e, meta 0x156f9982), peers [0,2] op hist [])
Feb 28 11:14:12 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 111.617111206s of 111.771911621s, submitted: 17
Feb 28 11:14:12 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359440384 unmapped: 70090752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:12 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.170595+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 297 ms_handle_reset con 0x5630007a8800 session 0x563002c356c0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359464960 unmapped: 70066176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.170738+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006907c00
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 70057984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.170895+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 ms_handle_reset con 0x563006907c00 session 0x563002472380
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e7420000/0x0/0x4ffc00000, data 0x2ec8fec/0x306a000, compress 0x0/0x0/0x0, omap 0x76256, meta 0x156f9daa), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.171033+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.171172+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.171354+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.171487+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.171653+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 70041600 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.171777+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843633 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.171861+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741b000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 70017024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.171995+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.172206+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.172347+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.149941444s of 13.237030983s, submitted: 29
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.172473+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.172634+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.172774+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 70008832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.172905+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.173105+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 70000640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.173265+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.173466+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.173715+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.173878+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.174042+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.174218+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.174402+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.174571+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.174780+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 69926912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.174913+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.175112+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.175288+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.176036+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.176411+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 69918720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.176935+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.177469+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3842145 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e741f000/0x0/0x4ffc00000, data 0x2ecab88/0x306d000, compress 0x0/0x0/0x0, omap 0x76f86, meta 0x156f907a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 69910528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.177635+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x56300212a000
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.955919266s of 22.110708237s, submitted: 90
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 69885952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.178105+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 299 ms_handle_reset con 0x56300212a000 session 0x56300238a000
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.178550+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.178832+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.179193+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3683971 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.179388+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 69836800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.179686+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e941a000/0x0/0x4ffc00000, data 0xecc778/0x1070000, compress 0x0/0x0/0x0, omap 0x76df0, meta 0x156f9210), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.180000+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.180343+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 69828608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.180624+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.180826+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.181190+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 69804032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.181480+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.181734+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 69795840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.181975+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.182241+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.182433+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.182638+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.182821+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.183022+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.183252+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.183485+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 69787648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.183679+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.183885+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.184059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.184295+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.184451+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.184663+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.184819+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.185006+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 69779456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.185176+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.185302+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.185440+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.185586+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.187002+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 69771264 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.187195+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 69763072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.187901+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.188161+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.188309+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.188432+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 69754880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.188618+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.188795+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.189030+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.189180+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.189398+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.189639+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 69738496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.189806+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.189952+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.190116+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.190297+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.190444+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.190587+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.190816+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.191021+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 69730304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.191158+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.191338+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 69713920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.191496+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.191643+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.191791+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.191950+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.192107+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.192256+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 69705728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.192496+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.192636+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.192815+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 69697536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.193031+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.193195+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.193342+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.193518+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.193630+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 69689344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.193767+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 69681152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.193861+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.194039+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.194222+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.194380+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.194630+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 69672960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.194749+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.194925+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 69664768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.195104+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.195257+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.195432+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.195605+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.195775+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.195964+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.196189+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.196458+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 69656576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.196601+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.196757+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.196937+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.197118+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.197298+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.197448+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 69640192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.197610+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.197781+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.197944+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.198222+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 69632000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.198324+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.198478+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.198630+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 69623808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.198772+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.198934+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.199111+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 69615616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.199317+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.199471+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.199602+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.199722+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.199843+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.200007+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.200143+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.200377+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 69607424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.200526+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 69599232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.201292+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 69591040 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.201387+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.201573+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.201724+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.202002+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.202141+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.202330+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 69582848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.202460+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.202605+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.202750+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.202907+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.203209+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.203356+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 69574656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.203496+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.203670+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.203856+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.204025+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 69566464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.204188+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.204328+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.204501+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 69558272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.204665+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.204806+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.204949+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 69550080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.205143+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.205298+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.205502+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.205695+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.205870+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.206117+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.206288+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.206420+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 69533696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.206647+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.206792+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360005632 unmapped: 69525504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.206918+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.207127+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.207349+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.207552+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.207744+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.207924+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 69517312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.208129+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.208283+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.208449+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 69509120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.208639+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.208870+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.209161+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.209316+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 69500928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.209669+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360038400 unmapped: 69492736 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.209878+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.210225+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.210370+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.210624+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.210820+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 69484544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.211004+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.211203+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.211443+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 69476352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.211653+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.211869+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.212040+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.212235+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.212420+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.212659+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.212852+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 69468160 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.213011+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 69459968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.213166+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.213394+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.213575+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.213771+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.213956+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 69443584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.214142+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.214370+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.214555+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.214770+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.214939+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.215108+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.215297+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.215482+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.215691+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.215831+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.216016+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 69435392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.216202+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.216338+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 69427200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.216607+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.217133+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.217512+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.029696+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 69419008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.031004+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.031488+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 69410816 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.031787+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 69402624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.031952+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.032151+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.032322+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.032675+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.032938+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.033132+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 69394432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.033345+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.033547+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.033738+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.034116+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.034247+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.034375+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 69386240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.034517+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.034861+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 69378048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.035038+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.035342+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.035504+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 69369856 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.035841+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.036013+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.036189+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360169472 unmapped: 69361664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.036327+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.036474+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 69353472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.036655+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.036886+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.037055+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.037220+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.037403+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.037569+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 69345280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.037753+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.037897+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 69337088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.038053+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.038232+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.038399+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360202240 unmapped: 69328896 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.038541+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.038699+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.038827+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.039013+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.039160+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 69320704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.039343+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.039512+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.039699+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.040203+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.040435+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 69312512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.040596+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.040744+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.040963+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 69304320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.041196+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.041359+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 69287936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.041679+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.041839+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.042048+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.042251+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.042428+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.042616+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 69279744 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.042850+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.043012+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.043157+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.043803+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.044955+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.045154+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.045330+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.045548+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360259584 unmapped: 69271552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.045716+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.045918+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 69255168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.046085+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.046295+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.046546+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.046740+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.046902+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.047166+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 69246976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.047374+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.047584+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.047764+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.047931+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.048149+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.048273+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.048426+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.048581+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 69238784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.048712+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.048860+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.049053+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.049228+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.049437+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 69222400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.049621+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.049803+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.049981+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 69214208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.050165+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.050309+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 69206016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.050486+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.050628+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.050768+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.050931+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.051089+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.051352+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 69197824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.051543+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.051742+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.052002+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.052196+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.052400+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.052560+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.052736+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.052936+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 69189632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.053236+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.053497+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 69181440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.053681+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.053941+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.054190+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.054413+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.054601+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.054792+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 69165056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.054991+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.055179+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 69156864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.055351+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.055500+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.055642+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.055837+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.055990+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.056201+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 69148672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.056373+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.056779+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 69140480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.056975+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.057156+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.057313+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.057503+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.057702+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.057879+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 69132288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.058043+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.058228+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.058546+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.058725+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.058950+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.059108+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.059313+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.059491+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 69124096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.059734+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.059987+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.060226+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.060382+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.060504+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.060658+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.060814+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.060933+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 69099520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.061109+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.061362+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 69091328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.061595+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.061860+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.062043+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.062252+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360448000 unmapped: 69083136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.062458+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.062639+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.062818+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.062982+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 69074944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.063163+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 69066752 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.063339+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.063675+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.063857+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.064061+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.064322+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.064500+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 69058560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.064666+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.064867+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.065049+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.065309+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.065511+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.065773+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360488960 unmapped: 69042176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9417000/0x0/0x4ffc00000, data 0xece1f7/0x1073000, compress 0x0/0x0/0x0, omap 0x76edd, meta 0x156f9123), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.066011+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.066308+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686745 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.066512+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360497152 unmapped: 69033984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d8800
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 366.986419678s of 367.067871094s, submitted: 52
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.066648+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 301 ms_handle_reset con 0x5630035d8800 session 0x5630022256c0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360521728 unmapped: 69009408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.066835+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 301 heartbeat osd_stat(store_statfs(0x4ea084000/0x0/0x4ffc00000, data 0x25fde7/0x406000, compress 0x0/0x0/0x0, omap 0x77a8c, meta 0x156f8574), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563006c59c00
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.066976+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360529920 unmapped: 69001216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.067127+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628525 data_alloc: 218103808 data_used: 248120
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 302 ms_handle_reset con 0x563006c59c00 session 0x563000520c40
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.067402+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.067615+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360579072 unmapped: 68952064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.067831+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 302 heartbeat osd_stat(store_statfs(0x4ea081000/0x0/0x4ffc00000, data 0x2619b4/0x408000, compress 0x0/0x0/0x0, omap 0x778e2, meta 0x156f871e), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 68943872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.068010+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563007a99400
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 303 heartbeat osd_stat(store_statfs(0x4ea07f000/0x0/0x4ffc00000, data 0x26344f/0x40b000, compress 0x0/0x0/0x0, omap 0x7850a, meta 0x156f7af6), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.068283+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 68927488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700002 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _renew_subs
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 304 ms_handle_reset con 0x563007a99400 session 0x5630004ff180
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.068568+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.068720+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.068926+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.069056+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.069302+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360620032 unmapped: 68911104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702408 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.496150970s of 13.664681435s, submitted: 92
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.069487+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e940b000/0x0/0x4ffc00000, data 0xed5017/0x107f000, compress 0x0/0x0/0x0, omap 0x787f6, meta 0x156f780a), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.069847+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.070011+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 68861952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.070383+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.070560+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.070694+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.070819+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.070974+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 68853760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.071167+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.071389+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.071593+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.071793+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.071960+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.072130+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.072373+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.072606+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 68837376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.072756+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.072900+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.073022+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.073160+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.073344+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.073508+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 68829184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.073689+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.073862+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 68820992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.074077+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.074347+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.074511+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.074798+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.074993+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.075158+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.075312+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.075448+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 68812800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.075637+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.075828+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.075998+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.076216+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.076328+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.076468+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.076618+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.076767+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 68796416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.076921+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.077179+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.077307+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 68788224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9408000/0x0/0x4ffc00000, data 0xed6a96/0x1082000, compress 0x0/0x0/0x0, omap 0x788e4, meta 0x156f771c), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.077452+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.077625+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 68780032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3705182 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3c400
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 45.422519684s of 45.444473267s, submitted: 13
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.077786+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 68771840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.077962+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 306 ms_handle_reset con 0x563002a3c400 session 0x563002574380
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.078236+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.078405+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.078550+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643995 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.078732+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.078871+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361824256 unmapped: 67706880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.079050+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 306 heartbeat osd_stat(store_statfs(0x4ea076000/0x0/0x4ffc00000, data 0x268676/0x414000, compress 0x0/0x0/0x0, omap 0x78ccf, meta 0x156f7331), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 306 handle_osd_map epochs [307,307], i have 307, src has [1,307]
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.079291+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.079533+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.079713+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361848832 unmapped: 67682304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.079871+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.080023+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361857024 unmapped: 67674112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.080196+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.080369+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.080529+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.080739+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.080902+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.081061+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361865216 unmapped: 67665920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.081269+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.081548+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.081734+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.081897+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.082207+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.082370+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.082549+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.082727+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 67657728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.082891+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.083053+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361889792 unmapped: 67641344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.083434+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.083724+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.083962+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.084159+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.084359+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.084558+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.084746+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.084919+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361897984 unmapped: 67633152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.085056+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.085245+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.085430+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.085665+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.085822+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.086059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361906176 unmapped: 67624960 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.086437+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.086585+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.087013+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.087474+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.087662+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.087929+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.088128+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.088342+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361914368 unmapped: 67616768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.088529+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.088829+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361930752 unmapped: 67600384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.089025+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.089171+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.089374+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.089605+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.089780+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.089980+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361938944 unmapped: 67592192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.090229+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.090409+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.090577+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.090751+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.090910+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.091122+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.091328+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.091529+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361947136 unmapped: 67584000 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.091713+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.091887+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361955328 unmapped: 67575808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.092113+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.092275+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.092499+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.092647+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.092824+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.092981+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361963520 unmapped: 67567616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.093117+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.093237+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361971712 unmapped: 67559424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.093395+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.093523+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.093688+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.093828+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.093990+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361979904 unmapped: 67551232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:34.094121+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 67493888 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:35.094253+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362217472 unmapped: 67313664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:36.094365+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362192896 unmapped: 67338240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:37.094488+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 373260288 unmapped: 56270848 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:38.094678+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf dump' '{prefix=perf dump}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf schema' '{prefix=perf schema}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:39.111461+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:40.111638+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:41.112017+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:42.112229+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 67158016 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:43.112394+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:44.112617+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:45.112792+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:46.112926+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:47.113037+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 67141632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:48.113123+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:49.113262+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:50.113399+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 67133440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:51.113596+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 67125248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:52.113734+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.6 total, 600.0 interval
                                           Cumulative writes: 49K writes, 189K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.66 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 609 writes, 1700 keys, 609 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s
                                           Interval WAL: 609 writes, 282 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 67125248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets getting new tickets!
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:53.113930+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _finish_auth 0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:53.114711+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:54.114075+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:55.114927+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:56.115059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:57.115194+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:58.115328+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 67108864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:59.115506+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362438656 unmapped: 67092480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:00.115653+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x56300645e800 session 0x5630003e61c0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3c400
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x563004b6cc00 session 0x5630029ac540
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630035d8800
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x5630015b7000 session 0x563000521c00
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x563002a3ac00
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:01.115805+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:02.115953+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:03.116170+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361373696 unmapped: 68157440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:04.116346+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:05.116471+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:06.116647+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361381888 unmapped: 68149248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:07.116771+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:08.116943+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:09.117875+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:10.118025+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:11.118303+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:12.118483+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:13.118656+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:14.118833+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361390080 unmapped: 68141056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:15.119018+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361406464 unmapped: 68124672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:16.119246+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361414656 unmapped: 68116480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:17.119422+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:18.119584+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:19.119750+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:20.119948+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:21.120161+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:22.120298+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 68108288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:23.120413+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:24.120565+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:25.120726+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:26.121124+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:27.121331+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:28.121533+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:29.121695+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:30.121861+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361431040 unmapped: 68100096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:31.122111+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361439232 unmapped: 68091904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:32.122308+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:33.122567+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:34.122754+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:35.122923+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 ms_handle_reset con 0x56300212cc00 session 0x563002c34380
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: handle_auth_request added challenge on 0x5630007a8c00
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:36.123111+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:37.123258+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:38.123395+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361455616 unmapped: 68075520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:39.123553+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:40.123687+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:41.123894+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361463808 unmapped: 68067328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646769 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:42.124104+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:43.124302+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 151.738861084s of 151.820251465s, submitted: 51
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:44.124456+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea073000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:45.124626+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:46.124808+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:47.124992+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:48.125163+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:49.125591+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 361472000 unmapped: 68059136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:50.125765+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 66945024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:51.125959+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:52.126174+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:53.126379+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:54.126712+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:55.126878+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:56.127039+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:57.127244+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:58.127442+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:59.127698+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:00.127893+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:01.128154+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:02.128437+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 66936832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:03.128642+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:04.128903+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:05.129156+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:06.129392+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:07.129570+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:08.129815+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:09.130156+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:10.130336+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 66928640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:11.130550+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 66920448 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:12.130726+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 66920448 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:13.130909+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:14.131099+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:15.131307+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:16.131511+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:17.131705+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:18.131968+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 66912256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:19.132171+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:20.132374+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:21.132606+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:22.132819+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:23.133130+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:24.133326+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:25.133491+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:26.133660+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 66904064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:27.133820+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:28.133991+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:29.134143+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:30.134295+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:31.134535+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:32.134720+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 66895872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:33.134920+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 66887680 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:34.135132+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362643456 unmapped: 66887680 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:35.135305+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:36.135452+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:37.135643+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:38.135815+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:39.136011+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:40.136197+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:41.136464+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:42.136620+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 66879488 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:43.136877+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:44.137039+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:45.137224+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:46.137457+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:47.137658+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:48.137882+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 66871296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:49.138124+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:50.138327+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:51.138521+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:52.138667+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:53.138930+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:54.139119+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362684416 unmapped: 66846720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:55.139249+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 66838528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:56.139485+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 66838528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:57.139657+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 66838528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:58.139879+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 66838528 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:59.140105+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362700800 unmapped: 66830336 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:00.140324+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362700800 unmapped: 66830336 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:01.140597+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362700800 unmapped: 66830336 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:02.140798+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362708992 unmapped: 66822144 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:03.141017+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362708992 unmapped: 66822144 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:04.141218+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362708992 unmapped: 66822144 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:05.141434+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362708992 unmapped: 66822144 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:06.141612+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362708992 unmapped: 66822144 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:07.141767+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362717184 unmapped: 66813952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:08.141960+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362725376 unmapped: 66805760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:09.142266+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:10.142474+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:11.142711+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:12.142931+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:13.143122+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:14.143351+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362733568 unmapped: 66797568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:15.143550+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 66789376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:16.143745+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362741760 unmapped: 66789376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:17.143930+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:18.144123+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:19.144296+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:20.144478+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:21.144705+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:22.144889+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362749952 unmapped: 66781184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:23.145059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 66772992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:24.145237+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 66772992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:25.145441+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 66772992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:26.145612+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 66772992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:27.145776+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362758144 unmapped: 66772992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:28.145929+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 66764800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:29.146136+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362766336 unmapped: 66764800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:30.146304+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362774528 unmapped: 66756608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:31.146534+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362782720 unmapped: 66748416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:32.146706+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362782720 unmapped: 66748416 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:33.146915+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:34.147138+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:35.147283+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:36.147473+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:37.147658+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:38.147886+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 66740224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:39.148155+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 66732032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:40.148398+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 66732032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:41.148642+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 66732032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:42.148816+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 66732032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:43.149005+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 66732032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:44.149192+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 66723840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:45.149341+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 66723840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:46.149557+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 66723840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:47.149714+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 66723840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:48.149844+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 66723840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:49.150165+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:50.151201+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:51.151492+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:52.151831+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:53.152117+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:54.152371+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362823680 unmapped: 66707456 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:55.152566+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:56.152947+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:57.153193+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:58.153426+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:59.153599+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:00.154138+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 66691072 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:01.154363+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362848256 unmapped: 66682880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:02.154640+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362848256 unmapped: 66682880 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:03.154766+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362864640 unmapped: 66666496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:04.154979+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362864640 unmapped: 66666496 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:05.155131+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:06.155548+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:07.155750+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:08.155996+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:09.156238+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:10.156727+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362872832 unmapped: 66658304 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:11.156952+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362881024 unmapped: 66650112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:12.157170+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362881024 unmapped: 66650112 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:13.157357+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:14.157634+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:15.157803+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:16.158033+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:17.158232+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:18.158438+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362889216 unmapped: 66641920 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:19.158659+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 66633728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:20.158919+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362897408 unmapped: 66633728 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:21.159139+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:22.159357+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:23.159541+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:24.159739+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:25.159872+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:26.160035+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362905600 unmapped: 66625536 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:27.160177+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362913792 unmapped: 66617344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:28.160334+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362913792 unmapped: 66617344 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:29.160484+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:30.160666+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:31.161135+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:32.161290+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:33.161439+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:34.161595+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362921984 unmapped: 66609152 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:35.161725+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362938368 unmapped: 66592768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:36.161886+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362938368 unmapped: 66592768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:37.162033+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362938368 unmapped: 66592768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:38.162347+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362938368 unmapped: 66592768 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:39.162565+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362946560 unmapped: 66584576 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:40.162823+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362954752 unmapped: 66576384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:41.163129+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362954752 unmapped: 66576384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:42.163342+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362954752 unmapped: 66576384 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:43.163565+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:44.163808+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:45.164021+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:46.164289+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:47.164541+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:48.164743+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:49.164870+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:50.165026+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362962944 unmapped: 66568192 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:51.165238+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362979328 unmapped: 66551808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:52.165440+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362979328 unmapped: 66551808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:53.165612+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362979328 unmapped: 66551808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:54.165784+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362979328 unmapped: 66551808 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:55.165961+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362987520 unmapped: 66543616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:56.166237+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362987520 unmapped: 66543616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:57.166421+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362987520 unmapped: 66543616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:58.166588+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362987520 unmapped: 66543616 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:59.166810+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 66535424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:00.166949+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 66535424 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:01.167161+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:02.167344+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:03.167514+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:04.167659+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:05.167768+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:06.167907+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363003904 unmapped: 66527232 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:07.187332+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:08.187534+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:09.187720+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:10.187899+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:11.188133+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:12.188305+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:13.188524+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:14.188721+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:15.188871+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363028480 unmapped: 66502656 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:16.189016+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 66494464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:17.189235+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363036672 unmapped: 66494464 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:18.189454+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 66486272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:19.189654+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 66486272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:20.189843+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 66486272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:21.190151+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 66486272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:22.190354+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363044864 unmapped: 66486272 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:23.190611+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:24.190861+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:25.191059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:26.191264+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:27.191511+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:28.191739+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:29.191921+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:30.192109+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363053056 unmapped: 66478080 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:31.192313+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 66461696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:32.192511+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 66461696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:33.192735+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363069440 unmapped: 66461696 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:34.193157+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363077632 unmapped: 66453504 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:35.193436+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 66445312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:36.193700+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 66445312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:37.193942+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 66445312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:38.194174+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363085824 unmapped: 66445312 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:39.194389+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363094016 unmapped: 66437120 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:40.194589+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 66428928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:41.194818+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 66428928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:42.195002+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 66428928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:43.195256+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 66428928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:44.195516+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363102208 unmapped: 66428928 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:45.195736+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 66420736 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:46.195905+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363110400 unmapped: 66420736 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:47.196100+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 66412544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:48.196306+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363118592 unmapped: 66412544 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:49.196490+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:50.196639+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:51.196835+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:52.196994+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:53.197190+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:54.197460+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363126784 unmapped: 66404352 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:55.197609+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 66387968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:56.197775+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363143168 unmapped: 66387968 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:57.197922+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:58.198691+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:59.198977+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:00.199319+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:01.199608+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:02.199808+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363151360 unmapped: 66379776 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:03.200026+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:04.200224+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:05.200444+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:06.200622+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:07.200908+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:08.201135+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:09.201289+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:10.201449+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363159552 unmapped: 66371584 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:11.201620+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 66363392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:12.201862+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363167744 unmapped: 66363392 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:13.202043+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 66355200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:14.202269+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 66355200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:15.202492+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363175936 unmapped: 66355200 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:16.202686+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 66347008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:17.202884+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 66347008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:18.203114+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363184128 unmapped: 66347008 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:19.203305+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 66330624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:20.203526+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 66330624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:21.203751+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363200512 unmapped: 66330624 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:22.204047+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363208704 unmapped: 66322432 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:23.204251+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:24.204709+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:25.204895+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:26.205107+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:27.205293+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:28.205525+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:29.205723+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363216896 unmapped: 66314240 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:30.206041+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 66306048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:31.206446+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 66306048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:32.206623+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 66306048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:33.206804+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 66306048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:34.207014+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363225088 unmapped: 66306048 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:35.207235+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 66289664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:36.207535+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 66289664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:37.207932+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 66289664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:38.208258+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363241472 unmapped: 66289664 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:39.208517+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 66281472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:40.208781+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 66281472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:41.208987+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 66281472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:42.209201+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363249664 unmapped: 66281472 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:43.209373+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 66273280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:44.209632+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363257856 unmapped: 66273280 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:45.209886+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:46.210007+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:47.210204+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:48.210375+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:49.210496+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:50.210578+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363266048 unmapped: 66265088 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:51.212211+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 66248704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:52.213496+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 66248704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:53.213698+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363282432 unmapped: 66248704 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:54.213839+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 66240512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:55.213975+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 66240512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:56.214236+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 66240512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:57.214522+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 66240512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:58.214731+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363290624 unmapped: 66240512 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:59.214927+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 66232320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:00.215123+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363298816 unmapped: 66232320 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:01.215455+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 66224128 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:02.215689+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 66224128 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:03.215904+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363307008 unmapped: 66224128 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:04.216097+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 66215936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:05.216349+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 66215936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:06.216537+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 66215936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:07.216709+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 66215936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:08.216898+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363315200 unmapped: 66215936 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:09.217123+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 66199552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:10.217296+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 66199552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:11.217574+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 66199552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:12.217745+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 66199552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:13.217909+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363331584 unmapped: 66199552 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:14.218047+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363339776 unmapped: 66191360 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:15.218235+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 66183168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:16.218412+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 66183168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:17.218695+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 66183168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:18.218860+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 66183168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:19.218988+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363347968 unmapped: 66183168 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:20.219151+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 66174976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:21.219361+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 66174976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:22.219502+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363356160 unmapped: 66174976 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:23.219894+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 66166784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:24.220096+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 66166784 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:25.220291+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 66158592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:26.220485+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 66158592 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:27.220654+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 66150400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:28.220826+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 66150400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:29.221005+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 66150400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:30.221223+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363380736 unmapped: 66150400 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:31.221382+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:32.222285+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:33.222476+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:34.222636+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:35.222783+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:36.222913+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:37.223059+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:38.223362+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363388928 unmapped: 66142208 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:39.223533+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 66125824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:40.223650+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 66125824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:41.223883+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 66125824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:42.224024+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 66125824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:43.224208+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363405312 unmapped: 66125824 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:44.224348+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 66117632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:45.224524+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 66117632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:46.224805+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363413504 unmapped: 66117632 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:47.224940+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 66109440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:48.225111+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363421696 unmapped: 66109440 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:49.225294+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:50.225390+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:51.225568+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:52.225708+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:53.225897+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:54.226051+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363429888 unmapped: 66101248 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:55.226200+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363438080 unmapped: 66093056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:56.226360+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363438080 unmapped: 66093056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:57.226529+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363438080 unmapped: 66093056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:58.226666+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363438080 unmapped: 66093056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:59.226857+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363438080 unmapped: 66093056 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:00.227034+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 66084864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:01.227294+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363446272 unmapped: 66084864 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:02.227498+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363454464 unmapped: 66076672 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:03.227637+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 66068480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:04.227800+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363462656 unmapped: 66068480 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:05.228020+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 66060288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:06.228206+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363470848 unmapped: 66060288 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:07.231703+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 66052096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:08.233936+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 66052096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:09.237258+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 66052096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:10.237406+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363479040 unmapped: 66052096 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:11.237912+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:12.240105+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:13.241357+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:14.242949+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:15.246286+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:16.246493+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:17.247825+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:18.248885+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363487232 unmapped: 66043904 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:19.249275+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 66027520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:20.249661+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363503616 unmapped: 66027520 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:21.249881+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:22.250134+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:23.250275+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:24.250452+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:25.250589+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:26.250738+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363511808 unmapped: 66019328 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:27.250944+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 66011136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:28.251128+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 66011136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:29.251355+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363520000 unmapped: 66011136 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:30.251582+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 66002944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:31.251940+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 66002944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:32.252120+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 66002944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:33.252293+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 66002944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:34.252495+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363528192 unmapped: 66002944 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:35.252740+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 65986560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:36.252907+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 65986560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:37.253159+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:38.253355+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:39.253536+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:40.253746+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:41.253944+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:42.254114+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:43.254280+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:44.254440+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363552768 unmapped: 65978368 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:45.254662+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:46.254857+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:47.255155+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:48.255361+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:49.255531+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:50.255710+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363560960 unmapped: 65970176 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:51.255927+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363569152 unmapped: 65961984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:52.256110+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 65953792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:53.256284+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 65953792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:54.256421+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 65953792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:55.256543+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363577344 unmapped: 65953792 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:56.256712+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 65937408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:57.257005+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 65937408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:58.257234+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363593728 unmapped: 65937408 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:59.257471+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 65929216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:00.257762+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 65929216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:01.258032+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 65929216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:02.258227+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363601920 unmapped: 65929216 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:03.258322+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363610112 unmapped: 65921024 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:04.258479+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 65912832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:05.258721+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 65912832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:06.258827+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 65912832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:07.258976+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 65912832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:08.259141+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363618304 unmapped: 65912832 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:09.259277+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:10.259403+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:11.259579+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:12.259753+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:13.259965+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:14.260177+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:15.260342+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 65896448 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:16.260571+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:17.260786+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:18.260939+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:19.261035+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:20.261193+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363626496 unmapped: 65904640 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:21.262042+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 65896448 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:22.262317+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363634688 unmapped: 65896448 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:23.262540+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 65888256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:24.262691+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363642880 unmapped: 65888256 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:25.262879+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363651072 unmapped: 65880064 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:26.263125+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 65871872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:27.263295+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 65871872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:28.263451+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 65871872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:29.263584+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 65871872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:30.263713+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363659264 unmapped: 65871872 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:31.263901+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 65863680 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:32.264177+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 65863680 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:33.264344+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 65863680 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:34.264481+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:35.264668+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:36.264820+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:37.264971+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:38.265143+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:39.265303+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363683840 unmapped: 65847296 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:40.265470+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:41.265651+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:42.265843+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:43.266005+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:44.266183+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:45.266352+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363692032 unmapped: 65839104 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:46.266471+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363700224 unmapped: 65830912 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:47.266610+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:48.266786+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:49.266928+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:50.267158+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:51.267375+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:52.268049+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363708416 unmapped: 65822720 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:53.268718+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 65806336 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:54.268878+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363724800 unmapped: 65806336 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:55.268981+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 65789952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:56.269174+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 65789952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:57.269376+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 65789952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:58.269581+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 65789952 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:59.269827+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 65781760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:00.270002+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 65781760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:01.270254+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 65781760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:02.270456+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 65781760 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:03.270629+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 65773568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:04.270805+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 65773568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:05.270976+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 65773568 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:06.271218+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 65765376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:07.271398+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 65765376 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:08.271543+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 65757184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:09.271688+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 65757184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:10.271877+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 65757184 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:11.272154+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:12.272339+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:13.272530+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:14.272664+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:15.272886+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:16.273216+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:17.273734+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 65740800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:18.274370+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363790336 unmapped: 65740800 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:19.274606+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:20.274805+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:21.275161+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:22.275579+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:23.276275+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:24.276470+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:25.276669+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:26.276807+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 65732608 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:27.277302+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 65716224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:28.277437+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 65716224 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:29.277577+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:30.277773+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:31.277987+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:32.278158+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:33.278295+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:34.278454+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363823104 unmapped: 65708032 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:35.278646+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 65699840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:36.278788+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 65699840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:37.278928+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 65699840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:38.279136+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363831296 unmapped: 65699840 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:39.279352+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:13 compute-0 ceph-osd[88267]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:13 compute-0 ceph-osd[88267]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646049 data_alloc: 218103808 data_used: 252181
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363839488 unmapped: 65691648 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:40.279556+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: osd.1 307 heartbeat osd_stat(store_statfs(0x4ea075000/0x0/0x4ffc00000, data 0x26a0f5/0x417000, compress 0x0/0x0/0x0, omap 0x78dbd, meta 0x156f7243), peers [0,2] op hist [])
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363782144 unmapped: 65748992 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:41.279790+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363544576 unmapped: 65986560 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: tick
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_tickets
Feb 28 11:14:13 compute-0 ceph-osd[88267]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:42.280011+0000)
Feb 28 11:14:13 compute-0 ceph-osd[88267]: prioritycache tune_memory target: 4294967296 mapped: 363569152 unmapped: 65961984 heap: 429531136 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:13 compute-0 ceph-osd[88267]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 28 11:14:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768895500' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 28 11:14:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763832434' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 28 11:14:13 compute-0 nova_compute[243452]: 2026-02-28 11:14:13.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:13 compute-0 rsyslogd[1017]: imjournal from <np0005634017:ceph-osd>: begin to drop messages due to rate-limiting
Feb 28 11:14:13 compute-0 ceph-mon[76304]: from='client.23356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3868391455' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/768895500' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1763832434' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 28 11:14:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111253239' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 28 11:14:13 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 28 11:14:13 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337476306' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 28 11:14:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517374129' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 28 11:14:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394316084' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:14 compute-0 nova_compute[243452]: 2026-02-28 11:14:14.449 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:14 compute-0 nova_compute[243452]: 2026-02-28 11:14:14.450 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 28 11:14:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1447096113' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4111253239' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2337476306' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2517374129' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3394316084' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 28 11:14:14 compute-0 ceph-mon[76304]: pgmap v3319: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:14 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb 28 11:14:14 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225110427' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 28 11:14:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172682911' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 28 11:14:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130266133' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 28 11:14:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260105125' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1447096113' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3225110427' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/172682911' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1130266133' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1260105125' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 28 11:14:15 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 28 11:14:15 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296149411' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 28 11:14:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 28 11:14:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527120282' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 28 11:14:16 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 28 11:14:16 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408387577' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.312 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.316 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.339 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.340 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:16 compute-0 nova_compute[243452]: 2026-02-28 11:14:16.341 243456 DEBUG nova.compute.manager [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 28 11:14:16 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/296149411' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 28 11:14:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1527120282' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 28 11:14:16 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/408387577' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 28 11:14:16 compute-0 ceph-mon[76304]: pgmap v3320: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:16 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:16 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 385 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 28 11:14:16 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23390 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:16 compute-0 crontab[413211]: (root) LIST (root)
Feb 28 11:14:17 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23394 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23392 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:29.806171+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:30.806334+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762155 data_alloc: 218103808 data_used: 9302829
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:31.806486+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:32.806614+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:33.806827+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:34.807000+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:35.807119+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762539 data_alloc: 218103808 data_used: 9315117
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 332201984 unmapped: 47136768 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8b6a000/0x0/0x4ffc00000, data 0x3ace740/0x3c62000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.527433395s of 14.586800575s, submitted: 31
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:36.807259+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336642048 unmapped: 42696704 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:37.807399+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:38.807734+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:39.807911+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:40.808130+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841745 data_alloc: 234881024 data_used: 10449709
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:41.808311+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:42.808485+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:43.808676+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:44.808847+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:45.809019+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3841745 data_alloc: 234881024 data_used: 10449709
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:46.809160+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8020000/0x0/0x4ffc00000, data 0x4618740/0x47ac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:47.809320+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:48.809482+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:49.809692+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4e66800 session 0x55d7c7995a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c8f6c1c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 42565632 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.967909813s of 14.256855965s, submitted: 73
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:50.809847+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8021000/0x0/0x4ffc00000, data 0x4618730/0x47ab000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666913 data_alloc: 218103808 data_used: 230173
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7786000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:51.809986+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:52.810129+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:53.810365+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:54.810599+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:55.810763+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659337 data_alloc: 218103808 data_used: 125725
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 330022912 unmapped: 49315840 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:56.810938+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331071488 unmapped: 48267264 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [0,0,0,0,0,2,10,6])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:57.811152+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7fd0e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331407360 unmapped: 47931392 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c75f9dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79ca400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79ca400 session 0x55d7c764a540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8d25000/0x0/0x4ffc00000, data 0x3913759/0x3aa7000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:58.811320+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49df180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c74388c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:41:59.811507+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:00.811690+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717291 data_alloc: 218103808 data_used: 125725
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:01.811853+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:02.812015+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:03.812243+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:04.812420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:05.812601+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717291 data_alloc: 218103808 data_used: 125725
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:06.812821+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:07.813010+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c8ecc380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca5000/0x0/0x4ffc00000, data 0x3993792/0x3b27000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:08.813180+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c8ecd880
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331431936 unmapped: 47906816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:09.813361+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79c1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c79c1c00 session 0x55d7c3161c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.453115463s of 19.530176163s, submitted: 62
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c87ba700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:10.813499+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718017 data_alloc: 218103808 data_used: 126237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:11.813645+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:12.813807+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:13.813975+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:14.814137+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:15.814312+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758209 data_alloc: 218103808 data_used: 6841629
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:16.814497+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:17.814645+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:18.814810+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:19.814960+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:20.815146+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758209 data_alloc: 218103808 data_used: 6841629
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8ca4000/0x0/0x4ffc00000, data 0x39937a2/0x3b28000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 331423744 unmapped: 47915008 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:21.815318+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.700842857s of 11.707605362s, submitted: 3
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338599936 unmapped: 40738816 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:22.815526+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:23.815786+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:24.816022+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:25.816214+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3854727 data_alloc: 218103808 data_used: 8062237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:26.816373+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c52000/0x0/0x4ffc00000, data 0x49e57a2/0x4b7a000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:27.816527+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:28.816780+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:29.816962+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:30.817148+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855879 data_alloc: 218103808 data_used: 8062237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:31.817328+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:32.817512+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:33.817717+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:34.817921+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c2e000/0x0/0x4ffc00000, data 0x4a097a2/0x4b9e000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:35.818159+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855879 data_alloc: 218103808 data_used: 8062237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:36.818371+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 338370560 unmapped: 40968192 heap: 379338752 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:37.818574+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c90b9500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c49de380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683880
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c7438e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.913228035s of 16.233089447s, submitted: 138
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebf340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c8f6ca80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c6e9cfc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6c380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339501056 unmapped: 45162496 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7886000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7886000 session 0x55d7c4c868c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:38.818777+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:39.818953+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:40.819127+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cc000/0x0/0x4ffc00000, data 0x5369814/0x5500000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3917894 data_alloc: 218103808 data_used: 8062237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:41.819293+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:42.819463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7fd0540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:43.819700+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cc000/0x0/0x4ffc00000, data 0x5369814/0x5500000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:44.819897+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c874a700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 45154304 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6dc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:45.820111+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49e2c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3919496 data_alloc: 218103808 data_used: 8062253
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:46.820291+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:47.820468+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:48.820736+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:49.820915+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:50.821116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3976456 data_alloc: 234881024 data_used: 17665837
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:51.821263+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:52.821434+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 44892160 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:53.821638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.763391495s of 15.905977249s, submitted: 38
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:54.821788+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:55.821965+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3977032 data_alloc: 234881024 data_used: 17665837
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:56.822158+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e72cb000/0x0/0x4ffc00000, data 0x5369824/0x5501000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 44539904 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:57.822289+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342786048 unmapped: 41877504 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:58.822490+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:42:59.822682+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:00.822879+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4060418 data_alloc: 234881024 data_used: 18100013
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342818816 unmapped: 41844736 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:01.823133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6509000/0x0/0x4ffc00000, data 0x612b824/0x62c3000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 41836544 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:02.823333+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342827008 unmapped: 41836544 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:03.823655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6509000/0x0/0x4ffc00000, data 0x612b824/0x62c3000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.108784676s of 10.460398674s, submitted: 113
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:04.823838+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:05.824010+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4055930 data_alloc: 234881024 data_used: 18104109
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342884352 unmapped: 41779200 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:06.824149+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 41771008 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:07.824301+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e64ea000/0x0/0x4ffc00000, data 0x614a824/0x62e2000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342892544 unmapped: 41771008 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c49e2c00 session 0x55d7c663dc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c9e91a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:08.824463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4bd4000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:09.824638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:10.824852+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869287 data_alloc: 218103808 data_used: 8062237
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:11.825020+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7c20000/0x0/0x4ffc00000, data 0x4a177a2/0x4bac000, compress 0x0/0x0/0x0, omap 0x650cb, meta 0x133caf35), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342941696 unmapped: 41721856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7cbf4a8c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c7786700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:12.825222+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c75f8380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:13.825478+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:14.825670+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:15.825895+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:16.826143+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:17.826341+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:18.826530+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:19.826687+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:20.826850+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:21.827032+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:22.827225+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:23.827461+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:24.827634+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 44826624 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:25.827810+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:26.828007+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:27.828204+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 44818432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:28.828433+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:29.828616+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:30.828831+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:31.829031+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:32.829270+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339853312 unmapped: 44810240 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:33.829515+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:34.829690+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:35.829866+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:36.830034+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:37.830213+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:38.830357+0000)
Feb 28 11:14:17 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23396 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:14:17 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:39.830536+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:40.830720+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:41.830912+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 44802048 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:42.831181+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:43.831380+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8cae000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:44.831566+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:45.831757+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 44793856 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688912 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.972812653s of 42.093124390s, submitted: 71
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 43K writes, 173K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
                                           Cumulative WAL: 43K writes, 16K syncs, 2.71 writes per sync, written: 0.17 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5287 writes, 21K keys, 5287 commit groups, 1.0 writes per commit group, ingest: 24.46 MB, 0.04 MB/s
                                           Interval WAL: 5286 writes, 2100 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c78c0540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c8f6c540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:46.831918+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c9f3cc40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339763200 unmapped: 44900352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c6e9c700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c49dea80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:47.832141+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:48.832346+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:49.832526+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 44883968 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8c000/0x0/0x4ffc00000, data 0x39ad730/0x3b40000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:50.832695+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c75e3500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738379 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:51.832852+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c8eccfc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4bd4fc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c7786a80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:52.833033+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:53.833221+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 44875776 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:54.833368+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:55.833501+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788369 data_alloc: 218103808 data_used: 8284761
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:56.833651+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:57.833810+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:58.834005+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:43:59.834133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:00.834287+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8c8b000/0x0/0x4ffc00000, data 0x39ad740/0x3b41000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788369 data_alloc: 218103808 data_used: 8284761
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:01.834458+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:02.834633+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:03.834915+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 44179456 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.298162460s of 17.396247864s, submitted: 25
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:04.835081+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 342614016 unmapped: 42049536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:05.835276+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:06.835437+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:07.835590+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:08.835794+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:09.835985+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:10.836150+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:11.836320+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:12.836475+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:13.836699+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:14.836928+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:15.837124+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857135 data_alloc: 218103808 data_used: 8731225
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c706bc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c706bc00 session 0x55d7c8f6d6c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:16.837273+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c900f180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c761aa80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c49df500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 343859200 unmapped: 40804352 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e836a000/0x0/0x4ffc00000, data 0x42c0740/0x4454000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.961489677s of 13.127732277s, submitted: 90
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c58941c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7cd67f400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7cd67f400 session 0x55d7c764a380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c49eefc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6e9c380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c764aa80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:17.837471+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:18.837625+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:19.837826+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:20.838000+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d54000/0x0/0x4ffc00000, data 0x48e27b2/0x4a78000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893462 data_alloc: 218103808 data_used: 8731225
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:21.838155+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344088576 unmapped: 40574976 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c761ae00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:22.838306+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 40550400 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c8be6400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:23.838530+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344113152 unmapped: 40550400 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:24.838917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:25.839146+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d53000/0x0/0x4ffc00000, data 0x48e27d5/0x4a79000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932792 data_alloc: 234881024 data_used: 13604969
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:26.839319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:27.839501+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:28.839706+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:29.839849+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:30.839973+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7d53000/0x0/0x4ffc00000, data 0x48e27d5/0x4a79000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932792 data_alloc: 234881024 data_used: 13604969
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:31.840118+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:32.840267+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 40001536 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:33.840522+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.633998871s of 16.749742508s, submitted: 44
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345743360 unmapped: 38920192 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:34.840700+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 35405824 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:35.840815+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e70c2000/0x0/0x4ffc00000, data 0x556d7d5/0x5704000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [0,0,0,15])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022310 data_alloc: 234881024 data_used: 14370921
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:36.841018+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:37.841213+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:38.841433+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:39.841592+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 35266560 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7017000/0x0/0x4ffc00000, data 0x561d7d5/0x57b4000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:40.841775+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023030 data_alloc: 234881024 data_used: 14370921
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:41.842006+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:42.842139+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 35258368 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6ff4000/0x0/0x4ffc00000, data 0x56417d5/0x57d8000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:43.842364+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6ff4000/0x0/0x4ffc00000, data 0x56417d5/0x57d8000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:44.842532+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:45.842682+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 35250176 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.561341286s of 12.556861877s, submitted: 142
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c8be6400 session 0x55d7c900fdc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c87bbdc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:46.842841+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3866536 data_alloc: 218103808 data_used: 7380569
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7667a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 40321024 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8371000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:47.843025+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8371000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344342528 unmapped: 40321024 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:48.843172+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:49.843467+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:50.843634+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:51.843799+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3864652 data_alloc: 218103808 data_used: 7384532
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8377000/0x0/0x4ffc00000, data 0x42c1740/0x4455000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 40263680 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c4b26000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c764b340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:52.843965+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:53.844198+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:54.844400+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:55.844630+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:56.844794+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:57.845003+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:58.845189+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:44:59.845405+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:00.845590+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:01.845748+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:02.845927+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:03.846131+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:04.846310+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:05.846450+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 44408832 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:06.846596+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:07.846813+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:08.846996+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:09.847178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340262912 unmapped: 44400640 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:10.847339+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:11.847528+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:12.847712+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:13.847940+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:14.848162+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:15.848344+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:16.848515+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:17.848748+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 44392448 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:18.848942+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:19.849132+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:20.849374+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:21.849540+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3718631 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:22.849692+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e9467000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64a03, meta 0x133cb5fd), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:23.849911+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:24.850147+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 44384256 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:25.850325+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.287815094s of 39.636169434s, submitted: 172
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6d6d340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7439500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341655552 unmapped: 43008000 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c6e9d880
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c764b6c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c78c1340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:26.850511+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788426 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:27.850712+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:28.850899+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:29.851127+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:30.851296+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:31.851474+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788426 data_alloc: 218103808 data_used: 141780
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:32.851662+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c9e91dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341671936 unmapped: 42991616 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:33.851850+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341524480 unmapped: 43139072 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:34.852120+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:35.852288+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:36.852456+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845006 data_alloc: 234881024 data_used: 9724884
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:37.852621+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:38.852788+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:39.852984+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e89ea000/0x0/0x4ffc00000, data 0x3c4f730/0x3de2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:40.853151+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:41.853375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3845006 data_alloc: 234881024 data_used: 9724884
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:42.853518+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 341540864 unmapped: 43122688 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:43.853748+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.693708420s of 17.795551300s, submitted: 33
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 39698432 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:44.853906+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e81a5000/0x0/0x4ffc00000, data 0x4493730/0x4626000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:45.854108+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8118000/0x0/0x4ffc00000, data 0x451f730/0x46b2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:46.854253+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3909802 data_alloc: 234881024 data_used: 11088852
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:47.854466+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:48.854650+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:49.854821+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:50.855023+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8118000/0x0/0x4ffc00000, data 0x451f730/0x46b2000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 39206912 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:51.855187+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902194 data_alloc: 234881024 data_used: 11088852
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:52.855318+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:53.855508+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:54.855706+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:55.855904+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:56.856133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902194 data_alloc: 234881024 data_used: 11088852
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:57.856300+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:58.856435+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:45:59.856580+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345194496 unmapped: 39469056 heap: 384663552 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c90b96c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c58956c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7fd1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:00.856748+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db0400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4db0400 session 0x55d7c87bae00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.947948456s of 17.117033005s, submitted: 106
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e80f9000/0x0/0x4ffc00000, data 0x4540730/0x46d3000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [0,0,0,0,0,10])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7682380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c4bd41c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c7786fc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c75e2540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:01.856919+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980343 data_alloc: 234881024 data_used: 11088852
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:02.857085+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:03.857297+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:04.857474+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7476000/0x0/0x4ffc00000, data 0x51c17a1/0x5356000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:05.857702+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7683500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345227264 unmapped: 46784512 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:06.857879+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7682700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980343 data_alloc: 234881024 data_used: 11088852
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 46776320 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c90b8fc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c4bd5dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:07.858041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7023000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6f25000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 46776320 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:08.858167+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 43737088 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:09.858355+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:10.858567+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:11.858714+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4060353 data_alloc: 234881024 data_used: 23154988
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:12.858853+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:13.859029+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:14.859164+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349937664 unmapped: 42074112 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7451000/0x0/0x4ffc00000, data 0x51e57b1/0x537b000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x133cb3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:15.859342+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.794326782s of 14.919716835s, submitted: 39
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:16.859499+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061001 data_alloc: 234881024 data_used: 23154988
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:17.859783+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 42065920 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:18.859931+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e62af000/0x0/0x4ffc00000, data 0x51e67b1/0x537c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356286464 unmapped: 35725312 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:19.860179+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:20.860392+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:21.860594+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4163885 data_alloc: 234881024 data_used: 23983404
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:22.860771+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:23.860970+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53b8000/0x0/0x4ffc00000, data 0x60d67b1/0x626c000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:24.861135+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356548608 unmapped: 35463168 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:25.861331+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:26.861514+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4158901 data_alloc: 234881024 data_used: 23987500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:27.861680+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53bd000/0x0/0x4ffc00000, data 0x60d97b1/0x626f000, compress 0x0/0x0/0x0, omap 0x64c15, meta 0x1456b3eb), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.467883110s of 12.747834206s, submitted: 114
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:28.861897+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:29.862029+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 36126720 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7023000 session 0x55d7c49de1c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6f25000 session 0x55d7c49de8c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:30.862143+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c7787a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:31.862373+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e53bd000/0x0/0x4ffc00000, data 0x60d97b1/0x626f000, compress 0x0/0x0/0x0, omap 0x64c4a, meta 0x1456b3b6), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3918190 data_alloc: 234881024 data_used: 10173725
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6aee000/0x0/0x4ffc00000, data 0x4547730/0x46da000, compress 0x0/0x0/0x0, omap 0x64c4a, meta 0x1456b3b6), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:32.862603+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:33.862895+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351019008 unmapped: 40992768 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c4852e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:34.863056+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7439c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:35.863312+0000)
Feb 28 11:14:17 compute-0 ceph-mon[76304]: from='client.23388 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mon[76304]: from='client.23390 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mon[76304]: from='client.23394 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mon[76304]: from='client.23392 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:17 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:36.863503+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:37.863668+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:38.863854+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:39.864136+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:40.864348+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:41.864563+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:42.864756+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:43.865170+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351043584 unmapped: 40968192 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:44.865369+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 40960000 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:45.865585+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351051776 unmapped: 40960000 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:46.865767+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:47.865982+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:48.866185+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:49.866405+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:50.866661+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:51.866870+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:52.867062+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:53.867394+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351059968 unmapped: 40951808 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:54.867685+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:55.867852+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:56.867985+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:57.868134+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:58.868296+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:46:59.868460+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:00.868632+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:01.868801+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351068160 unmapped: 40943616 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:02.868932+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:03.869146+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:04.869345+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351076352 unmapped: 40935424 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:05.869515+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:06.869812+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:07.869986+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:08.870219+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:09.870409+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e8107000/0x0/0x4ffc00000, data 0x31d2730/0x3365000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 40927232 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:10.870620+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 40919040 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:11.870763+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745969 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c9e90e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7438fc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 40919040 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e28c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.083389282s of 44.192253113s, submitted: 58
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:12.870866+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c8ecd500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6f25000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6f25000 session 0x55d7c874b6c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c90b8380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9f3c700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c5850fc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:13.871027+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e82c5000/0x0/0x4ffc00000, data 0x31d2769/0x3367000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:14.871248+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:15.871393+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:16.871548+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3780785 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:17.871718+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:18.871935+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7ebd000/0x0/0x4ffc00000, data 0x35da7a2/0x376f000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:19.872107+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:20.872315+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7ebd000/0x0/0x4ffc00000, data 0x35da7a2/0x376f000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7683dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351100928 unmapped: 40910848 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:21.872473+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c701e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c701e800 session 0x55d7c4ebfdc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c5850000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782265 data_alloc: 218103808 data_used: 129723
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebe540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 45522944 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:22.872626+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.035045624s of 10.393772125s, submitted: 42
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 45522944 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:23.872782+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:24.872907+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:25.873037+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:26.873121+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3807869 data_alloc: 218103808 data_used: 4358859
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:27.873286+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:28.873451+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:29.873571+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:30.873733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:31.873941+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3807869 data_alloc: 218103808 data_used: 4358859
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:32.874167+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 345751552 unmapped: 46260224 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.973699570s of 10.002538681s, submitted: 7
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e7e99000/0x0/0x4ffc00000, data 0x35fe7a2/0x3793000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1456b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:33.874348+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348758016 unmapped: 43253760 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:34.874477+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:35.874612+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:36.874764+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855323 data_alloc: 218103808 data_used: 4682443
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:37.875026+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:38.875344+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:39.875502+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:40.875818+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:41.875985+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6779000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3855323 data_alloc: 218103808 data_used: 4682443
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread fragmentation_score=0.004223 took=0.000069s
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:42.876456+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348594176 unmapped: 43417600 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.051095963s of 10.274992943s, submitted: 72
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:43.876696+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348454912 unmapped: 43556864 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:44.876975+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 43548672 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:45.877210+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348463104 unmapped: 43548672 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c900e380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c7439180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:46.877366+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348471296 unmapped: 43540480 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3850787 data_alloc: 218103808 data_used: 4670155
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 ms_handle_reset con 0x55d7c7021400 session 0x55d7c7667880
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 heartbeat osd_stat(store_statfs(0x4e6781000/0x0/0x4ffc00000, data 0x3b767a2/0x3d0b000, compress 0x0/0x0/0x0, omap 0x64e5c, meta 0x1570b1a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:47.877502+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348479488 unmapped: 43532288 heap: 392011776 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c7021400 session 0x55d7c49dec40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c5850700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c5850c40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:48.877667+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 41639936 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 285 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c75e2c40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 286 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c78c1a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:49.877838+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 41910272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 286 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c49ef500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 286 heartbeat osd_stat(store_statfs(0x4e5ce6000/0x0/0x4ffc00000, data 0x460bf3e/0x47a4000, compress 0x0/0x0/0x0, omap 0x64fd0, meta 0x1570b030), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:50.878022+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 41885696 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c90b81c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c75e3a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7667180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:51.878171+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3955705 data_alloc: 234881024 data_used: 10195675
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:52.878453+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:53.878761+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 41877504 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7021400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.491305351s of 10.884724617s, submitted: 76
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 ms_handle_reset con 0x55d7c7021400 session 0x55d7c9f3d340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e5200000/0x0/0x4ffc00000, data 0x3c69a84/0x3e01000, compress 0x0/0x0/0x0, omap 0x655a0, meta 0x168aaa60), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:54.878963+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 heartbeat osd_stat(store_statfs(0x4e5200000/0x0/0x4ffc00000, data 0x3c69a84/0x3e01000, compress 0x0/0x0/0x0, omap 0x655a0, meta 0x168aaa60), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:55.879178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:56.879369+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3835639 data_alloc: 218103808 data_used: 129739
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:57.879591+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:58.879760+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 46014464 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:47:59.880047+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:00.880317+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e54e6000/0x0/0x4ffc00000, data 0x3c6b503/0x3e04000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:01.880447+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c87bbc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 46006272 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c9e901c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c7fd0540
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c8f6d500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c764a8c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c6df5dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c4ebe1c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4ebe000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3840817 data_alloc: 218103808 data_used: 133737
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4ebe700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb1c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:02.880577+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 375349248 unmapped: 24297472 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb1c00 session 0x55d7c9e91340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:03.895477+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7667c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e496d000/0x0/0x4ffc00000, data 0x47e4575/0x497f000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c5895500
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:04.895656+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c764b180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.735244751s of 11.266489029s, submitted: 89
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c764bdc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:05.895838+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368320512 unmapped: 31326208 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca867800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:06.896008+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 368689152 unmapped: 30957568 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 heartbeat osd_stat(store_statfs(0x4e496d000/0x0/0x4ffc00000, data 0x47e4575/0x497f000, compress 0x0/0x0/0x0, omap 0x656c5, meta 0x168aa93b), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 289 ms_handle_reset con 0x55d7ca867800 session 0x55d7c3161180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3870398 data_alloc: 218103808 data_used: 2344553
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:07.896131+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:08.896322+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:09.896513+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:10.896713+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:11.896910+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 289 heartbeat osd_stat(store_statfs(0x4e53fb000/0x0/0x4ffc00000, data 0x3d54155/0x3eef000, compress 0x0/0x0/0x0, omap 0x65a6c, meta 0x168aa594), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879998 data_alloc: 218103808 data_used: 3958377
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:12.897098+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:13.897477+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:14.897642+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:15.897813+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:16.897995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356532224 unmapped: 43114496 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.610133171s of 11.691160202s, submitted: 38
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3889116 data_alloc: 218103808 data_used: 4593257
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:17.898163+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:18.898346+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:19.898511+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:20.898651+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:21.898858+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356589568 unmapped: 43057152 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f8000/0x0/0x4ffc00000, data 0x3d55bd4/0x3ef2000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3897742 data_alloc: 218103808 data_used: 5477993
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:22.899043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:23.899219+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e53f5000/0x0/0x4ffc00000, data 0x3d5abd4/0x3ef7000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:24.899359+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:25.899539+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356597760 unmapped: 43048960 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c78c16c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c49eee00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:26.899710+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c49de000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:27.899930+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:28.900178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:29.900360+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:30.900502+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:31.900681+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:32.900814+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:33.901034+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:34.901203+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 45064192 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:35.901387+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:36.901613+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:37.901807+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 45056000 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:38.902019+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:39.902233+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:40.902430+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:41.902596+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:42.902733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:43.902928+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:44.903160+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:45.903312+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 45047808 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:46.903531+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:47.903767+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:48.903972+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:49.904178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:50.904339+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:51.904514+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:52.904723+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:53.905003+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 45039616 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:54.905214+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:55.905443+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:56.905638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 45031424 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc ms_handle_reset ms_handle_reset con 0x55d7c6fb3c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: get_auth_request con 0x55d7ca867800 auth_method 0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:57.905814+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:58.906167+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:48:59.906417+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c42ec400 session 0x55d7c49fbc00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:00.906712+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:01.906887+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 45023232 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:02.907147+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793472 data_alloc: 218103808 data_used: 133721
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 45006848 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:03.907370+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 45006848 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c76836c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6fb2000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c6fb2000 session 0x55d7c74396c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7fd0c40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6df48c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:04.907615+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.527797699s of 47.597843170s, submitted: 41
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c58956c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c4b26e00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7ca866800 session 0x55d7c78c0a80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7ca866800 session 0x55d7c87bac40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c7438a80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5cf6000/0x0/0x4ffc00000, data 0x3459b9b/0x35f6000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:05.907806+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:06.908000+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c4ebe380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:07.908155+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3823215 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5cf6000/0x0/0x4ffc00000, data 0x3459bd4/0x35f6000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:08.908280+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:09.908435+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:10.908655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:11.908881+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c900e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c5a18800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c5a18800 session 0x55d7c4bd4700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:12.909017+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 44580864 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:13.909325+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 44580864 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:14.909455+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:15.909742+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:16.909971+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:17.910169+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:18.910355+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:19.910531+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 44572672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:20.910703+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:21.910835+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:22.911204+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:23.911457+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:24.911656+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:25.911834+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 44564480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:26.912004+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:27.912194+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:28.912351+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:29.912509+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:30.912695+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:31.912876+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:32.913141+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:33.913347+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 44556288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:34.913488+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:35.913752+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 ms_handle_reset con 0x55d7c70bc400 session 0x55d7c7fd0380
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e000
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:36.913942+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:37.914185+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:38.919256+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:39.923276+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:40.925125+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:41.925491+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 44548096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:42.926170+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800495 data_alloc: 218103808 data_used: 137719
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f74000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 44539904 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:43.926577+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.908409119s of 39.092102051s, submitted: 54
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:44.926916+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:45.927451+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:46.927796+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:47.928449+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:48.929218+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:49.929609+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355131392 unmapped: 44515328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:50.930000+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:51.930294+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:52.930889+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:53.931801+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:54.933251+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:55.933481+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 44507136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:56.933730+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 44498944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:57.933898+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 44498944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:58.934117+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:49:59.934262+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:00.934446+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:01.934604+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:02.934758+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:03.934985+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:04.935177+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:05.935392+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 44490752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:06.935597+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:07.935815+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:08.935991+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:09.936138+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:10.936320+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:11.936518+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:12.936694+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 heartbeat osd_stat(store_statfs(0x4e5f75000/0x0/0x4ffc00000, data 0x31dcb62/0x3377000, compress 0x0/0x0/0x0, omap 0x65b91, meta 0x168aa46f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800431 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:13.937041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355164160 unmapped: 44482560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.807094574s of 30.837282181s, submitted: 18
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:14.937282+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 290 handle_osd_map epochs [290,291], i have 291, src has [1,291]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c6df5dc0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:15.937446+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:16.937625+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:17.937872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561317 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:18.938195+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:19.938440+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:20.938620+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:21.938798+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:22.938959+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1de72c/0x379000, compress 0x0/0x0/0x0, omap 0x66670, meta 0x168a9990), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561317 data_alloc: 218103808 data_used: 141717
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:23.939149+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 43548672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:24.939313+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356098048 unmapped: 43548672 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:25.939608+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:26.939834+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:27.940040+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:28.940266+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:29.940464+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 43540480 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:30.940642+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:31.940827+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:32.940978+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:33.941257+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:34.941499+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:35.941727+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:36.941932+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:37.942179+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 43532288 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:38.942402+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:39.942630+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:40.942804+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:41.942995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:42.943200+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:43.943395+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:44.943581+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:45.943749+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356122624 unmapped: 43524096 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:46.943932+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:47.944139+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:48.944319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:49.944486+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:50.944647+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:51.944796+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356139008 unmapped: 43507712 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:52.944956+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:53.945161+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:54.945326+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356147200 unmapped: 43499520 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:55.945495+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:56.945641+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:57.945806+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:58.945942+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:50:59.946092+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:00.946239+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:01.946396+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356155392 unmapped: 43491328 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:02.946542+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:03.946759+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:04.946898+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:05.947115+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356163584 unmapped: 43483136 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:06.947317+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:07.947465+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:08.947691+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:09.947878+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356171776 unmapped: 43474944 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:10.948037+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:11.948196+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:12.948378+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:13.948604+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:14.948776+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356179968 unmapped: 43466752 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:15.948939+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:16.949146+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:17.949552+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356188160 unmapped: 43458560 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:18.949796+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:19.950003+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:20.950195+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:21.950375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356196352 unmapped: 43450368 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:23.105235+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:24.105462+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:25.105638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:26.105800+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356204544 unmapped: 43442176 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:27.105964+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:28.106165+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:29.106497+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:30.106785+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:31.107104+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:32.107284+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:33.107422+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:34.107682+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356212736 unmapped: 43433984 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:35.107910+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:36.108134+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:37.108339+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:38.108507+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:39.108819+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:40.109200+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:41.109364+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:42.109633+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356229120 unmapped: 43417600 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:43.109817+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:44.110182+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:45.110375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:46.110693+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:47.110875+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356237312 unmapped: 43409408 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:48.111198+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:49.111410+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:50.111570+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356245504 unmapped: 43401216 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:51.111742+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:52.111969+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:53.112201+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:54.112911+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:55.113061+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:56.113274+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356253696 unmapped: 43393024 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:57.113444+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356261888 unmapped: 43384832 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:58.113599+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356261888 unmapped: 43384832 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:51:59.113782+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:00.113965+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:01.114166+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:02.114319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356270080 unmapped: 43376640 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:03.114521+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:04.114739+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:05.114935+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:06.115152+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:07.115438+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:08.115634+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:09.115862+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:10.116032+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:11.116231+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:12.116444+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:13.116682+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c7fd1180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7ca866800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 ms_handle_reset con 0x55d7ca866800 session 0x55d7c4ebe1c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:14.116837+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356278272 unmapped: 43368448 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:15.117002+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:16.117216+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:17.117459+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:18.117703+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564011 data_alloc: 218103808 data_used: 145775
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:19.117928+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 heartbeat osd_stat(store_statfs(0x4e8f6e000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:20.119166+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:21.120210+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:22.121044+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c87f0c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 127.346313477s of 127.446342468s, submitted: 62
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356302848 unmapped: 43343872 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:23.121244+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 293 ms_handle_reset con 0x55d7c87f0c00 session 0x55d7c7439180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e8f70000/0x0/0x4ffc00000, data 0x1e01ab/0x37c000, compress 0x0/0x0/0x0, omap 0x66794, meta 0x168a986c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565624 data_alloc: 218103808 data_used: 145759
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:24.121933+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 293 heartbeat osd_stat(store_statfs(0x4e8f6c000/0x0/0x4ffc00000, data 0x1e1d68/0x37d000, compress 0x0/0x0/0x0, omap 0x66b36, meta 0x168a94ca), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:25.122525+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356352000 unmapped: 43294720 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c49cd400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:26.123029+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 356360192 unmapped: 43286528 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 ms_handle_reset con 0x55d7c49cd400 session 0x55d7c77876c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:27.123462+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:28.123927+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566996 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:29.124338+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:30.124666+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:31.124985+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:32.125274+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:33.125401+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 44793856 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3566996 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 heartbeat osd_stat(store_statfs(0x4e8f6b000/0x0/0x4ffc00000, data 0x1e3935/0x37f000, compress 0x0/0x0/0x0, omap 0x675e3, meta 0x168a8a1d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:34.125690+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.699525833s of 11.839026451s, submitted: 63
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:35.125857+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:36.126128+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:37.126311+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:38.126483+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 44752896 heap: 399646720 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3571508 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:39.126649+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e8f67000/0x0/0x4ffc00000, data 0x1e53c4/0x383000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:40.126858+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e7f69000/0x0/0x4ffc00000, data 0x11e53c4/0x1383000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:41.127039+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:42.127208+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 295 heartbeat osd_stat(store_statfs(0x4e7769000/0x0/0x4ffc00000, data 0x19e53c4/0x1b83000, compress 0x0/0x0/0x0, omap 0x67706, meta 0x168a88fa), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 61521920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 295 handle_osd_map epochs [295,296], i have 296, src has [1,296]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c764a8c0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:43.127343+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:44.127552+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:45.127705+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:46.128160+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 61513728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:47.128427+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 61505536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:48.128595+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 61505536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:49.128782+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:50.128977+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:51.129309+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:52.129509+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:53.129708+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:54.129986+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61497344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:55.130148+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:56.130360+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:57.130622+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:58.130816+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:52:59.131010+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61489152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:00.131191+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:01.131362+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:02.131631+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:03.131826+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:04.132168+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:05.132351+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 61480960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:06.132540+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:07.132785+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:08.132981+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:09.133154+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 61472768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:10.133337+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:11.133516+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:12.133749+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:13.133990+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:14.134224+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:15.134410+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:16.134577+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:17.134758+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354975744 unmapped: 61456384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:18.134947+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:19.135195+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:20.135416+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:21.135664+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:22.135989+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 61448192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:23.136190+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:24.136440+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:25.136637+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:26.136823+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 61440000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:27.136975+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:28.137138+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:29.137269+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:30.137438+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 61431808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:31.137646+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:32.137917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:33.138123+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:34.138404+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 61423616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:35.138585+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:36.138832+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:37.139005+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:38.139153+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:39.139381+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:40.139657+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:41.139838+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:42.140095+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355024896 unmapped: 61407232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:43.140244+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:44.140397+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:45.140621+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:46.140849+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2352 writes, 8930 keys, 2352 commit groups, 1.0 writes per commit group, ingest: 9.76 MB, 0.02 MB/s
                                           Interval WAL: 2353 writes, 958 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:47.141187+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:48.141407+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:49.141655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:50.141839+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355033088 unmapped: 61399040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:51.142003+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:52.142189+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:53.142349+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:54.142558+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355041280 unmapped: 61390848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:55.142783+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 61382656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:56.143028+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355049472 unmapped: 61382656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:57.143247+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 61374464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:58.143463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355057664 unmapped: 61374464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:53:59.143664+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:00.143900+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:01.144201+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:02.144413+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:03.144604+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:04.144811+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:05.145023+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:06.145253+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355065856 unmapped: 61366272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:07.145439+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:08.145648+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:09.145849+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355074048 unmapped: 61358080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:10.146091+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355082240 unmapped: 61349888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:11.146375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:12.146769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:13.146978+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:14.147244+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355090432 unmapped: 61341696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:15.147466+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:16.147831+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:17.148037+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:18.148352+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:19.148604+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:20.148882+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:21.149043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:22.149260+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355098624 unmapped: 61333504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:23.149409+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:24.149675+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:25.149866+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:26.150150+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:27.150355+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:28.150560+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:29.150741+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355115008 unmapped: 61317120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3700666 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:30.150881+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 115.903709412s of 116.008216858s, submitted: 15
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 61284352 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 heartbeat osd_stat(store_statfs(0x4e7763000/0x0/0x4ffc00000, data 0x19e6f86/0x1b87000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:31.151035+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 297 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c49fb340
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e6f64000/0x0/0x4ffc00000, data 0x21e6fa9/0x2388000, compress 0x0/0x0/0x0, omap 0x677bf, meta 0x168a8841), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:32.151175+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355172352 unmapped: 61259776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7022800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:33.151283+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355123200 unmapped: 61308928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 297 heartbeat osd_stat(store_statfs(0x4e5f61000/0x0/0x4ffc00000, data 0x31e8b45/0x338b000, compress 0x0/0x0/0x0, omap 0x67dca, meta 0x168a8236), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 ms_handle_reset con 0x55d7c7022800 session 0x55d7c9f3ca80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:34.151421+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834415 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:35.151585+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:36.151727+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:37.151882+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:38.152047+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:39.152423+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834415 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:40.152570+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355139584 unmapped: 61292544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:41.152667+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355147776 unmapped: 61284352 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:42.152767+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:43.152894+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355155968 unmapped: 61276160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.499187469s of 13.609086990s, submitted: 18
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5b000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:44.153050+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:45.153232+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:46.153391+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:47.153545+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 61251584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:48.153704+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355188736 unmapped: 61243392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:49.153927+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355237888 unmapped: 61194240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:50.154179+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:51.154319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:52.154477+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:53.154713+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:54.154961+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:55.155162+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:56.155315+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:57.155433+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:58.155595+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:54:59.155780+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:00.155981+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:01.156369+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:02.156740+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355246080 unmapped: 61186048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:03.157412+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:04.158309+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833551 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 heartbeat osd_stat(store_statfs(0x4e5f5d000/0x0/0x4ffc00000, data 0x31ea704/0x338f000, compress 0x0/0x0/0x0, omap 0x67e84, meta 0x168a817c), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:05.158499+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cf400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.959863663s of 22.148071289s, submitted: 106
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:06.159478+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355254272 unmapped: 61177856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 299 ms_handle_reset con 0x55d7c79cf400 session 0x55d7c75e2700
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:07.159783+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:08.160104+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:09.160369+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e775a000/0x0/0x4ffc00000, data 0x19ec2be/0x1b90000, compress 0x0/0x0/0x0, omap 0x68226, meta 0x168a7dda), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715014 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:10.160552+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355270656 unmapped: 61161472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:11.160986+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 61153280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:12.161420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 61153280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:13.161713+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 299 heartbeat osd_stat(store_statfs(0x4e775a000/0x0/0x4ffc00000, data 0x19ec2be/0x1b90000, compress 0x0/0x0/0x0, omap 0x68226, meta 0x168a7dda), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:14.162455+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:15.162734+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:16.163660+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 61145088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:17.164300+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 61136896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:18.164638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355295232 unmapped: 61136896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:19.164963+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:20.165725+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:21.165930+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:22.166479+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:23.166788+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:24.167028+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:25.167427+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:26.167712+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355303424 unmapped: 61128704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:27.167925+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:28.168211+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:29.168412+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:30.168705+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:31.169106+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:32.169556+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:33.169731+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:34.170001+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355319808 unmapped: 61112320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:35.170254+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:36.170444+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:37.170655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:38.170862+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:39.171051+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:40.171211+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:41.171432+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:42.171635+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 61095936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:43.171924+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:44.172149+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:45.172376+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:46.172560+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:47.172735+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:48.172881+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:49.173124+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:50.173335+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 61087744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:51.173498+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:52.173613+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:53.173783+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355360768 unmapped: 61071360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:54.174020+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:55.174131+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:56.174274+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:57.174457+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:58.174689+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355368960 unmapped: 61063168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:55:59.174913+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:00.175059+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:01.175227+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:02.175403+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:03.175554+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:04.175768+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:05.175917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:06.176097+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 61054976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:07.176253+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:08.176404+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:09.176565+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 61046784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:10.176774+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:11.177050+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:12.177278+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:13.177440+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:14.177626+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 61038592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:15.177794+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:16.177920+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:17.178162+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:18.178353+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:19.178518+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:20.178697+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:21.178827+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:22.178960+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 61030400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:23.179156+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:24.179362+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:25.179557+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:26.179824+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:27.180028+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:28.180181+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:29.180366+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 61014016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:30.180619+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 61005824 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:31.180779+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:32.180973+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:33.181174+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:34.181381+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:35.181590+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:36.181751+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:37.181912+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:38.182047+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355434496 unmapped: 60997632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:39.182213+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:40.182356+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:41.182498+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:42.182628+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:43.182773+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:44.182987+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:45.183148+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 60981248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:46.183369+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:47.183574+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:48.183769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:49.183933+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:50.184111+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:51.184265+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:52.184458+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:53.184648+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 60973056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:54.184837+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:55.185013+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:56.185207+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:57.185377+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:58.185564+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:56:59.185717+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:00.185885+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:01.186049+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 60948480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:02.186225+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:03.186412+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:04.186655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:05.186766+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:06.186962+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:07.187160+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:08.187341+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:09.187503+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:10.187658+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355491840 unmapped: 60940288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:11.187870+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:12.188105+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:13.188320+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:14.188524+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355508224 unmapped: 60923904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:15.188740+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:16.188898+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:17.189043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:18.189219+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355516416 unmapped: 60915712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:19.189485+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:20.189692+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:21.189897+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:22.190141+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:23.190376+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:24.190577+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:25.190725+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:26.190929+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355524608 unmapped: 60907520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:27.191139+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:28.191302+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:29.191434+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:30.191593+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:31.191795+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:32.191986+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:33.192216+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:34.192436+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355540992 unmapped: 60891136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:35.192732+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:36.192935+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:37.193159+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:38.193416+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:39.193650+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355549184 unmapped: 60882944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:40.193847+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:41.194023+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:42.194660+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355557376 unmapped: 60874752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:43.195143+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:44.195733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:45.195923+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355565568 unmapped: 60866560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:46.196277+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:47.196560+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:48.196843+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:49.197137+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:50.197420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355573760 unmapped: 60858368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:51.197611+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:52.197903+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:53.198131+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:54.198365+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355581952 unmapped: 60850176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:55.198531+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:56.198773+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:57.198984+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:58.199222+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355590144 unmapped: 60841984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:57:59.199424+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:00.199677+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:01.199861+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:02.200138+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:03.200397+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:04.200700+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:05.200942+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:06.201198+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355598336 unmapped: 60833792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:07.201362+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:08.201544+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:09.201772+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:10.201994+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355614720 unmapped: 60817408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:11.202132+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:12.202269+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:13.202418+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:14.206382+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355622912 unmapped: 60809216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:15.206545+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:16.206755+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:17.207352+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:18.208111+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:19.208725+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:20.208907+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355631104 unmapped: 60801024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:21.209376+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 60792832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:22.209803+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355639296 unmapped: 60792832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:23.210205+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 60784640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:24.210638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355647488 unmapped: 60784640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:25.211002+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:26.211363+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:27.211644+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:28.211833+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:29.212169+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:30.212443+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355655680 unmapped: 60776448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:31.212650+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:32.212870+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:33.213015+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:34.213232+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:35.213455+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355663872 unmapped: 60768256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:36.213617+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:37.213818+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:38.214024+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355672064 unmapped: 60760064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:39.214208+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:40.214458+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:41.214680+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355680256 unmapped: 60751872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:42.214832+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:43.215016+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:44.215270+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:45.215469+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:46.215678+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355688448 unmapped: 60743680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:47.215923+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355696640 unmapped: 60735488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:48.216197+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355696640 unmapped: 60735488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:49.216368+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355704832 unmapped: 60727296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:50.216523+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:51.216767+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:52.216962+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:53.217210+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:54.217439+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:55.217710+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:56.217920+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:57.218097+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355713024 unmapped: 60719104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:58.218250+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:58:59.218461+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:00.218703+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:01.218858+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:02.219041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355721216 unmapped: 60710912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:03.219230+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:04.219449+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:05.219612+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:06.219718+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:07.219940+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:08.220103+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:09.220341+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:10.220533+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355737600 unmapped: 60694528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:11.220713+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:12.220869+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:13.221116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:14.221330+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:15.221523+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:16.221686+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:17.221899+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355745792 unmapped: 60686336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:18.222106+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355753984 unmapped: 60678144 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:19.222995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 60661760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:20.223301+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355770368 unmapped: 60661760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:21.223860+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:22.227662+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:23.229839+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:24.230371+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:25.232674+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:26.232855+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 60653568 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:27.234151+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:28.234322+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:29.234804+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 355786752 unmapped: 60645376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:30.234995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:31.235936+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:32.236178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:33.236929+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:34.237171+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:35.237801+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:36.237967+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:37.238442+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:38.238632+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:39.238959+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:40.239189+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:41.239385+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:42.239575+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:43.239725+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:44.239951+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:45.240188+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:46.240385+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:47.240548+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:48.240711+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:49.240869+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:50.241011+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:51.241236+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:52.241413+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:53.241661+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351215616 unmapped: 65216512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:54.242012+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351223808 unmapped: 65208320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:55.242204+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:56.242337+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:57.242583+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:58.242740+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351232000 unmapped: 65200128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T10:59:59.242872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:00.243011+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:01.243194+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:02.243348+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:03.243613+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:04.243842+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351248384 unmapped: 65183744 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:05.243981+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:06.244127+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:07.244260+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:08.244417+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:09.244612+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:10.244772+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 65167360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:11.244956+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:12.245132+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:13.245288+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351272960 unmapped: 65159168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:14.245456+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:15.245589+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:16.245736+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:17.245870+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:18.245972+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351289344 unmapped: 65142784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:19.246135+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:20.246327+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:21.246505+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351297536 unmapped: 65134592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:22.246633+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:23.246760+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:24.247014+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:25.247157+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:26.247319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:27.247460+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:28.247636+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:29.247778+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351305728 unmapped: 65126400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:30.247909+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:31.248054+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:32.248229+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:33.248419+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:34.248613+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:35.248770+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:36.248968+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:37.249133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351322112 unmapped: 65110016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:38.249297+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:39.249541+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:40.249711+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:41.249914+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:42.250055+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:43.250255+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:44.250593+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:45.250751+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:46.250923+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351338496 unmapped: 65093632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:47.251112+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:48.251280+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:49.251461+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:50.251649+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351346688 unmapped: 65085440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:51.251860+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:52.252043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:53.252270+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:54.252511+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351354880 unmapped: 65077248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:55.252649+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 65060864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:56.252810+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351371264 unmapped: 65060864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:57.252967+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:58.253119+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:00:59.253269+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:00.253442+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:01.253629+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:02.253822+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 65052672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:03.254108+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:04.254310+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:05.254471+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:06.254655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:07.254891+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:08.255167+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351395840 unmapped: 65036288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:09.255340+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 65028096 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:10.255529+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351404032 unmapped: 65028096 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717788 data_alloc: 218103808 data_used: 149785
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:11.255760+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 65011712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:12.255981+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 65011712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 heartbeat osd_stat(store_statfs(0x4e7757000/0x0/0x4ffc00000, data 0x19edd3d/0x1b93000, compress 0x0/0x0/0x0, omap 0x688a3, meta 0x168a775d), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c6c94400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 366.807525635s of 366.853698730s, submitted: 29
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 300 handle_osd_map epochs [300,301], i have 301, src has [1,301]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 301 ms_handle_reset con 0x55d7c6c94400 session 0x55d7c6e9d880
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:13.256203+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:14.256415+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7f54000/0x0/0x4ffc00000, data 0x11ef90a/0x1395000, compress 0x0/0x0/0x0, omap 0x68c4a, meta 0x168a73b6), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4d48c00
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:15.256697+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 351453184 unmapped: 64978944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 301 heartbeat osd_stat(store_statfs(0x4e7f54000/0x0/0x4ffc00000, data 0x11ef90a/0x1395000, compress 0x0/0x0/0x0, omap 0x68c4a, meta 0x168a73b6), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 302 ms_handle_reset con 0x55d7c4d48c00 session 0x55d7c87bb180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601434 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:16.256907+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:17.257116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:18.257291+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348831744 unmapped: 67600384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 302 heartbeat osd_stat(store_statfs(0x4e8f53000/0x0/0x4ffc00000, data 0x1f14d7/0x397000, compress 0x0/0x0/0x0, omap 0x68ff1, meta 0x168a700f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:19.257474+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 67592192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 303 heartbeat osd_stat(store_statfs(0x4e8f50000/0x0/0x4ffc00000, data 0x1f2f72/0x39a000, compress 0x0/0x0/0x0, omap 0x69116, meta 0x168a6eea), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:20.257664+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348839936 unmapped: 67592192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:21.257817+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3604208 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4db3800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 304 ms_handle_reset con 0x55d7c4db3800 session 0x55d7c4b26a80
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348856320 unmapped: 67575808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:22.257966+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348864512 unmapped: 67567616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:23.258142+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 67559424 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:24.258375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4c000/0x0/0x4ffc00000, data 0x1f4b4d/0x39e000, compress 0x0/0x0/0x0, omap 0x691d1, meta 0x168a6e2f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348872704 unmapped: 67559424 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 304 heartbeat osd_stat(store_statfs(0x4e8f4c000/0x0/0x4ffc00000, data 0x1f4b4d/0x39e000, compress 0x0/0x0/0x0, omap 0x691d1, meta 0x168a6e2f), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.871269226s of 11.983715057s, submitted: 57
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:25.258594+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 67502080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:26.258800+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348930048 unmapped: 67502080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:27.258932+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:28.259191+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:29.259337+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348946432 unmapped: 67485696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:30.259476+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:31.259610+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:32.259741+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:33.259879+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:34.260140+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 67477504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:35.260329+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:36.260520+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:37.260706+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:38.260911+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:39.261121+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:40.261271+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:41.261471+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:42.261608+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348971008 unmapped: 67461120 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:43.261811+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:44.262589+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:45.262740+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:46.262895+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:47.263103+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348979200 unmapped: 67452928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:48.263261+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:49.263424+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:50.263608+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67444736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:51.263769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349011968 unmapped: 67420160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:52.263967+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:53.264136+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:54.264361+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:55.264533+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:56.264711+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:57.264927+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:58.265149+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:01:59.265341+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:00.265485+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349020160 unmapped: 67411968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:01.265646+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 67403776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:02.265768+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349036544 unmapped: 67395584 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:03.265864+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:04.266041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:05.266237+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:06.266388+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349044736 unmapped: 67387392 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:07.266564+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:08.266701+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 heartbeat osd_stat(store_statfs(0x4e8f49000/0x0/0x4ffc00000, data 0x1f65cc/0x3a1000, compress 0x0/0x0/0x0, omap 0x6985c, meta 0x168a67a4), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:09.266843+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:10.266988+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:11.267149+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611352 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c7022800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.325626373s of 47.337322235s, submitted: 10
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:12.267286+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349052928 unmapped: 67379200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:13.267474+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 306 ms_handle_reset con 0x55d7c7022800 session 0x55d7c78c1180
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 67338240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:14.267666+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349093888 unmapped: 67338240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 306 heartbeat osd_stat(store_statfs(0x4e8f47000/0x0/0x4ffc00000, data 0x1f8199/0x3a3000, compress 0x0/0x0/0x0, omap 0x69c08, meta 0x168a63f8), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:15.267853+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349102080 unmapped: 67330048 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:16.267988+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3613512 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:17.268153+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:18.268335+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349110272 unmapped: 67321856 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:19.268555+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 67305472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:20.268733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349126656 unmapped: 67305472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:21.268868+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 67297280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:22.269022+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349134848 unmapped: 67297280 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:23.269171+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:24.269343+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:25.269543+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:26.269715+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:27.269880+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:28.270047+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _renew_subs
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:29.271024+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:30.271227+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349151232 unmapped: 67280896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:31.271429+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:32.272168+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:33.274658+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:34.274989+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349159424 unmapped: 67272704 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:35.275223+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:36.275431+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:37.275600+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:38.275779+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:39.275935+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:40.276193+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:41.276551+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:42.276917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:43.277202+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:44.277547+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:45.277739+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:46.277906+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349184000 unmapped: 67248128 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:47.278107+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:48.278265+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:49.278428+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:50.278623+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:51.278841+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:52.279133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:53.279414+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:54.279651+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349192192 unmapped: 67239936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:55.279877+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:56.280108+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:57.280336+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349208576 unmapped: 67223552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:58.280565+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:02:59.280757+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:00.280888+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349216768 unmapped: 67215360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:01.281117+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 67207168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:02.281323+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349224960 unmapped: 67207168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:03.281514+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:04.281707+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:05.281863+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:06.281968+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349233152 unmapped: 67198976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:07.282165+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:08.282356+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:09.282503+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:10.282679+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349241344 unmapped: 67190784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:11.282813+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:12.282988+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:13.284611+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:14.284819+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:15.285022+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:16.285186+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:17.285318+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:18.285493+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349257728 unmapped: 67174400 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:19.285685+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 67166208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:20.285866+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 67166208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:21.286018+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:22.286152+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:23.286304+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:24.286494+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:25.286638+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:26.286832+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349274112 unmapped: 67158016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:27.287021+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349282304 unmapped: 67149824 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:28.287139+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:29.287267+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:30.287388+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:31.287530+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:32.287661+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:33.329916+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:34.330116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349298688 unmapped: 67133440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:35.330252+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349315072 unmapped: 67117056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:36.330410+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:37.330558+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:38.330691+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349323264 unmapped: 67108864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:39.330872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349331456 unmapped: 67100672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:40.331437+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349167616 unmapped: 67264512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:41.331595+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349249536 unmapped: 67182592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:42.332136+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf dump' '{prefix=perf dump}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf schema' '{prefix=perf schema}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349380608 unmapped: 67051520 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:43.332277+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 67043328 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:44.332438+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 67043328 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:45.332598+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349388800 unmapped: 67043328 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:46.332741+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.69 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 456 writes, 1331 keys, 456 commit groups, 1.0 writes per commit group, ingest: 0.61 MB, 0.00 MB/s
                                           Interval WAL: 456 writes, 212 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 67035136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:47.332974+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349396992 unmapped: 67035136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets getting new tickets!
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:48.333260+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _finish_auth 0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:48.334455+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 67026944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:49.333420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349405184 unmapped: 67026944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:50.333759+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 67018752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:51.333949+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 67018752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:52.334125+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349413376 unmapped: 67018752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:53.334283+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 67010560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:54.334452+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 67010560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:55.334584+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 67010560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:56.334726+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349421568 unmapped: 67010560 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:57.334896+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc ms_handle_reset ms_handle_reset con 0x55d7ca867800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1095581968
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1095581968,v1:192.168.122.100:6801/1095581968]
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: get_auth_request con 0x55d7c6c94400 auth_method 0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: mgrc handle_mgr_configure stats_period=5
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 66994176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:58.335050+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 66994176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:03:59.335184+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349437952 unmapped: 66994176 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:00.335306+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 ms_handle_reset con 0x55d7c4b7e800 session 0x55d7c4c86c40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c79cf400
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 66985984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:01.335463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 66985984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:02.335622+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 66985984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:03.335753+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 66985984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:04.335905+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 66985984 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:05.336043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349454336 unmapped: 66977792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:06.336154+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 66961408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:07.336287+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 66961408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:08.336496+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 66961408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:09.336659+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 66961408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:10.336829+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 66953216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:11.337023+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 66953216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:12.337246+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 66953216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:13.337609+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 66953216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:14.337812+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 66945024 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:15.337994+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:16.338142+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:17.338287+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:18.338439+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:19.338586+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:20.338777+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:21.338959+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349495296 unmapped: 66936832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:22.339198+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:23.339404+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:24.339617+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:25.339774+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:26.339917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:27.340110+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:28.340257+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:29.340451+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349511680 unmapped: 66920448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:30.340644+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:31.340832+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:32.341000+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:33.341151+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:34.341347+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:35.341549+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 ms_handle_reset con 0x55d7c4b7e000 session 0x55d7c4b27a40
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: handle_auth_request added challenge on 0x55d7c4b7e800
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:36.341711+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 66904064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:37.341876+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349536256 unmapped: 66895872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:38.342039+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 66887680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:39.342289+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 66887680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:40.342488+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 66887680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:41.342681+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616286 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f44000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 66879488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:42.342897+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349552640 unmapped: 66879488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:43.343039+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 151.738906860s of 151.782104492s, submitted: 24
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349577216 unmapped: 66854912 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:44.343223+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 66846720 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:45.343445+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 66846720 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:46.343607+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 66846720 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:47.343735+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 66846720 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:48.343838+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349585408 unmapped: 66846720 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:49.343964+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:50.344137+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:51.344361+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:52.344564+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:53.344834+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:54.345097+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:55.345376+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:56.345541+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:57.345735+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:58.345962+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:04:59.346130+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349708288 unmapped: 66723840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:00.346364+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 66715648 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:01.346578+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 66715648 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:02.346736+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 66715648 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:03.346978+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:04.347162+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:05.347339+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:06.347520+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:07.347684+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:08.347929+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:09.348213+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:10.348462+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349724672 unmapped: 66707456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:11.348691+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349732864 unmapped: 66699264 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:12.348871+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349732864 unmapped: 66699264 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:13.349094+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349732864 unmapped: 66699264 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:14.349326+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349732864 unmapped: 66699264 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:15.349520+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:16.349743+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:17.349987+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:18.350163+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:19.350397+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:20.350615+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:21.350758+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349741056 unmapped: 66691072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:22.350901+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 66682880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:23.351114+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 66682880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:24.351368+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 66682880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:25.351568+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 66682880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:26.351719+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349749248 unmapped: 66682880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:27.351902+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:28.352088+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:29.352258+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:30.352414+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:31.352603+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:32.352766+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:33.352938+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:34.353175+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349757440 unmapped: 66674688 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:35.353351+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 66666496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:36.353586+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 66666496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:37.353769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349765632 unmapped: 66666496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:38.353973+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:39.354589+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:40.354777+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:41.354926+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:42.355045+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:43.355237+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349773824 unmapped: 66658304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:44.355471+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:45.355709+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:46.356012+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:47.356236+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:48.356511+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:49.356639+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:50.356853+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349782016 unmapped: 66650112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:51.356968+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:52.357101+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:53.357246+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:54.357491+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:55.357634+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:56.357766+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:57.357930+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349790208 unmapped: 66641920 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:58.358125+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349798400 unmapped: 66633728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:05:59.358312+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349798400 unmapped: 66633728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:00.358615+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349798400 unmapped: 66633728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:01.358830+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:02.359121+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:03.359389+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:04.359758+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:05.360014+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:06.360347+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349814784 unmapped: 66617344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:07.360564+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:08.360851+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:09.361238+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:10.361467+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:11.361683+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:12.361965+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:13.362178+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:14.362394+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 66609152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:15.362577+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 66600960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:16.362733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 66600960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:17.363041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349831168 unmapped: 66600960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:18.363330+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 66592768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:19.363550+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 66592768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:20.363802+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 66592768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:21.363993+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 66592768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:22.364170+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349839360 unmapped: 66592768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:23.364317+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:24.365227+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:25.365423+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:26.365554+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:27.365708+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:28.365881+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:29.366087+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:30.366275+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349855744 unmapped: 66576384 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:31.366429+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:32.366610+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:33.366782+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:34.367130+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:35.367266+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:36.367504+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:37.367782+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349863936 unmapped: 66568192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:38.368046+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349872128 unmapped: 66560000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:39.368316+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:40.368513+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:41.368668+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:42.368866+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:43.369008+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:44.369184+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:45.369374+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349888512 unmapped: 66543616 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:46.369574+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349896704 unmapped: 66535424 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:47.369720+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:48.369884+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:49.370186+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:50.370370+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:51.370508+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:52.370681+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:53.370872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:54.371043+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349913088 unmapped: 66519040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:55.371265+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 66510848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:56.371639+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 66510848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:57.371990+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349921280 unmapped: 66510848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:58.372324+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 66502656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:06:59.372479+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 66502656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:00.372625+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 66502656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:01.372747+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 66502656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:02.372823+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349929472 unmapped: 66502656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:03.372983+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:04.373218+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:05.373364+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:06.373573+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:07.373813+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:08.374020+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:09.374228+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:10.374419+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349945856 unmapped: 66486272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:11.374599+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 66478080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:12.374787+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 66478080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:13.374968+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349954048 unmapped: 66478080 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:14.375151+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 66469888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:15.375324+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 66469888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:16.375510+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 66469888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:17.375711+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 66469888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:18.375942+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 66469888 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:19.376116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:20.376294+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:21.376479+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:22.376615+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:23.376758+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:24.377027+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:25.377173+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349970432 unmapped: 66461696 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:26.377315+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349978624 unmapped: 66453504 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:27.377559+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:28.377756+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:29.377979+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:30.378150+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:31.378259+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:32.378428+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:33.378604+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:34.378815+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 349986816 unmapped: 66445312 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:35.378983+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 66428928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:36.379165+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 66428928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:37.379335+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350003200 unmapped: 66428928 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:38.379527+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 66420736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:39.379733+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 66420736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:40.379917+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 66420736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:41.380165+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350011392 unmapped: 66420736 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:42.380352+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:43.380565+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:44.380780+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:45.380955+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:46.381214+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:47.381393+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:48.381605+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:49.381742+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350019584 unmapped: 66412544 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:50.382014+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 66396160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:51.382177+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 66396160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:52.382396+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 66396160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:53.382544+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350035968 unmapped: 66396160 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:54.382734+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 66387968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:55.382977+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 66387968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:56.383174+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 66387968 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:57.383403+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 66379776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:58.383601+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 66379776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:07:59.383736+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350052352 unmapped: 66379776 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:00.383932+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 66355200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:01.384136+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 66355200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:02.384294+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350076928 unmapped: 66355200 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:03.384420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 66347008 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:04.384647+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 66347008 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:05.384798+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350085120 unmapped: 66347008 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:06.384945+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:07.385111+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:08.385261+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:09.385506+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:10.385636+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:11.385725+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:12.385857+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:13.386020+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350093312 unmapped: 66338816 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:14.386239+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 66330624 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:15.386395+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 66330624 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:16.386530+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 66330624 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:17.386712+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350101504 unmapped: 66330624 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:18.386867+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:19.386954+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:20.387135+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:21.387305+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:22.387415+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:23.387569+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350109696 unmapped: 66322432 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:24.387762+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:25.387895+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:26.388020+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:27.388255+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:28.388452+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:29.388642+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350117888 unmapped: 66314240 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:30.388801+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 66289664 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:31.388965+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 66289664 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:32.389119+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 66289664 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:33.389326+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 66289664 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:34.389567+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350142464 unmapped: 66289664 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:35.389731+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 66281472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:36.389924+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 66281472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:37.390140+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:38.390307+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350150656 unmapped: 66281472 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:39.390469+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 66265088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:40.390658+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 66265088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:41.390836+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350167040 unmapped: 66265088 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:42.390983+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 66256896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:43.391133+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 66256896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:44.391403+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 66256896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:45.391562+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350175232 unmapped: 66256896 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:46.391687+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 66240512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:47.391825+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 66240512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:48.391987+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 66240512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:49.392183+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350191616 unmapped: 66240512 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:50.392299+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 66232320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:51.392420+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 66232320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:52.392592+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 66232320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:53.392781+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 66232320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:54.393032+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350199808 unmapped: 66232320 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:55.393232+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:56.393406+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:57.393563+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:58.393769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:08:59.394056+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:00.394356+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:01.394595+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:02.394812+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350216192 unmapped: 66215936 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:03.395025+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 66199552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:04.395289+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 66199552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:05.395472+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350232576 unmapped: 66199552 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:06.395701+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350240768 unmapped: 66191360 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:07.395885+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 66183168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:08.396101+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 66183168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:09.396247+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 66183168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:10.396415+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350248960 unmapped: 66183168 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:11.396588+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 66174976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:12.396742+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 66174976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:13.396964+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350257152 unmapped: 66174976 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:14.397185+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 66166784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:15.397355+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 66166784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:16.397484+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 66166784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:17.397640+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 66166784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:18.397751+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 66166784 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:19.398154+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:20.398463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:21.398732+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:22.398924+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:23.399116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:24.399387+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:25.399585+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:26.399761+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350273536 unmapped: 66158592 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:27.400030+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66142208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:28.400205+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350289920 unmapped: 66142208 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:29.400448+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:30.400608+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:31.400833+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:32.401056+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:33.401334+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:34.401716+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 66134016 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:35.401945+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350314496 unmapped: 66117632 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:36.402167+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350322688 unmapped: 66109440 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:37.402539+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:38.402755+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:39.402944+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:40.403126+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:41.403263+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:42.403428+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 66101248 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:43.403591+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66093056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:44.403780+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66093056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:45.403993+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 66093056 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:46.404183+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66084864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:47.404324+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66084864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:48.404500+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66084864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:49.404719+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66084864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:50.404865+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 66084864 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:51.405039+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66076672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:52.405191+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66076672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:53.405375+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66076672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:54.405593+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 66076672 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:55.405765+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:56.406015+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:57.406248+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:58.406543+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:09:59.406743+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:00.406951+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:01.407171+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350363648 unmapped: 66068480 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:02.407362+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350371840 unmapped: 66060288 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:03.407566+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350388224 unmapped: 66043904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:04.407822+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350388224 unmapped: 66043904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:05.408029+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350388224 unmapped: 66043904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:06.408343+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350388224 unmapped: 66043904 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:07.408567+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:08.408790+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:09.409052+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:10.409289+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:11.409483+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:12.409686+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:13.409872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:14.410207+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350396416 unmapped: 66035712 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:15.410462+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 66019328 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:16.410709+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350412800 unmapped: 66019328 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:17.410948+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350420992 unmapped: 66011136 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:18.411160+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 66002944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:19.411304+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 66002944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:20.411469+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 66002944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:21.411652+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 66002944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:22.411829+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350429184 unmapped: 66002944 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:23.412007+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 65994752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:24.412278+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 65994752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:25.412463+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350437376 unmapped: 65994752 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:26.412658+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 65978368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:27.412872+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 65978368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:28.413099+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 65978368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:29.413284+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 65978368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:30.413469+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350453760 unmapped: 65978368 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:31.413655+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 65953792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:32.413845+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 65953792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:33.414053+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 65953792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:34.414644+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 65953792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:35.414832+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350478336 unmapped: 65953792 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:36.415053+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 65945600 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:37.415314+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 65945600 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:38.415513+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350486528 unmapped: 65945600 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:39.415648+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:40.415788+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:41.415939+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:42.416194+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:43.416360+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:44.416756+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:45.416938+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350494720 unmapped: 65937408 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:46.417191+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350502912 unmapped: 65929216 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:47.417327+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:48.417461+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:49.417611+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:50.417822+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:51.417990+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:52.418155+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:53.418325+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:54.418507+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350519296 unmapped: 65912832 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:55.418690+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350527488 unmapped: 65904640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:56.418946+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350527488 unmapped: 65904640 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:57.419216+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 65896448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:58.419421+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 65896448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:10:59.419606+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 65896448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:00.419826+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350535680 unmapped: 65896448 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:01.420040+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 65888256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:02.420333+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350543872 unmapped: 65888256 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:03.420515+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 65880064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:04.420696+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350552064 unmapped: 65880064 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:05.420897+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350560256 unmapped: 65871872 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:06.421148+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 65863680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:07.421534+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 65863680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:08.421821+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 65863680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:09.422319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 65863680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:10.422741+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350568448 unmapped: 65863680 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:11.423210+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 65855488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:12.423575+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350576640 unmapped: 65855488 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:13.423844+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 65847296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:14.424168+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 65847296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:15.424428+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 65847296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:16.424617+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 65847296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:17.424823+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 65847296 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:18.425018+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:19.425452+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:20.425687+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:21.425998+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:22.426258+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:23.426487+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:24.426739+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:25.426915+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 65839104 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:26.427231+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 65814528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:27.427537+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 65814528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:28.427857+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 65814528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:29.428146+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350617600 unmapped: 65814528 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:30.428381+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 65806336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:31.428666+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 65806336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:32.428831+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 65806336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:33.429184+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350625792 unmapped: 65806336 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:34.429473+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:35.429646+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:36.429864+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:37.430107+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:38.430310+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:39.430492+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:40.430659+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:41.430855+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350650368 unmapped: 65781760 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:42.431041+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:43.431267+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:44.431508+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:45.431697+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:46.431893+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:47.432153+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:48.432402+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:49.432691+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350666752 unmapped: 65765376 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:50.432922+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350674944 unmapped: 65757184 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:51.433174+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350674944 unmapped: 65757184 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:52.433450+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:53.433612+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:54.433781+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:55.433898+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:56.434019+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:57.434243+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350683136 unmapped: 65748992 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:58.434358+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 65732608 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:11:59.434564+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 65732608 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:00.434732+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350707712 unmapped: 65724416 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:01.434858+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350707712 unmapped: 65724416 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:02.435045+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350707712 unmapped: 65724416 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:03.435232+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350715904 unmapped: 65716224 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:04.435440+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350715904 unmapped: 65716224 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:05.435653+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350715904 unmapped: 65716224 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:06.435939+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350724096 unmapped: 65708032 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:07.436140+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:08.436385+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:09.436612+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:10.436769+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:11.436948+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:12.437170+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:13.437347+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350732288 unmapped: 65699840 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:14.437575+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350740480 unmapped: 65691648 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:15.437787+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350740480 unmapped: 65691648 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:16.437999+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:17.438193+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:18.438374+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:19.438512+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:20.438631+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:21.438822+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:22.438974+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350748672 unmapped: 65683456 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:23.439756+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350765056 unmapped: 65667072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:24.439995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350765056 unmapped: 65667072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:25.440165+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350765056 unmapped: 65667072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:26.440296+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350765056 unmapped: 65667072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:27.440433+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350765056 unmapped: 65667072 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:28.440560+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350773248 unmapped: 65658880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:29.440682+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350773248 unmapped: 65658880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:30.440803+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350773248 unmapped: 65658880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:31.440995+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 65642496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:32.441150+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 65642496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:33.441298+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350789632 unmapped: 65642496 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:34.441489+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 65634304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:35.441645+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 65634304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:36.441818+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 65634304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:37.441972+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 65634304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:38.442132+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 65634304 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:39.442264+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:40.442474+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:41.442675+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:42.442859+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:43.443024+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:44.443244+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:45.443339+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:46.443453+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 65626112 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:47.443600+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 65609728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:48.443762+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 65609728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:49.443886+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 65609728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:50.444022+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 65609728 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:51.444123+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 65601536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:52.444303+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 65601536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:53.444429+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 65601536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:54.444641+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350830592 unmapped: 65601536 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:55.444820+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350838784 unmapped: 65593344 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:56.444945+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350846976 unmapped: 65585152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:57.445161+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350846976 unmapped: 65585152 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:58.445318+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 65576960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:12:59.445441+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 65576960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:00.445617+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 65576960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:01.445800+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 65576960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:02.445975+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 65576960 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:03.446143+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 65568768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:04.446480+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 65568768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:05.446667+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 65568768 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:06.446904+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 65560576 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:07.447170+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 65560576 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:08.447338+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 65560576 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:09.447526+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 65560576 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:10.447704+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350871552 unmapped: 65560576 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:11.447877+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350887936 unmapped: 65544192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:12.448185+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350887936 unmapped: 65544192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:13.448401+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350887936 unmapped: 65544192 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:14.448610+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:15.449290+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:16.450281+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:17.450777+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:18.451871+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:19.452122+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:20.452440+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:21.452836+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:22.453050+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:23.453354+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:24.453593+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:25.453788+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:26.453952+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350904320 unmapped: 65527808 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:27.454218+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 65503232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:28.454367+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 65503232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:29.454487+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 65503232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:30.454643+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350928896 unmapped: 65503232 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:31.454969+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 65495040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:32.455116+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 65495040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:33.455276+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 65495040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:34.455428+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 65495040 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:35.455573+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 65486848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:36.455718+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 65486848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:37.455855+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 65486848 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:38.456319+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 65478656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:39.456487+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350953472 unmapped: 65478656 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:40.456629+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 65470464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:41.456887+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 65470464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:42.457033+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350961664 unmapped: 65470464 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:43.457139+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 65462272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 28 11:14:17 compute-0 ceph-osd[87202]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3615566 data_alloc: 218103808 data_used: 153846
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:44.457314+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350969856 unmapped: 65462272 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}'
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:45.457460+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 65536000 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:46.457599+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:17 compute-0 ceph-osd[87202]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 385 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313ba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7c313b8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 28 11:14:17 compute-0 ceph-osd[87202]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e8f46000/0x0/0x4ffc00000, data 0x1f9c18/0x3a6000, compress 0x0/0x0/0x0, omap 0x69d2e, meta 0x168a62d2), peers [1,2] op hist [])
Feb 28 11:14:17 compute-0 ceph-osd[87202]: prioritycache tune_memory target: 4294967296 mapped: 350773248 unmapped: 65658880 heap: 416432128 old mem: 2845415832 new mem: 2845415832
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: tick
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_tickets
Feb 28 11:14:17 compute-0 ceph-osd[87202]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-28T11:13:47.457822+0000)
Feb 28 11:14:17 compute-0 ceph-osd[87202]: do_command 'log dump' '{prefix=log dump}'
Feb 28 11:14:17 compute-0 rsyslogd[1017]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 28 11:14:18 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23402 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} v 0)
Feb 28 11:14:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 28 11:14:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646780427' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 28 11:14:18 compute-0 nova_compute[243452]: 2026-02-28 11:14:18.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:18 compute-0 nova_compute[243452]: 2026-02-28 11:14:18.317 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:18 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:18 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23404 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: from='client.23396 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: from='client.23402 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: from='mgr.14122 192.168.122.100:0/2289429451' entity='mgr.compute-0.izimmo' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gzepcj", "name": "rgw_frontends"} : dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/646780427' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 28 11:14:18 compute-0 ceph-mon[76304]: pgmap v3321: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:18 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Feb 28 11:14:18 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145241580' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 28 11:14:19 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23408 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 28 11:14:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007103772' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.451 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.453 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.454 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 28 11:14:19 compute-0 nova_compute[243452]: 2026-02-28 11:14:19.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:19 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23412 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: from='client.23404 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3145241580' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: from='client.23408 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2007103772' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 28 11:14:19 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 28 11:14:19 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253096419' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 28 11:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 28 11:14:20 compute-0 systemd[1]: Starting Hostname Service...
Feb 28 11:14:20 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:20 compute-0 systemd[1]: Started Hostname Service.
Feb 28 11:14:20 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 28 11:14:20 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074154143' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='client.23412 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1253096419' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb 28 11:14:20 compute-0 ceph-mon[76304]: pgmap v3322: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:20 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2074154143' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 28 11:14:21 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23426 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:21 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 28 11:14:21 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783207057' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 28 11:14:21 compute-0 ceph-mon[76304]: from='client.23426 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:21 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2783207057' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 28 11:14:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Feb 28 11:14:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607266541' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 28 11:14:22 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:22 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 28 11:14:22 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507231655' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 28 11:14:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3607266541' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 28 11:14:22 compute-0 ceph-mon[76304]: pgmap v3323: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:22 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2507231655' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 28 11:14:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:22 compute-0 ceph-osd[88267]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.6 total, 600.0 interval
                                           Cumulative writes: 49K writes, 190K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.66 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 347 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.88              0.00         1    0.885       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.9 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.182       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.182       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.182       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcda30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcda30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.122       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.122       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.122       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcda30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.13              0.00         1    0.133       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.13              0.00         1    0.133       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.13              0.00         1    0.133       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.6 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562ffdfcd8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 28 11:14:23 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 28 11:14:23 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228650021' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 28 11:14:23 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23436 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:23 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2228650021' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 28 11:14:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb 28 11:14:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446635205' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.316 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.356 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.357 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.358 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.358 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.359 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:14:24 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.456 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:24 compute-0 ceph-mon[76304]: from='client.23436 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:24 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/446635205' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Feb 28 11:14:24 compute-0 ceph-mon[76304]: pgmap v3324: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb 28 11:14:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405038233' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 28 11:14:24 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:14:24 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032109613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:14:24 compute-0 nova_compute[243452]: 2026-02-28 11:14:24.951 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.103 243456 WARNING nova.virt.libvirt.driver [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.104 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3415MB free_disk=59.987354160286486GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.104 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.105 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.181 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.182 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.207 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 28 11:14:25 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23444 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 28 11:14:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025070826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:14:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3405038233' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Feb 28 11:14:25 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1032109613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:14:25 compute-0 ceph-mon[76304]: from='client.23444 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.799 243456 DEBUG oslo_concurrency.processutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.805 243456 DEBUG nova.compute.provider_tree [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed in ProviderTree for provider: 1ab5ec33-a817-4b85-91a2-974557eeabfb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 28 11:14:25 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Feb 28 11:14:25 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114932571' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.831 243456 DEBUG nova.scheduler.client.report [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Inventory has not changed for provider 1ab5ec33-a817-4b85-91a2-974557eeabfb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.834 243456 DEBUG nova.compute.resource_tracker [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 28 11:14:25 compute-0 nova_compute[243452]: 2026-02-28 11:14:25.834 243456 DEBUG oslo_concurrency.lockutils [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 28 11:14:26 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23450 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:26 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:26 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23452 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:26 compute-0 nova_compute[243452]: 2026-02-28 11:14:26.830 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/1025070826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 28 11:14:26 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4114932571' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Feb 28 11:14:26 compute-0 ceph-mon[76304]: from='client.23450 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:26 compute-0 ceph-mon[76304]: pgmap v3325: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Feb 28 11:14:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159447243' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 28 11:14:27 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Feb 28 11:14:27 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176861287' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Feb 28 11:14:27 compute-0 ceph-mon[76304]: from='client.23452 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3159447243' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Feb 28 11:14:27 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3176861287' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23458 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23460 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5454513684934942e-05 of space, bias 1.0, pg target 0.0046363541054804825 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.899822393338988e-06 of space, bias 1.0, pg target 0.0014699467180016965 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.272876705470989e-07 of space, bias 4.0, pg target 0.0008727452046565186 quantized to 16 (current 16)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 28 11:14:28 compute-0 ceph-mgr[76610]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 28 11:14:28 compute-0 ceph-mon[76304]: from='client.23458 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:28 compute-0 ceph-mon[76304]: pgmap v3326: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 28 11:14:28 compute-0 ceph-osd[89322]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.68 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 353 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557684555a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557684555a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557684555a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576845558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Feb 28 11:14:28 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Feb 28 11:14:28 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735349980' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Optimize plan auto_2026-02-28_11:14:29
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] do_upmap
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'volumes']
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: [balancer INFO root] prepared 0/10 upmap changes
Feb 28 11:14:29 compute-0 nova_compute[243452]: 2026-02-28 11:14:29.315 243456 DEBUG oslo_service.periodic_task [None req-4d65788f-917b-4c3d-8bb4-4f3d5dfae35b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 28 11:14:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Feb 28 11:14:29 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241826767' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Feb 28 11:14:29 compute-0 nova_compute[243452]: 2026-02-28 11:14:29.457 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:29 compute-0 nova_compute[243452]: 2026-02-28 11:14:29.461 243456 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 28 11:14:29 compute-0 ceph-mon[76304]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.480448) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269480519, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 606, "num_deletes": 257, "total_data_size": 494311, "memory_usage": 507176, "flush_reason": "Manual Compaction"}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269484034, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 489294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69796, "largest_seqno": 70401, "table_properties": {"data_size": 485876, "index_size": 1197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8867, "raw_average_key_size": 19, "raw_value_size": 478680, "raw_average_value_size": 1073, "num_data_blocks": 52, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772277245, "oldest_key_time": 1772277245, "file_creation_time": 1772277269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 3622 microseconds, and 1548 cpu microseconds.
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.484082) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 489294 bytes OK
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.484098) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.485821) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.485834) EVENT_LOG_v1 {"time_micros": 1772277269485830, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.485855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 490748, prev total WAL file size 490748, number of live WAL files 2.
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.486264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303038' seq:72057594037927935, type:22 .. '6C6F676D0033323631' seq:0, type:0; will stop at (end)
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(477KB)], [164(10231KB)]
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269486327, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10966850, "oldest_snapshot_seqno": -1}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8923 keys, 10864870 bytes, temperature: kUnknown
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269544099, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10864870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10808742, "index_size": 32719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 234068, "raw_average_key_size": 26, "raw_value_size": 10652948, "raw_average_value_size": 1193, "num_data_blocks": 1271, "num_entries": 8923, "num_filter_entries": 8923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772271188, "oldest_key_time": 0, "file_creation_time": 1772277269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "291b26fc-4235-4453-9c96-97f924605147", "db_session_id": "K5U2AV5NM18630LWZTNR", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.544457) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10864870 bytes
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.546062) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 187.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(44.6) write-amplify(22.2) OK, records in: 9447, records dropped: 524 output_compression: NoCompression
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.546100) EVENT_LOG_v1 {"time_micros": 1772277269546089, "job": 102, "event": "compaction_finished", "compaction_time_micros": 57934, "compaction_time_cpu_micros": 25101, "output_level": 6, "num_output_files": 1, "total_output_size": 10864870, "num_input_records": 9447, "num_output_records": 8923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269546378, "job": 102, "event": "table_file_deletion", "file_number": 166}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772277269547447, "job": 102, "event": "table_file_deletion", "file_number": 164}
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.486171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.547591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.547603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.547606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.547609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mon[76304]: rocksdb: (Original Log Time 2026/02/28-11:14:29.547612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 28 11:14:29 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23466 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:29 compute-0 ceph-mon[76304]: from='client.23460 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/3735349980' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Feb 28 11:14:29 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/2241826767' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23468 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] scanning for idle connections..
Feb 28 11:14:30 compute-0 ceph-mgr[76610]: [volumes INFO mgr_util] cleaning up connections: []
Feb 28 11:14:30 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 28 11:14:30 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526029422' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:14:30 compute-0 ceph-mon[76304]: from='client.23466 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:30 compute-0 ceph-mon[76304]: from='client.23468 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 28 11:14:30 compute-0 ceph-mon[76304]: pgmap v3327: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:30 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/526029422' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [devicehealth INFO root] Check health
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 28 11:14:31 compute-0 ceph-mgr[76610]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 28 11:14:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Feb 28 11:14:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012068083' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Feb 28 11:14:31 compute-0 ceph-mon[76304]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Feb 28 11:14:31 compute-0 ceph-mon[76304]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/856170741' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Feb 28 11:14:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/4012068083' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Feb 28 11:14:31 compute-0 ceph-mon[76304]: from='client.? 192.168.122.100:0/856170741' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Feb 28 11:14:32 compute-0 ceph-mgr[76610]: log_channel(audit) log [DBG] : from='client.23476 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 28 11:14:32 compute-0 ceph-mgr[76610]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Feb 28 11:14:32 compute-0 ovs-appctl[415667]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb 28 11:14:32 compute-0 ovs-appctl[415672]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
